prompt | chosen | rejected
---|---|---
Question: I am making a forum that accesses threads based on the category in the URL using the GET method. I want to redirect to an error page if no parameters exist in the URL, but I want this to be a generic piece of code that can be used across my whole site.
For example:
The url would normally contain the category id:
```
localhost/myforum/threads.php?categoryid=1
```
I want it so that when the url is:
```
localhost/myforum/threads.php
```
it redirects to an error page, and I want this piece of code to be usable all around the website
Answer: | The most reliable way is to check if the URL contains a question mark:
```php
if (false !== strpos($_SERVER['REQUEST_URI'], '?')) {
// There is a query string (including cases when it's empty)
}
``` | Try:
```
$gets = parse_url($url);
if($gets['query'] == "")
{
echo "No GET variables";
}
``` |
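For reference, the `parse_url` idea can be sketched outside PHP as well; here is a rough Python analogue (the URLs below are illustrative) using the standard `urllib.parse` module:

```python
# Rough Python analogue of checking a URL for GET parameters before
# serving a page; redirect logic would go on the False branch.
from urllib.parse import urlparse, parse_qs

def has_query_params(url):
    """True if the URL carries at least one GET parameter."""
    return bool(parse_qs(urlparse(url).query))

has_query_params("http://localhost/myforum/threads.php?categoryid=1")  # True
has_query_params("http://localhost/myforum/threads.php")               # False
```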
Question: I am making a forum that accesses threads based on the category in the URL using the GET method. I want to redirect to an error page if no parameters exist in the URL, but I want this to be a generic piece of code that can be used across my whole site.
For example:
The url would normally contain the category id:
```
localhost/myforum/threads.php?categoryid=1
```
I want it so that when the url is:
```
localhost/myforum/threads.php
```
it redirects to an error page, and I want this piece of code to be usable all around the website
Answer: | The most reliable way is to check if the URL contains a question mark:
```php
if (false !== strpos($_SERVER['REQUEST_URI'], '?')) {
// There is a query string (including cases when it's empty)
}
``` | You can use `isset` to check if the parameter exists, like this:
```
isset($_GET)
``` |
Question: I have a 1x300 structure array called `struct` with 3 fields, but I'm using only the third field, called `way`. For each of the 300 lines, this field is a vector of indices.
Here is an example with 3 lines to explain my problem: I would like to search whether the last index of the first line is present in another vector (line) of the field `way`.
```
way
[491751 491750 491749 492772 493795 494819 495843 496867]
[491753 491754 491755 491756]
[492776 493800 494823 495847 496867]
```
I tried with intersect function :
```
Inter=intersect(struct(1).way(end), struct.way);
```
but Matlab returns me an error :
```
Error using intersect (line 80)
Too many input arguments.
Error in file2 (line 9)
Inter=intersect(struct(1).way(end), struct.way);
```
I don't understand why I get this error. Any explanation and/or other solutions?
Answer: | Let the data be defined as
```
st(1).way = [491751 491750 491749 492772 493795 494819 495843 496867];
st(2).way = [491753 491754 491755 491756];
st(3).way = [492776 493800 494823 495847 496867]; % define the data
sought = st(1).way(end);
```
---
If you want to know **which vectors** contain the desired value: pack all vectors into a cell array and pass that to `cellfun` with an anonymous function as follows:
```
ind = cellfun(@(x) ismember(sought, x), {st.way});
```
This gives:
```
ind =
1×3 logical array
1 0 1
```
---
If you want to know for each vector **the indices** of the matching: modify the anonymous function to output a cell with the indices:
```
ind = cellfun(@(x) {find(x==sought)}, {st.way});
```
or equivalently
```
ind = cellfun(@(x) find(x==sought), {st.way}, 'UniformOutput', false);
```
The result is:
```
ind =
1×3 cell array
[8] [1×0 double] [5]
```
Or, to exclude the reference vector:
```
n = 1; % index of vector whose final element is sought
ind = cellfun(@(x) {find(x==st(n).way(end))}, {st([1:n-1 n+1:end]).way});
``` | You probably want to use [`ismember`](https://uk.mathworks.com/help/matlab/ref/ismember.html).
Consider what you are passing to the `intersect`/`ismember` functions, too: `struct.way` expands to a comma-separated list, so it isn't a valid single argument (which is why `intersect` complains about too many inputs). You may need a loop to iterate over each line of your struct (in this case it would be easier to have a cell array, or a matrix with equal-length rows).
```
output = zeros(300);
for ii = 1:300
for jj = 1:300
if ii ~= jj && ismember(struct(ii).way(end), struct(jj).way)
output(ii,jj) = 1;
end
end
end
```
Now you have a matrix `output` where the elements which are 1 identify a match between the last element in way in the struct row `ii` and the vector `struct(jj).way`, where `ii` are the matrix row numbers and `jj` the column numbers. |
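The membership check itself is language-agnostic; here is a hypothetical Python sketch of the same idea, using the example data from the question (0-based positions, unlike MATLAB's 1-based indexing):

```python
# The three `way` vectors from the question, as Python lists.
ways = [
    [491751, 491750, 491749, 492772, 493795, 494819, 495843, 496867],
    [491753, 491754, 491755, 491756],
    [492776, 493800, 494823, 495847, 496867],
]
sought = ways[0][-1]  # last index of the first line (496867)

# Which vectors contain the sought value? (ismember over a cell array)
contains = [sought in w for w in ways]                                # [True, False, True]

# At which positions does it occur in each vector? (find(x == sought))
positions = [[i for i, v in enumerate(w) if v == sought] for w in ways]
# [[7], [], [4]]  -- 0-based, where MATLAB would report [8], [], [5]
```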
Question: I have an amount of items in a list or `list_1`. Each item is a row from a database.
An import of these items is going to happen on another list, or `list_2`. `list_2` already contains content of the same type.
Is there a way to make this happen so that only those items that are not present in `list_2` get imported?
Note that the original order must be retained in `list_2` and the new items are to be imported on top of that original order. The order is important because `list_2` actually is a `ConcurrentQueue`. So, the content of `list_1` should be updated somehow (clearing the duplicates) before I can start importing it into `list_2`. The actual import into `list_2` (the queue) is not a problem; it's the way to clean up `list_1` that interests me. Thanks.
Answer: | I tried the following and got the results I think you want:
```
from py2neo import neo4j, node, rel
graph = neo4j.GraphDatabaseService("http://localhost:7474/db/data/")
alice, = graph.create(node(name="Alice")) # Comma unpacks length-1 tuple.
bob, = graph.create(node(name="Bob"))
carol, = graph.create(node(name="Carol Smith"))
dev, = graph.create(node(name="Dev"))
graph.create(rel(alice, "KNOWS", bob))
graph.create(rel(dev, "KNOWS", carol))
graph.create(rel(alice, "KNOWS", carol))
```
My graph now looks like this in the browser:

Alternatively, you can create the graph in one `graph.create()` statement:
```
from py2neo import neo4j, node, rel
graph = neo4j.GraphDatabaseService("http://localhost:7474/db/data/")
graph.create(
node(name="Alice"), #0
node(name="Bob"), #1
node(name="Carol Smith"), #2
node(name="Dev"), #3
rel(0, "KNOWS", 1),
rel(3, "KNOWS", 2),
rel(0, "KNOWS", 2)
)
```
And the output is the same. Hope this helps. | The reference node was a feature included in earlier versions of Neo4j and, by extension, py2neo. It has since been deprecated and removed so I should have also removed all traces from the py2neo documentation - it appears I've missed one!
Thanks for pointing this out, I'll add myself a task to get this page up to date.
In terms of creating a relationship, Nicole's answer is spot on and should have all the information you need. |
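For reference, the filtering the question describes (drop from `list_1` anything already present in `list_2`, preserving order) can be sketched in Python, assuming the items are hashable and equality identifies duplicates (database rows might instead need a key function):

```python
def items_to_import(list_1, list_2):
    """Items of list_1 not already in list_2, in list_1's original order."""
    seen = set(list_2)                      # O(1) membership tests
    return [x for x in list_1 if x not in seen]

items_to_import([1, 2, 3, 4], [2, 4])       # [1, 3]
```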
Question: I have the following asynchronous code:
```
public async Task<List<PreprocessingTaskResult>> Preprocess(Action onPreprocessingTaskFinished)
{
var preprocessingTasks = BuildPreprocessingTasks();
var preprocessingTaskResults = new List<PreprocessingTaskResult>();
while (preprocessingTasks.Count > 0)
{
//Wait till at least one task is completed
await TaskEx.WhenAny(preprocessingTasks.ToArray());
onPreprocessingTaskFinished();
}
return preprocessingTaskResults;
}
```
And the asynchronous usage code
```
var preprocessingTaskResults = await Preprocess(someAction);
```
For some cases I need to call it in synchronous way. For simpler cases (when async method returns just a task) I used to do the following:
```
var someTask = AsyncMethod();
Task.Wait(someTask);
```
But I am confused how I should implement it here.
Answer: | A task's `Result` property will block until the task completes and then return its result:
```
List<PreprocessingTaskResult> result = Preprocess(someAction).Result;
```
<http://msdn.microsoft.com/en-us/library/dd321468(v=vs.110).aspx> | There is no easy way to call asynchronous code synchronously. Stephen Toub covers some various approaches [here](http://blogs.msdn.com/b/pfxteam/archive/2012/04/13/10293638.aspx) but there is no approach that works in all situations.
The best solution is to change the synchronous calling code to be asynchronous. With the advent of `Microsoft.Bcl.Async` and recent releases of Xamarin, asynchronous code is now supported on .NET 4.0 (and higher), Windows Phone 7.1 (and higher), Windows Store, Silverlight 4 (and higher), iOS/MonoTouch, Android/MonoDroid, and portable class libraries for any mix of those platforms.
So there's very little reason these days *not* to use `async`.
But if you *absolutely need* a synchronous API, then the best solution is to make it synchronous all the way. If you do need both asynchronous and synchronous APIs, this will cause code duplication which is unfortunate but it is the best solution at this time. |
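The same trade-off exists in other async runtimes. As a rough Python analogue (not the question's C# code): `asyncio.run()` blocks the calling thread until the coroutine completes, much like waiting on the returned task, and likewise cannot be called from inside an already-running event loop:

```python
import asyncio

async def preprocess():
    # Stand-in for the real asynchronous preprocessing work.
    await asyncio.sleep(0)
    return ["task-result"]

# Synchronous call site: blocks until preprocess() finishes.
results = asyncio.run(preprocess())
```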
Question: I'm trying to get a Google Spreadsheet online generated by a PHP script. Since it seems there is no native PHP API for creating spreadsheets (could only find Java and .NET), I figured the easiest way would be to generate an XLSX and have it converted to a Google Spreadsheet.
Uploading the file works fine:
```
$mime = 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet';
$file = new Google_DriveFile();
$file->setTitle('My Spreadsheet');
$file->setMimeType($mime);
$data = file_get_contents('my_spreadsheet.xlsx');
$createdFile = $service->files->insert($file, array(
'data' => $data,
'mimeType' => $mime,
'convert' => true,
));
```
When it shows up in Google Drive, though, it's just a regular Excel file. I have to right-click it in the Google Drive web interface and select "Open in Google Sheets" before it's converted (but the conversion doesn't have errors).
How can I force the file to be converted immediately?
Answer: | Store all of your records in a hash (byDate) first. And then enumerate over that. See this [JSFiddle](http://jsfiddle.net/PGVG3/)
```
var rows = [
{date: "01.01.2014", name: "Joe Bloggs", score: "25"},
{date: "01.01.2014", name: "Jim Jones", score: "50"},
{date: "02.01.2014", name: "Alice Smith", score: "33"},
{date: "01.01.2014", name: "Eve Harris", score: "40"},
];
var byDate = {};
$.each(rows, function() {
var r = byDate[this.date] || (byDate[this.date] = []);
r.push(this);
});
var table = $('#table-results');
for (var d in byDate) {
table.append($('<tr><td>'+d+'</td></tr>'));
$.each(byDate[d], function() {
var row = $('<tr>');
var name = $('<td>').html(this.name);
var score = $('<td>').html('<strong>' + this.score + '</strong>');
row.append(name, score);
table.append(row);
});
}
``` | You should be able to sort the table, like so:
```
<table id="mytable">
<tbody>
<tr>
<td>02.01.2014</td>
<td>hello</td>
</tr>
<tr>
<td>02.01.2014</td>
<td>hello2</td>
</tr>
<tr>
<td>01.01.2014</td>
<td>hello</td>
</tr>
</tbody>
</table>
function sortTable(){
var rows = $('#mytable tbody tr').get();
rows.sort(function(a, b) {
var A = $(a).children('td').eq(0).text().toUpperCase();
var B = $(b).children('td').eq(0).text().toUpperCase();
if(A < B) {
return -1;
}
if(A > B) {
return 1;
}
return 0;
});
$.each(rows, function(index, row) {
$('#mytable').children('tbody').append(row);
});
}
sortTable();
``` |
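The group-then-render strategy from the first answer translates directly to other languages. A minimal Python sketch (sample rows invented to mirror the answers' data) that buckets records by date before emitting them:

```python
from collections import defaultdict

rows = [  # sample data shaped like the answers' rows
    {"date": "01.01.2014", "name": "Joe Bloggs", "score": "25"},
    {"date": "02.01.2014", "name": "Alice Smith", "score": "33"},
    {"date": "01.01.2014", "name": "Eve Harris", "score": "40"},
]

by_date = defaultdict(list)
for r in rows:                        # bucket rows under their date
    by_date[r["date"]].append(r)

rendered = []
# Note: sorting dd.mm.yyyy strings lexically only happens to work for
# dates within the same month and year, as in this sample.
for d in sorted(by_date):             # one header row per date...
    rendered.append([d])
    for r in by_date[d]:              # ...then that date's data rows
        rendered.append([r["name"], r["score"]])
```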
Question: I'm trying to get a Google Spreadsheet online generated by a PHP script. Since it seems there is no native PHP API for creating spreadsheets (could only find Java and .NET), I figured the easiest way would be to generate an XLSX and have it converted to a Google Spreadsheet.
Uploading the file works fine:
```
$mime = 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet';
$file = new Google_DriveFile();
$file->setTitle('My Spreadsheet');
$file->setMimeType($mime);
$data = file_get_contents('my_spreadsheet.xlsx');
$createdFile = $service->files->insert($file, array(
'data' => $data,
'mimeType' => $mime,
'convert' => true,
));
```
When it shows up in Google Drive, though, it's just a regular Excel file. I have to right-click it in the Google Drive web interface and select "Open in Google Sheets" before it's converted (but the conversion doesn't have errors).
How can I force the file to be converted immediately?
Answer: | <http://jsfiddle.net/fwpzr/1/>
Group data first, then dump to table; does not assume sorted data.
```
// Assumption: JSON data is in "rows"
var data = {};
var dates = [];
$.each(rows, function () {
if (typeof data[this.date] == "undefined")
{
data[this.date] = [];
}
data[this.date].push(this);
if (dates.indexOf(this.date) == -1)
{
dates.push(this.date);
}
});
dates = dates.sort();
var table = $('#table-results');
$.each(dates, function () {
table.append(
$("<tr>").append(
$("<th>").attr("colspan", "2")
.html(this)
)
);
data[this] = data[this].sort(function (a, b) {
return a.name.localeCompare(b.name);
});
$.each(data[this], function () {
table.append(
$("<tr>").append(
$("<td>").html(this.name)
).append(
$("<th>").html(this.score)
)
);
});
});
``` | You should be able to sort the table, like so:
```
<table id="mytable">
<tbody>
<tr>
<td>02.01.2014</td>
<td>hello</td>
</tr>
<tr>
<td>02.01.2014</td>
<td>hello2</td>
</tr>
<tr>
<td>01.01.2014</td>
<td>hello</td>
</tr>
</tbody>
</table>
function sortTable(){
var rows = $('#mytable tbody tr').get();
rows.sort(function(a, b) {
var A = $(a).children('td').eq(0).text().toUpperCase();
var B = $(b).children('td').eq(0).text().toUpperCase();
if(A < B) {
return -1;
}
if(A > B) {
return 1;
}
return 0;
});
$.each(rows, function(index, row) {
$('#mytable').children('tbody').append(row);
});
}
sortTable();
``` |
Question: I'm trying to get a Google Spreadsheet online generated by a PHP script. Since it seems there is no native PHP API for creating spreadsheets (could only find Java and .NET), I figured the easiest way would be to generate an XLSX and have it converted to a Google Spreadsheet.
Uploading the file works fine:
```
$mime = 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet';
$file = new Google_DriveFile();
$file->setTitle('My Spreadsheet');
$file->setMimeType($mime);
$data = file_get_contents('my_spreadsheet.xlsx');
$createdFile = $service->files->insert($file, array(
'data' => $data,
'mimeType' => $mime,
'convert' => true,
));
```
When it shows up in Google Drive, though, it's just a regular Excel file. I have to right-click it in the Google Drive web interface and select "Open in Google Sheets" before it's converted (but the conversion doesn't have errors).
How can I force the file to be converted immediately?
Answer: | ```
$.each(rows, function() {
var table = $('#table-results');
var matchDate = this.date;
var headerRow = $('#header_for_' + matchDate.replace(/\./g, ''));
if(headerRow.length === 0) {
headerRow = $('<tr>');
$(headerRow).attr('id', 'header_for_' + matchDate.replace(/\./g, ''));
headerRow.append(matchDate);
table.append(headerRow);
}
var dataRow = $('<tr>');
$(dataRow).attr('data-date', matchDate);
var name = $('<td>').html(this.name);
var score = $('<td>').html('<strong>' + this.score + '</strong>');
dataRow.append(name, score);
var lastDataRowThisDate = $('tr[data-date="' + matchDate + '"]').last();
if(lastDataRowThisDate.length === 0) {
lastDataRowThisDate = headerRow;
}
dataRow.insertAfter(lastDataRowThisDate);
});
``` | You should be able to sort the table, like so:
```
<table id="mytable">
<tbody>
<tr>
<td>02.01.2014</td>
<td>hello</td>
</tr>
<tr>
<td>02.01.2014</td>
<td>hello2</td>
</tr>
<tr>
<td>01.01.2014</td>
<td>hello</td>
</tr>
</tbody>
</table>
function sortTable(){
var rows = $('#mytable tbody tr').get();
rows.sort(function(a, b) {
var A = $(a).children('td').eq(0).text().toUpperCase();
var B = $(b).children('td').eq(0).text().toUpperCase();
if(A < B) {
return -1;
}
if(A > B) {
return 1;
}
return 0;
});
$.each(rows, function(index, row) {
$('#mytable').children('tbody').append(row);
});
}
sortTable();
``` |
Question: I'm trying to get a Google Spreadsheet online generated by a PHP script. Since it seems there is no native PHP API for creating spreadsheets (could only find Java and .NET), I figured the easiest way would be to generate an XLSX and have it converted to a Google Spreadsheet.
Uploading the file works fine:
```
$mime = 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet';
$file = new Google_DriveFile();
$file->setTitle('My Spreadsheet');
$file->setMimeType($mime);
$data = file_get_contents('my_spreadsheet.xlsx');
$createdFile = $service->files->insert($file, array(
'data' => $data,
'mimeType' => $mime,
'convert' => true,
));
```
When it shows up in Google Drive, though, it's just a regular Excel file. I have to right-click it in the Google Drive web interface and select "Open in Google Sheets" before it's converted (but the conversion doesn't have errors).
How can I force the file to be converted immediately?
Answer: | Store all of your records in a hash (byDate) first. And then enumerate over that. See this [JSFiddle](http://jsfiddle.net/PGVG3/)
```
var rows = [
{date: "01.01.2014", name: "Joe Bloggs", score: "25"},
{date: "01.01.2014", name: "Jim Jones", score: "50"},
{date: "02.01.2014", name: "Alice Smith", score: "33"},
{date: "01.01.2014", name: "Eve Harris", score: "40"},
];
var byDate = {};
$.each(rows, function() {
var r = byDate[this.date] || (byDate[this.date] = []);
r.push(this);
});
var table = $('#table-results');
for (var d in byDate) {
table.append($('<tr><td>'+d+'</td></tr>'));
$.each(byDate[d], function() {
var row = $('<tr>');
var name = $('<td>').html(this.name);
var score = $('<td>').html('<strong>' + this.score + '</strong>');
row.append(name, score);
table.append(row);
});
}
``` | Even if the dates are unordered, you can create a new empty object, loop over the original JSON, and at each iteration add the item to the new object, indexed by date.
The final object would be like:
```
{
'01.01.2014': [
{'Joe Bloggs',25},
{'Jim Jones',50},
{'Eve Harris',40}
]
/* and so on */
}
```
Then you can format this new JSON object easily. |
Question: I'm trying to get a Google Spreadsheet online generated by a PHP script. Since it seems there is no native PHP API for creating spreadsheets (could only find Java and .NET), I figured the easiest way would be to generate an XLSX and have it converted to a Google Spreadsheet.
Uploading the file works fine:
```
$mime = 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet';
$file = new Google_DriveFile();
$file->setTitle('My Spreadsheet');
$file->setMimeType($mime);
$data = file_get_contents('my_spreadsheet.xlsx');
$createdFile = $service->files->insert($file, array(
'data' => $data,
'mimeType' => $mime,
'convert' => true,
));
```
When it shows up in Google Drive, though, it's just a regular Excel file. I have to right-click it in the Google Drive web interface and select "Open in Google Sheets" before it's converted (but the conversion doesn't have errors).
How can I force the file to be converted immediately?
Answer: | <http://jsfiddle.net/fwpzr/1/>
Group data first, then dump to table; does not assume sorted data.
```
// Assumption: JSON data is in "rows"
var data = {};
var dates = [];
$.each(rows, function () {
if (typeof data[this.date] == "undefined")
{
data[this.date] = [];
}
data[this.date].push(this);
if (dates.indexOf(this.date) == -1)
{
dates.push(this.date);
}
});
dates = dates.sort();
var table = $('#table-results');
$.each(dates, function () {
table.append(
$("<tr>").append(
$("<th>").attr("colspan", "2")
.html(this)
)
);
data[this] = data[this].sort(function (a, b) {
return a.name.localeCompare(b.name);
});
$.each(data[this], function () {
table.append(
$("<tr>").append(
$("<td>").html(this.name)
).append(
$("<th>").html(this.score)
)
);
});
});
``` | Even if the dates are unordered, you can create a new empty object, loop over the original JSON, and at each iteration add the item to the new object, indexed by date.
The final object would be like:
```
{
'01.01.2014': [
{'Joe Bloggs',25},
{'Jim Jones',50},
{'Eve Harris',40}
]
/* and so on */
}
```
Then you can format this new JSON object easily. |
Question: I'm trying to get a Google Spreadsheet online generated by a PHP script. Since it seems there is no native PHP API for creating spreadsheets (could only find Java and .NET), I figured the easiest way would be to generate an XLSX and have it converted to a Google Spreadsheet.
Uploading the file works fine:
```
$mime = 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet';
$file = new Google_DriveFile();
$file->setTitle('My Spreadsheet');
$file->setMimeType($mime);
$data = file_get_contents('my_spreadsheet.xlsx');
$createdFile = $service->files->insert($file, array(
'data' => $data,
'mimeType' => $mime,
'convert' => true,
));
```
When it shows up in Google Drive, though, it's just a regular Excel file. I have to right-click it in the Google Drive web interface and select "Open in Google Sheets" before it's converted (but the conversion doesn't have errors).
How can I force the file to be converted immediately?
Answer: | ```
$.each(rows, function() {
var table = $('#table-results');
var matchDate = this.date;
var headerRow = $('#header_for_' + matchDate.replace(/\./g, ''));
if(headerRow.length === 0) {
headerRow = $('<tr>');
$(headerRow).attr('id', 'header_for_' + matchDate.replace(/\./g, ''));
headerRow.append(matchDate);
table.append(headerRow);
}
var dataRow = $('<tr>');
$(dataRow).attr('data-date', matchDate);
var name = $('<td>').html(this.name);
var score = $('<td>').html('<strong>' + this.score + '</strong>');
dataRow.append(name, score);
var lastDataRowThisDate = $('tr[data-date="' + matchDate + '"]').last();
if(lastDataRowThisDate.length === 0) {
lastDataRowThisDate = headerRow;
}
dataRow.insertAfter(lastDataRowThisDate);
});
``` | Even if the dates are unordered, you can create a new empty object, loop over the original JSON, and at each iteration add the item to the new object, indexed by date.
The final object would be like:
```
{
'01.01.2014': [
{'Joe Bloggs',25},
{'Jim Jones',50},
{'Eve Harris',40}
]
/* and so on */
}
```
Then you can format this new JSON object easily. |
Question: I'm trying to get a Google Spreadsheet online generated by a PHP script. Since it seems there is no native PHP API for creating spreadsheets (could only find Java and .NET), I figured the easiest way would be to generate an XLSX and have it converted to a Google Spreadsheet.
Uploading the file works fine:
```
$mime = 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet';
$file = new Google_DriveFile();
$file->setTitle('My Spreadsheet');
$file->setMimeType($mime);
$data = file_get_contents('my_spreadsheet.xlsx');
$createdFile = $service->files->insert($file, array(
'data' => $data,
'mimeType' => $mime,
'convert' => true,
));
```
When it shows up in Google Drive, though, it's just a regular Excel file. I have to right-click it in the Google Drive web interface and select "Open in Google Sheets" before it's converted (but the conversion doesn't have errors).
How can I force the file to be converted immediately?
Answer: | <http://jsfiddle.net/fwpzr/1/>
Group data first, then dump to table; does not assume sorted data.
```
// Assumption: JSON data is in "rows"
var data = {};
var dates = [];
$.each(rows, function () {
if (typeof data[this.date] == "undefined")
{
data[this.date] = [];
}
data[this.date].push(this);
if (dates.indexOf(this.date) == -1)
{
dates.push(this.date);
}
});
dates = dates.sort();
var table = $('#table-results');
$.each(dates, function () {
table.append(
$("<tr>").append(
$("<th>").attr("colspan", "2")
.html(this)
)
);
data[this] = data[this].sort(function (a, b) {
return a.name.localeCompare(b.name);
});
$.each(data[this], function () {
table.append(
$("<tr>").append(
$("<td>").html(this.name)
).append(
$("<th>").html(this.score)
)
);
});
});
``` | Store all of your records in a hash (byDate) first. And then enumerate over that. See this [JSFiddle](http://jsfiddle.net/PGVG3/)
```
var rows = [
{date: "01.01.2014", name: "Joe Bloggs", score: "25"},
{date: "01.01.2014", name: "Jim Jones", score: "50"},
{date: "02.01.2014", name: "Alice Smith", score: "33"},
{date: "01.01.2014", name: "Eve Harris", score: "40"},
];
var byDate = {};
$.each(rows, function() {
var r = byDate[this.date] || (byDate[this.date] = []);
r.push(this);
});
var table = $('#table-results');
for (var d in byDate) {
table.append($('<tr><td>'+d+'</td></tr>'));
$.each(byDate[d], function() {
var row = $('<tr>');
var name = $('<td>').html(this.name);
var score = $('<td>').html('<strong>' + this.score + '</strong>');
row.append(name, score);
table.append(row);
});
}
``` |
Question: I'm trying to get a Google Spreadsheet online generated by a PHP script. Since it seems there is no native PHP API for creating spreadsheets (could only find Java and .NET), I figured the easiest way would be to generate an XLSX and have it converted to a Google Spreadsheet.
Uploading the file works fine:
```
$mime = 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet';
$file = new Google_DriveFile();
$file->setTitle('My Spreadsheet');
$file->setMimeType($mime);
$data = file_get_contents('my_spreadsheet.xlsx');
$createdFile = $service->files->insert($file, array(
'data' => $data,
'mimeType' => $mime,
'convert' => true,
));
```
When it shows up in Google Drive, though, it's just a regular Excel file. I have to right-click it in the Google Drive web interface and select "Open in Google Sheets" before it's converted (but the conversion doesn't have errors).
How can I force the file to be converted immediately?
Answer: | <http://jsfiddle.net/fwpzr/1/>
Group data first, then dump to table; does not assume sorted data.
```
// Assumption: JSON data is in "rows"
var data = {};
var dates = [];
$.each(rows, function () {
if (typeof data[this.date] == "undefined")
{
data[this.date] = [];
}
data[this.date].push(this);
if (dates.indexOf(this.date) == -1)
{
dates.push(this.date);
}
});
dates = dates.sort();
var table = $('#table-results');
$.each(dates, function () {
table.append(
$("<tr>").append(
$("<th>").attr("colspan", "2")
.html(this)
)
);
data[this] = data[this].sort(function (a, b) {
return a.name.localeCompare(b.name);
});
$.each(data[this], function () {
table.append(
$("<tr>").append(
$("<td>").html(this.name)
).append(
$("<th>").html(this.score)
)
);
});
});
``` | ```
$.each(rows, function() {
var table = $('#table-results');
var matchDate = this.date;
var headerRow = $('#header_for_' + matchDate.replace(/\./g, ''));
if(headerRow.length === 0) {
headerRow = $('<tr>');
$(headerRow).attr('id', 'header_for_' + matchDate.replace(/\./g, ''));
headerRow.append(matchDate);
table.append(headerRow);
}
var dataRow = $('<tr>');
$(dataRow).attr('data-date', matchDate);
var name = $('<td>').html(this.name);
var score = $('<td>').html('<strong>' + this.score + '</strong>');
dataRow.append(name, score);
var lastDataRowThisDate = $('tr[data-date="' + matchDate + '"]').last();
if(lastDataRowThisDate.length === 0) {
lastDataRowThisDate = headerRow;
}
dataRow.insertAfter(lastDataRowThisDate);
});
``` |
Question: The program here is simply supposed to tell you the shipping charge based on the weight of the package. I am just wondering if there is any way to make it more concise? At the moment, I do not think it can go any further than that. The only place where I'm unsure whether I could improve it would be the Booleans. Possibly use a `while` statement?
```
def main():
weight = eval(input("Please enter the weight of the package: "))
rate = 0
if weight <= 2:
rate += 1.25
elif weight > 2 and weight <= 5:
rate += 2.35
elif weight > 5 and weight <= 10:
rate += 4.05
elif weight > 10:
rate += 5.55
charge = rate * weight
print("The shipping charge is: $%s" % charge)
print(main())
```
Answer: | As others have commented on the logic, I'll comment on the rest :)
```
weight = eval(input("Please enter the weight of the package: "))
```
Why have you used `eval()`? It is used for running raw code from the input which you don't really need.
Instead, do:
```
int(input("Please enter the weight of the package: "))
```
Here, you convert the input to an integer, at the same time. If you want to use floats, simply do:
```
float(input("Please enter the weight of the package: "))
```
---
As far as I can see, there's no need to add to the rate because you're only calling the function once. So rate will **always** be 0 at the beginning.
So instead, remove the `rate = 0` and change the `rate += 1.25`, etc... to `rate = 1.25`, etc...
---
There's also no need to `print()` `main()` because you have a print statement already in it.
Instead, you could **return** the string that you are currently printing, or just remove the surrounding `print()` from `print(main())`
---
With those changes in mind:
```
def main():
weight = int(input("Please enter the weight of the package: "))
if weight <= 2:
rate = 1.25
elif weight > 2 and weight <= 5:
rate = 2.35
elif weight > 5 and weight <= 10:
rate = 4.05
elif weight > 10:
rate = 5.55
charge = rate * weight
return "The shipping charge is: $%s" % charge
print(main())
``` | Since you have covered all possible cases, the last
```
elif weight > 10:
rate += 5.55
```
can just be:
```
else:
rate += 5.55
```
Also, in python, you can do:
```
elif 2 < weight <= 5:
```
Finally, your main doesn't return anything, so instead of
```
print(main())
```
you can just do:
```
main()
``` |
Question: The program here is simply supposed to tell you the shipping charge based on the weight of the package. I am just wondering if there is any way to make it more concise? At the moment, I do not think it can go any further than that. The only place where I'm unsure whether I could improve it would be the Booleans. Possibly use a `while` statement?
```
def main():
weight = eval(input("Please enter the weight of the package: "))
rate = 0
if weight <= 2:
rate += 1.25
elif weight > 2 and weight <= 5:
rate += 2.35
elif weight > 5 and weight <= 10:
rate += 4.05
elif weight > 10:
rate += 5.55
charge = rate * weight
print("The shipping charge is: $%s" % charge)
print(main())
```
Answer: | You are checking much more than you need to. Consider the first two cases:
```
if weight <= 2:
# A
elif weight > 2 and weight <= 5:
# B
```
If `weight <= 2`, we fall into case A. If we're not in case A, only then do we continue onto the next condition checks (because we're using `elif` and not `if`). As such, if we're even in the path that checks the condition which leads to `B`, we already know that `weight > 2`. Thus, that check is redundant and can be reduced to simply `weight <= 5`.
The whole body can become:
```
if weight <= 2:
rate += 1.25
elif weight <= 5:
rate += 2.35
elif weight <= 10:
rate += 4.05
else:
rate += 5.55
```
Which isn't just less code, it's easier to reason about since all your bounds are visually in the same column. | By extracting the computation of the rate to a different function, you can simplify the code by returning early when you know the rate:
```
def compute_rate(weight):
if weight <= 2:
return 1.25
if weight <= 5:
return 2.35
if weight <= 10:
return 4.05
return 5.55
```
Used like this in main:
```
charge = compute_rate(weight) * weight
``` |
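Putting the two suggestions above together (early returns plus collapsed bounds), a runnable sketch of the whole computation:

```python
def compute_rate(weight):
    """Per-unit rate; each early return already implies weight exceeded the previous bound."""
    if weight <= 2:
        return 1.25
    if weight <= 5:
        return 2.35
    if weight <= 10:
        return 4.05
    return 5.55

def shipping_charge(weight):
    return compute_rate(weight) * weight

shipping_charge(4)   # 2.35 * 4 = 9.4
```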
Question: The program here is simply supposed to tell you the shipping charge based on the weight of the package. I am just wondering if there is any way to make it more concise? At the moment, I do not think it can go any further than that. The only place where I'm unsure whether I could improve it would be the Booleans. Possibly use a `while` statement?
```
def main():
weight = eval(input("Please enter the weight of the package: "))
rate = 0
if weight <= 2:
rate += 1.25
elif weight > 2 and weight <= 5:
rate += 2.35
elif weight > 5 and weight <= 10:
rate += 4.05
elif weight > 10:
rate += 5.55
charge = rate * weight
print("The shipping charge is: $%s" % charge)
print(main())
```
Answer: | You are checking much more than you need to. Consider the first two cases:
```
if weight <= 2:
# A
elif weight > 2 and weight <= 5:
# B
```
If `weight <= 2`, we fall into case A. If we're not in case A, only then do we continue onto the next condition checks (because we're using `elif` and not `if`). As such, if we're even in the path that checks the condition which leads to `B`, we already know that `weight > 2`. Thus, that check is redundant and can be reduced to simply `weight <= 5`.
The whole body can become:
```
if weight <= 2:
rate += 1.25
elif weight <= 5:
rate += 2.35
elif weight <= 10:
rate += 4.05
else:
rate += 5.55
```
Which isn't just less code, it's easier to reason about since all your bounds are visually in the same column. | Since you have covered all possible cases, the last
```
elif weight > 10:
rate += 5.55
```
can just be:
```
else:
rate += 5.55
```
Also, in Python, you can do:
```
elif 2 < weight <= 5:
```
Finally, your main doesn't return anything, so instead of
```
print(main())
```
you can just do:
```
main()
``` |
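The chained comparison suggested above is not just shorter: Python evaluates `2 < weight <= 5` as a single range test, equivalent to the explicit `and` form. A tiny check:

```python
# Chained comparison: equivalent to (2 < weight) and (weight <= 5),
# with `weight` evaluated only once.
for weight in (1, 2, 3, 5, 6, 10.5):
    assert (2 < weight <= 5) == (weight > 2 and weight <= 5)
```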
Question: The program here is simply supposed to tell you the shipping charge based off of the weight of the package. I am just wondering if there is any further way to make it more concise? At the moment, I do not think it can go any further than that. The only place where I'm unsure whether I could improve it could be Booleans. Possibly use a `while` statement?
```
def main():
weight = eval(input("Please enter the weight of the package: "))
rate = 0
if weight <= 2:
rate += 1.25
elif weight > 2 and weight <= 5:
rate += 2.35
elif weight > 5 and weight <= 10:
rate += 4.05
elif weight > 10:
rate += 5.55
charge = rate * weight
print("The shipping charge is: $%s" % charge)
print(main())
```
Answer: | As others have commented on the logic, I'll comment on the rest :)
```
weight = eval(input("Please enter the weight of the package: "))
```
Why have you used `eval()`? It is used for running raw code from the input which you don't really need.
Instead, do:
```
int(input("Please enter the weight of the package: "))
```
Here, you convert the input to an integer, at the same time. If you want to use floats, simply do:
```
float(input("Please enter the weight of the package: "))
```
---
As far as I can see, there's no need to add to the rate because you're only calling the function once. So rate will **always** be 0 at the beginning.
So instead, remove the `rate = 0` and change the `rate += 1.25`, etc... to `rate = 1.25`, etc...
---
There's also no need to `print()` `main()` because you have a print statement already in it.
Instead, you could **return** the string that you are currently printing, or just remove the surrounding `print()` from `print(main())`
---
With those changes in mind:
```
def main():
weight = int(input("Please enter the weight of the package: "))
if weight <= 2:
rate = 1.25
elif weight > 2 and weight <= 5:
rate = 2.35
elif weight > 5 and weight <= 10:
rate = 4.05
elif weight > 10:
rate = 5.55
charge = rate * weight
return "The shipping charge is: $%s" % charge
print(main())
``` | By extracting the computation of the rate to a different function, you can simplify the code by returning early when you know the rate:
```
def compute_rate(weight):
if weight <= 2:
return 1.25
if weight <= 5:
return 2.35
if weight <= 10:
return 4.05
return 5.55
```
Used like this in main:
```
charge = compute_rate(weight) * weight
``` |
Question: The program here is simply supposed to tell you the shipping charge based off of the weight of the package. I am just wondering if there is any further way to make it more concise? At the moment, I do not think it can go any further than that. The only place where I'm unsure whether I could improve it could be Booleans. Possibly use a `while` statement?
```
def main():
weight = eval(input("Please enter the weight of the package: "))
rate = 0
if weight <= 2:
rate += 1.25
elif weight > 2 and weight <= 5:
rate += 2.35
elif weight > 5 and weight <= 10:
rate += 4.05
elif weight > 10:
rate += 5.55
charge = rate * weight
print("The shipping charge is: $%s" % charge)
print(main())
```
Answer: | You are checking much more than you need to. Consider the first two cases:
```
if weight <= 2:
# A
elif weight > 2 and weight <= 5:
# B
```
If `weight <= 2`, we fall into case A. If we're not in case A, only then do we continue onto the next condition checks (because we're using `elif` and not `if`). As such, if we're even in the path that checks the condition which leads to `B`, we already know that `weight > 2`. Thus, that check is redundant and can be reduced to simply `weight <= 5`.
The whole body can become:
```
if weight <= 2:
rate += 1.25
elif weight <= 5:
rate += 2.35
elif weight <= 10:
rate += 4.05
else:
rate += 5.55
```
Which isn't just less code, it's easier to reason about since all your bounds are visually in the same column. | As others have commented on the logic, I'll comment on the rest :)
```
weight = eval(input("Please enter the weight of the package: "))
```
Why have you used `eval()`? It is used for running raw code from the input which you don't really need.
Instead, do:
```
int(input("Please enter the weight of the package: "))
```
Here, you convert the input to an integer, at the same time. If you want to use floats, simply do:
```
float(input("Please enter the weight of the package: "))
```
---
As far as I can see, there's no need to add to the rate because you're only calling the function once. So rate will **always** be 0 at the beginning.
So instead, remove the `rate = 0` and change the `rate += 1.25`, etc... to `rate = 1.25`, etc...
---
There's also no need to `print()` `main()` because you have a print statement already in it.
Instead, you could **return** the string that you are currently printing, or just remove the surrounding `print()` from `print(main())`
---
With those changes in mind:
```
def main():
weight = int(input("Please enter the weight of the package: "))
if weight <= 2:
rate = 1.25
elif weight > 2 and weight <= 5:
rate = 2.35
elif weight > 5 and weight <= 10:
rate = 4.05
elif weight > 10:
rate = 5.55
charge = rate * weight
return "The shipping charge is: $%s" % charge
print(main())
``` |
Question: The program here is simply supposed to tell you the shipping charge based off of the weight of the package. I am just wondering if there is any further way to make it more concise? At the moment, I do not think it can go any further than that. The only place where I'm unsure whether I could improve it could be Booleans. Possibly use a `while` statement?
```
def main():
weight = eval(input("Please enter the weight of the package: "))
rate = 0
if weight <= 2:
rate += 1.25
elif weight > 2 and weight <= 5:
rate += 2.35
elif weight > 5 and weight <= 10:
rate += 4.05
elif weight > 10:
rate += 5.55
charge = rate * weight
print("The shipping charge is: $%s" % charge)
print(main())
```
Answer: | You are checking much more than you need to. Consider the first two cases:
```
if weight <= 2:
# A
elif weight > 2 and weight <= 5:
# B
```
If `weight <= 2`, we fall into case A. If we're not in case A, only then do we continue onto the next condition checks (because we're using `elif` and not `if`). As such, if we're even in the path that checks the condition which leads to `B`, we already know that `weight > 2`. Thus, that check is redundant and can be reduced to simply `weight <= 5`.
The whole body can become:
```
if weight <= 2:
rate += 1.25
elif weight <= 5:
rate += 2.35
elif weight <= 10:
rate += 4.05
else:
rate += 5.55
```
Which isn't just less code, it's easier to reason about since all your bounds are visually in the same column. | I like to separate logic from data, to make both more readable. Here, the key rate change points are in a separate dictionary. We can find the correct rate by checking from the end of the list. Shipping is calculated in the function, or we could return just the rate.
```
RATES = {
0: 1.25,
2: 2.35,
5: 4.05,
10: 5.55,
}
def calc_shipping(weight):
# Make reversed key weight list
key_weights = sorted(RATES.keys(), reverse=True)
for more_than in key_weights:
if weight > more_than:
return RATES[more_than] * weight
else:
return 0.0
```
To test for the correctness of the solution compared to the original, I used this py.test parameterized function:
```
@pytest.mark.parametrize("weight, expected_shipping", [
(0, 0),
(1.5, 1.875),
(2, 2.5),
(2.0001, 4.700235),
(4, 9.4),
(5, 11.75),
(5.0001, 20.250405),
(8, 32.4),
(10, 40.5),
(15, 83.25),
])
def test_calc_shipping(weight, expected_shipping):
# Check for approximate equality
assert abs(calc_shipping(weight) - expected_shipping) < 0.0000001
``` |
Question: The program here is simply supposed to tell you the shipping charge based off of the weight of the package. I am just wondering if there is any further way to make it more concise? At the moment, I do not think it can go any further than that. The only place where I'm unsure whether I could improve it could be Booleans. Possibly use a `while` statement?
```
def main():
weight = eval(input("Please enter the weight of the package: "))
rate = 0
if weight <= 2:
rate += 1.25
elif weight > 2 and weight <= 5:
rate += 2.35
elif weight > 5 and weight <= 10:
rate += 4.05
elif weight > 10:
rate += 5.55
charge = rate * weight
print("The shipping charge is: $%s" % charge)
print(main())
```
Answer: | As others have commented on the logic, I'll comment on the rest :)
```
weight = eval(input("Please enter the weight of the package: "))
```
Why have you used `eval()`? It is used for running raw code from the input which you don't really need.
Instead, do:
```
int(input("Please enter the weight of the package: "))
```
Here, you convert the input to an integer, at the same time. If you want to use floats, simply do:
```
float(input("Please enter the weight of the package: "))
```
---
As far as I can see, there's no need to add to the rate because you're only calling the function once. So rate will **always** be 0 at the beginning.
So instead, remove the `rate = 0` and change the `rate += 1.25`, etc... to `rate = 1.25`, etc...
---
There's also no need to `print()` `main()` because you have a print statement already in it.
Instead, you could **return** the string that you are currently printing, or just remove the surrounding `print()` from `print(main())`
---
With those changes in mind:
```
def main():
weight = int(input("Please enter the weight of the package: "))
if weight <= 2:
rate = 1.25
elif weight > 2 and weight <= 5:
rate = 2.35
elif weight > 5 and weight <= 10:
rate = 4.05
elif weight > 10:
rate = 5.55
charge = rate * weight
return "The shipping charge is: $%s" % charge
print(main())
``` | I like to separate logic from data, to make both more readable. Here, the key rate change points are in a separate dictionary. We can find the correct rate by checking from the end of the list. Shipping is calculated in the function, or we could return just the rate.
```
RATES = {
0: 1.25,
2: 2.35,
5: 4.05,
10: 5.55,
}
def calc_shipping(weight):
# Make reversed key weight list
key_weights = sorted(RATES.keys(), reverse=True)
for more_than in key_weights:
if weight > more_than:
return RATES[more_than] * weight
else:
return 0.0
```
To test for the correctness of the solution compared to the original, I used this py.test parameterized function:
```
@pytest.mark.parametrize("weight, expected_shipping", [
(0, 0),
(1.5, 1.875),
(2, 2.5),
(2.0001, 4.700235),
(4, 9.4),
(5, 11.75),
(5.0001, 20.250405),
(8, 32.4),
(10, 40.5),
(15, 83.25),
])
def test_calc_shipping(weight, expected_shipping):
# Check for approximate equality
assert abs(calc_shipping(weight) - expected_shipping) < 0.0000001
``` |
Question: There is an Arraylist which has size 14 and I have to divide it into 6 sub-arraylists.
```
List<Integer> list;
list = new ArrayList<Integer>(Arrays.asList(1,2,3,4,5,6,7,8,9,10,11,12,13,14));
```
For example like below :
14/6 = 2.333
Now, if I take the ceiling value, 3, only 5 sub-arraylists will be created, and if I take the floor value, i.e. 2, then 7 sub-arraylists will be created, **but I need 6**.
How can I achieve this?
Answer: | Something like this, might have some issues, but can be tweaked
```
public static void main(String[] args) {
List list = new ArrayList(Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
11, 12, 13, 14));
int number = 6;
int factor = list.size() / number;
int fromIndex = 0;
int toIndex = factor;
for (int i = 0; i < number; i++) {
if (i < number - 1) {
System.out.println(list.subList(fromIndex, toIndex));
fromIndex = toIndex;
toIndex = fromIndex + factor;
} else {
System.out.println(list.subList(fromIndex, list.size()));
}
}
}
```
Input - `1, 2, 3, 4, 5, 6, 7, 8, 9, 10,11, 12, 13, 14`
Output
```
[1, 2]
[3, 4]
[5, 6]
[7, 8]
[9, 10]
[11, 12, 13, 14]
```
Input - `1, 2, 3, 4, 5, 6, 7, 8, 9, 10,11, 12, 13`
Output
```
[1, 2]
[3, 4]
[5, 6]
[7, 8]
[9, 10]
[11, 12, 13]
``` | I would suggest using a "divide and conquer" method: use a recursive call to split the arrays in half until you get the number of arrays you need. Of course, if you want an odd number of arrays, that would mean that one of them is double the size of the rest.
Another solution would be to create 6 empty lists and just add an item to each while removing from the original until the original list is exhausted. |
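The second suggestion, dealing items into 6 empty lists one at a time, is a round-robin split. Sketched here in Python for brevity (the names are mine), it yields sub-lists whose sizes differ by at most one:

```python
def round_robin_split(items, n):
    # Deal the items into n buckets like dealing cards:
    # bucket i receives every n-th item starting at offset i.
    buckets = [[] for _ in range(n)]
    for i, item in enumerate(items):
        buckets[i % n].append(item)
    return buckets

round_robin_split(list(range(1, 15)), 6)
# → [[1, 7, 13], [2, 8, 14], [3, 9], [4, 10], [5, 11], [6, 12]]
```

Note that, unlike `subList`-based approaches, this does not keep the original element order within contiguous runs.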
Question: There is an Arraylist which has size 14 and I have to divide it into 6 sub-arraylists.
```
List<Integer> list;
list = new ArrayList<Integer>(Arrays.asList(1,2,3,4,5,6,7,8,9,10,11,12,13,14));
```
For example like below :
14/6 = 2.333
Now when I'll take ceil value 3 then only 5 sub-arraylist will be created and when I'll take floor value i.e. 2 then 7 sub-arraylist will be created **but I need 6**.
How to achieve this ?
Answer: | Something like this, might have some issues, but can be tweaked
```
public static void main(String[] args) {
List list = new ArrayList(Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
11, 12, 13, 14));
int number = 6;
int factor = list.size() / number;
int fromIndex = 0;
int toIndex = factor;
for (int i = 0; i < number; i++) {
if (i < number - 1) {
System.out.println(list.subList(fromIndex, toIndex));
fromIndex = toIndex;
toIndex = fromIndex + factor;
} else {
System.out.println(list.subList(fromIndex, list.size()));
}
}
}
```
Input - `1, 2, 3, 4, 5, 6, 7, 8, 9, 10,11, 12, 13, 14`
Output
```
[1, 2]
[3, 4]
[5, 6]
[7, 8]
[9, 10]
[11, 12, 13, 14]
```
Input - `1, 2, 3, 4, 5, 6, 7, 8, 9, 10,11, 12, 13`
Output
```
[1, 2]
[3, 4]
[5, 6]
[7, 8]
[9, 10]
[11, 12, 13]
``` | You can do something like the code below:
```
public static void main(String[] args) {
    List<Integer> arrayList = new ArrayList<>(Arrays.asList(1,2,3,4,5,6,7,8,9,10,11,12,13,14));
    int noofsublist = 6; // no of subparts
    int chopsize = arrayList.size() / noofsublist; // integer division already floors
    for (int start = 0; start < arrayList.size(); start += chopsize) {
        int end = Math.min(start + chopsize, arrayList.size());
        List<Integer> sublist = arrayList.subList(start, end);
System.out.println(sublist);
}
}
``` |
Question: There is an Arraylist which has size 14 and I have to divide it into 6 sub-arraylists.
```
List<Integer> list;
list = new ArrayList<Integer>(Arrays.asList(1,2,3,4,5,6,7,8,9,10,11,12,13,14));
```
For example like below :
14/6 = 2.333
Now, if I take the ceiling value, 3, only 5 sub-arraylists will be created, and if I take the floor value, i.e. 2, then 7 sub-arraylists will be created, **but I need 6**.
How can I achieve this?
Answer: | Something like this, might have some issues, but can be tweaked
```
public static void main(String[] args) {
List list = new ArrayList(Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
11, 12, 13, 14));
int number = 6;
int factor = list.size() / number;
int fromIndex = 0;
int toIndex = factor;
for (int i = 0; i < number; i++) {
if (i < number - 1) {
System.out.println(list.subList(fromIndex, toIndex));
fromIndex = toIndex;
toIndex = fromIndex + factor;
} else {
System.out.println(list.subList(fromIndex, list.size()));
}
}
}
```
Input - `1, 2, 3, 4, 5, 6, 7, 8, 9, 10,11, 12, 13, 14`
Output
```
[1, 2]
[3, 4]
[5, 6]
[7, 8]
[9, 10]
[11, 12, 13, 14]
```
Input - `1, 2, 3, 4, 5, 6, 7, 8, 9, 10,11, 12, 13`
Output
```
[1, 2]
[3, 4]
[5, 6]
[7, 8]
[9, 10]
[11, 12, 13]
``` | I guess this is a homework problem, but if it was a real-life scenario, I'd use Guava's [`Lists.partition()`](http://docs.guava-libraries.googlecode.com/git/javadoc/com/google/common/collect/Lists.html#partition(java.util.List,%20int)) method:
```
List<List<Integer>> partitions = Lists.partition(yourList, n);
```
Where `n` is the maximum size of your sub list. A naive formula for n is `yourList.size() / desiredNumberOfSubLists() + 1`
In your case it would be `14 / 6 + 1 = 3`, but unfortunately that creates 5 sub lists, not 6.
So here's a custom method that splits into n partitions, distributing the elements to the individual lists as fairly as possible:
```
public static <T> List<List<T>> partitionIntoNLists(final List<T> list, final int n) {
final List<List<T>> listOfLists = new ArrayList<>(n);
final int[] partitionSizes = new int[n];
int offset = 0;
// round robin to distribute partition sizes
for (int i = 0; i < list.size(); i++) {
partitionSizes[offset++]++;
if (offset == n) {
offset = 0;
}
}
offset = 0;
for (final int partitionSize : partitionSizes) {
listOfLists.add(list.subList(offset, offset + partitionSize));
offset += partitionSize;
}
return listOfLists;
}
```
Test code:
```
public static void main(final String[] args) {
final List<Integer> list = Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14);
for (List<Integer> integers : partitionIntoNLists(list, 6)) {
System.out.println(integers);
}
}
```
Output:
```
[1, 2, 3]
[4, 5, 6]
[7, 8]
[9, 10]
[11, 12]
[13, 14]
``` |
Question: There is an Arraylist which has size 14 and I have to divide it into 6 sub-arraylists.
```
List<Integer> list;
list = new ArrayList<Integer>(Arrays.asList(1,2,3,4,5,6,7,8,9,10,11,12,13,14));
```
For example like below :
14/6 = 2.333
Now, if I take the ceiling value, 3, only 5 sub-arraylists will be created, and if I take the floor value, i.e. 2, then 7 sub-arraylists will be created, **but I need 6**.
How can I achieve this?
Answer: | Something like this, might have some issues, but can be tweaked
```
public static void main(String[] args) {
List list = new ArrayList(Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
11, 12, 13, 14));
int number = 6;
int factor = list.size() / number;
int fromIndex = 0;
int toIndex = factor;
for (int i = 0; i < number; i++) {
if (i < number - 1) {
System.out.println(list.subList(fromIndex, toIndex));
fromIndex = toIndex;
toIndex = fromIndex + factor;
} else {
System.out.println(list.subList(fromIndex, list.size()));
}
}
}
```
Input - `1, 2, 3, 4, 5, 6, 7, 8, 9, 10,11, 12, 13, 14`
Output
```
[1, 2]
[3, 4]
[5, 6]
[7, 8]
[9, 10]
[11, 12, 13, 14]
```
Input - `1, 2, 3, 4, 5, 6, 7, 8, 9, 10,11, 12, 13`
Output
```
[1, 2]
[3, 4]
[5, 6]
[7, 8]
[9, 10]
[11, 12, 13]
``` | ```
List<Integer> list = new ArrayList<>(Arrays.asList(1,2,3,4,5,6,7,8,9,10,11,12,13,14));
int parts = 6;
for (int i = 0; i < parts; i++) {
    List<Integer> part = list.subList(i * list.size() / parts,
(i + 1) * list.size() / parts);
System.out.println(part);
}
```
outputs
```
[1, 2]
[3, 4]
[5, 6, 7]
[8, 9]
[10, 11]
[12, 13, 14]
``` |
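The index arithmetic above — slice i spans from `i * size / parts` to `(i + 1) * size / parts` with integer division — ports directly to other languages. A Python sketch of the same idea:

```python
def fair_partition(seq, parts):
    n = len(seq)
    # Integer division spreads the remainder over the slices, so the
    # sub-list sizes differ by at most one and every element is kept.
    return [seq[i * n // parts:(i + 1) * n // parts] for i in range(parts)]

fair_partition(list(range(1, 15)), 6)
# → [[1, 2], [3, 4], [5, 6, 7], [8, 9], [10, 11], [12, 13, 14]]
```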
Question: I am currently writing a script for our Gitlab CI that automatically uploads files to an NFSShare folder in the network. Since I want to organize the builds and we're using maven, I thought I could "easily" get the project name from the pom.xml.
Is there a way to get the properties available from within a pom.xml through a command-line tool or something? My only other way I could think of was "regex-grepping the value by hand" - not a very clean solution in my opinion.
I already found the properties plugin, but it only seems to ADD new properties through actual .properties files...
Any help would be much appreciated!
Answer: | I know the question is old but I spent some time looking for this.
To filter the output you may use the flags "-q -DforceStdout", where "-q" suppresses Maven's normal log output and "-DforceStdout" forces the plugin to print its result to stdout. E.g.:
```
BUILD_VERSION=$(mvn help:evaluate -Dexpression=project.version -q -DforceStdout)
echo $BUILD_VERSION
```
will result in printing version of project from POM.
The second important problem I had was accessing "properties", which is explained in Nick Holt's comment. To access properties, you just reference them directly:
```
<project ...>
<version>123</version>
(...)
<properties>
(...)
<docker.registry>docker.registry.com</docker.registry>
(...)
</properties>
(...)
</project>
```
WRONG
```
mvn help:evaluate -Dexpression=project.properties.docker.registry -q -DforceStdout
```
OK
```
mvn help:evaluate -Dexpression=docker.registry -q -DforceStdout
``` | If you know the name of the property you want, you can get the value with:
```
mvn help:evaluate -Dexpression=[property-name] | findstr /R ^^[^^\[INFO\]]
```
For example:
```
mvn help:evaluate -Dexpression=basedir | findstr /R ^^[^^\[INFO\]]
```
Will output:
```
C:\Users\nick\Local\Projects\example
```
This obviously assumes you're building on a Windows box with the `findstr` removing all the other logging that Maven does when it runs. You'll be able to do something similar on Unix with a `grep`, but I leave that to you.
Question: I am currently writing a script for our Gitlab CI that automatically uploads files to an NFSShare folder in the network. Since I want to organize the builds and we're using maven, I thought I could "easily" get the project name from the pom.xml.
Is there a way to get the properties available from within a pom.xml through a command-line tool or something? My only other way I could think of was "regex-grepping the value by hand" - not a very clean solution in my opinion.
I already found the properties plugin, but it only seems to ADD new properties through actual .properties files...
Any help would be much appreciated!
Answer: | If you know the name of the property you want, you can get the value with:
```
mvn help:evaluate -Dexpression=[property-name] | findstr /R ^^[^^\[INFO\]]
```
For example:
```
mvn help:evaluate -Dexpression=basedir | findstr /R ^^[^^\[INFO\]]
```
Will output:
```
C:\Users\nick\Local\Projects\example
```
This obviously assumes you're building on a Windows box with the `findstr` removing all the other logging that Maven does when it runs. You'll be able to do something similar on Unix with a `grep`, but I leave that to you. | You may try the following option by `github.com/jkot`
<https://gist.github.com/jkot/8668441#echo-all-available-maven-properties>
```xml
<!-- pom.xml -->
...
<build>
<plugins>
...
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-antrun-plugin</artifactId>
<version>1.7</version>
<executions>
<execution>
<phase>validate</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<tasks>
<echoproperties />
</tasks>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
...
```
Run `mvn validate` and you'll get all the properties.
Question: I am currently writing a script for our Gitlab CI that automatically uploads files to an NFSShare folder in the network. Since I want to organize the builds and we're using maven, I thought I could "easily" get the project name from the pom.xml.
Is there a way to get the properties available from within a pom.xml through a command-line tool or something? My only other way I could think of was "regex-grepping the value by hand" - not a very clean solution in my opinion.
I already found the properties plugin, but it only seems to ADD new properties through actual .properties files...
Any help would be much appreciated!
Answer: | I know the question is old but I spent some time looking for this.
To filter the output you may use the flags "-q -DforceStdout", where "-q" suppresses Maven's normal log output and "-DforceStdout" forces the plugin to print its result to stdout. E.g.:
```
BUILD_VERSION=$(mvn help:evaluate -Dexpression=project.version -q -DforceStdout)
echo $BUILD_VERSION
```
will result in printing version of project from POM.
The second important problem I had was accessing "properties", which is explained in Nick Holt's comment. To access properties, you just reference them directly:
```
<project ...>
<version>123</version>
(...)
<properties>
(...)
<docker.registry>docker.registry.com</docker.registry>
(...)
</properties>
(...)
</project>
```
WRONG
```
mvn help:evaluate -Dexpression=project.properties.docker.registry -q -DforceStdout
```
OK
```
mvn help:evaluate -Dexpression=docker.registry -q -DforceStdout
``` | You may try the following option by `github.com/jkot`
<https://gist.github.com/jkot/8668441#echo-all-available-maven-properties>
```xml
<!-- pom.xml -->
...
<build>
<plugins>
...
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-antrun-plugin</artifactId>
<version>1.7</version>
<executions>
<execution>
<phase>validate</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<tasks>
<echoproperties />
</tasks>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
...
```
Run `mvn validate` and you'll get all the properties.
Question: ```
public class Excel {
public static void main(String[] args) throws IOException, FileNotFoundException {
try {
InputStream input = new BufferedInputStream(new FileInputStream("D:/one"));
POIFSFileSystem fs = new POIFSFileSystem(input);
HSSFWorkbook wb = new HSSFWorkbook(fs);
HSSFSheet sheet = wb.getSheetAt(0);
Iterator rows = sheet.rowIterator();
while (rows.hasNext()) {
HSSFRow next = (HSSFRow) rows.next();
System.out.println("\n");
Iterator cells = next.cellIterator();
while (cells.hasNext()) {
HSSFCell next2 = (HSSFCell) cells.next();
if (HSSFCell.CELL_TYPE_NUMERIC == next2.getCellType()) {
System.out.println(next2.getNumericCellValue() + "");
} else if (HSSFCell.CELL_TYPE_STRING == next2.getCellType()) {
System.out.println(next2.getStringCellValue());
} else if (HSSFCell.CELL_TYPE_BOOLEAN == next2.getCellType()) {
System.out.println(next2.getBooleanCellValue() + "");
} else if (HSSFCell.CELL_TYPE_BLANK == next2.getCellType()) {
System.out.println("BLANK ");
} else {
System.out.println("unknown cell type");
}
}
}
} catch (FileNotFoundException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
```
Answer: | First of all, you can't do this:
```
int* x;
*x = 3;
```
because `x` doesn't point to a valid `int`. De-referencing it is *undefined behaviour*.
You could do this:
```
int y = 3;
int* x = &y;
```
Then you can pass `*x` to the function.
```
increaseRef(*x);
``` | After the edit, I see your point better.
In the second case, you are *referencing the reference itself*.
In other words, you are giving an `&(int*)` to the function instead of an `&(int)`.
Otherwise:
You are not allocating any memory for your pointer.
You need to call malloc() (or use new) before you can dereference the pointer.
Try
```
int *x=new int;
*x=3;
increaseRef(*x);
delete x; //and don't forget to delete your pointers, or you will leak memory.
return 0;
```
You can also pass the allocation to C++ by defining a local variable, as @juanchopanza stated. That way, memory will be automatically freed on return (and the variable will be created on the stack), the downside being that after the function defining the variable returns, your pointer will be invalid, and if you dereference it outside the function, you will get a segfault.
Question: I read a lot of technical documentation, especially in the computer programming space. Today I was reading the following paragraph:
>
> Any type that implements a Read (or Write) method with this signature is said to implement io.Reader (or io.Writer). For the purposes of this discussion, that means that a variable of type io.Reader can hold any value whose type has a Read method:
>
>
>
[http://golang.org/doc/articles/laws\_of\_reflection.html]
This paragraph could be re-written like this:
>
> Any type that implements a Read method with this signature is said to implement io.Reader. *Also, any type that implements a Write method with this signature is said to implement io.Writer.* For the purposes of this discussion, that means that a variable of type io.Reader can hold any value whose type has a Read method:
>
>
>
I'm guessing the first paragraph was used instead of something like the second because the second (my) example is longer.
This all got me thinking, "It'd be great if there were a way to write this paragraph that is clear and concise". Something like:
>
> Any type that implements a [Read|Write] method with this signature is said to implement io.[Reader|Writer]. For the purposes of this discussion, that means that a variable of type io.[Reader|Writer] can hold any value whose type has a [Read|Write] method:
>
>
>
In my example above, one could imagine the square brackets allowing for a choice of words with the options separated by a pipe.
Before going down the road of thinking more about what a language like this may look like, I'm wondering if this has already been done? I.e. is there a language or writing style that addresses these concerns? ("These concerns" being how to be clear and concise in technical writing.)
One specific issue I'd like to address is having a construct for xor.
Answer: | *Of course* there is a language which addresses your concerns! In technical writing, use *regular expressions* to identify sets of related sentences that you want to express clearly and concisely. For example, the regular expression
>
> Any type that implements a (**Read**|**Write**) method with this signature is said to implement **io**\**.**\g-1\. For the purposes of this discussion, that means that a variable of type **io**\**.**\g-1 can hold any value whose type has a \g-1 method\.
>
>
>
clearly and concisely identifies the following two sentences:
>
> 1. Any type that implements a **Read** method with this signature is said to implement **io.Read**. For the purposes of this discussion, that means that a variable of type **io.Read** can hold any value whose type has a **Read** method.
> 2. Any type that implements a **Write** method with this signature is said to implement **io.Write**. For the purposes of this discussion, that means that a variable of type **io.Write** can hold any value whose type has a **Write** method.
>
>
>
Or, even *more* clearly and concisely:
>
> Any type that (implement)s (a (**Read**|**Write**) method) with this signature is said to \g-3 **io\.**\g-1\. For the purposes of this discussion, that means that a variable of type **io\.**\g-1 can hold any value whose type has \g-2\.
>
>
>
References
----------
“[Regular expression](http://en.wikipedia.org/wiki/Regular_expression)”, *Wikipedia*
“[Perl regular expressions](http://perldoc.perl.org/perlre.html)”, *perldoc.perl.org*
“[Source of the famous ‘Now you have two problems’ quote](http://regex.info/blog/2006-09-15/247)”, *Jeffrey Friedl’s Blog*
:-) | The only answer for this is 'yes' but that doesn't tell you what you want.
One way to get this is by osmosis: read lots of technical things and try to imitate their style. This is not particularly translatable to others, but is how many people learn how to do it. This is not restricted to technical writing but can work for any style.
Another method, which works specifically for technical language (academic, legal, medical, engineering, building, technical instructions), is to emphasize strict meanings of words and strict patterns of sentences, and to follow stipulated (explicitly, authoritatively required) rules.
But what you are asking for, I think, is actual explicit guidelines on how to write technically. There are numerous guides (books, online advice) on how to do this well.
Question: I read a lot of technical documentation, especially in the computer programming space. Today I was reading the following paragraph:
>
> Any type that implements a Read (or Write) method with this signature is said to implement io.Reader (or io.Writer). For the purposes of this discussion, that means that a variable of type io.Reader can hold any value whose type has a Read method:
>
>
>
[http://golang.org/doc/articles/laws\_of\_reflection.html]
This paragraph could be re-written like this:
>
> Any type that implements a Read method with this signature is said to implement io.Reader. *Also, any type that implements a Write method with this signature is said to implement io.Writer.* For the purposes of this discussion, that means that a variable of type io.Reader can hold any value whose type has a Read method:
>
>
>
I'm guessing the first paragraph was used instead of something like the second because the second (my) example is longer.
This all got me thinking, "It'd be great if there were a way to write this paragraph that is clear and concise". Something like:
>
> Any type that implements a [Read|Write] method with this signature is said to implement io.[Reader|Writer]. For the purposes of this discussion, that means that a variable of type io.[Reader|Writer] can hold any value whose type has a [Read|Write] method:
>
>
>
In my example above, one could imagine the square brackets allowing for a choice of words with the options separated by a pipe.
Before going down the road of thinking more about what a language like this may look like, I'm wondering if this has already been done? I.e. is there a language or writing style that addresses these concerns? ("These concerns" being how to be clear and concise in technical writing.)
One specific issue I'd like to address is having a construct for xor.
Answer: | *Of course* there is a language which addresses your concerns! In technical writing, use *regular expressions* to identify sets of related sentences that you want to express clearly and concisely. For example, the regular expression
>
> Any type that implements a (**Read**|**Write**) method with this signature is said to implement **io**\**.**\g-1\. For the purposes of this discussion, that means that a variable of type **io**\**.**\g-1 can hold any value whose type has a \g-1 method\.
>
>
>
clearly and concisely identifies the following two sentences:
>
> 1. Any type that implements a **Read** method with this signature is said to implement **io.Read**. For the purposes of this discussion, that means that a variable of type **io.Read** can hold any value whose type has a **Read** method.
> 2. Any type that implements a **Write** method with this signature is said to implement **io.Write**. For the purposes of this discussion, that means that a variable of type **io.Write** can hold any value whose type has a **Write** method.
>
>
>
Or, even *more* clearly and concisely:
>
> Any type that (implement)s (a (**Read**|**Write**) method) with this signature is said to \g-3 **io\.**\g-1\. For the purposes of this discussion, that means that a variable of type **io\.**\g-1 can hold any value whose type has \g-2\.
>
>
>
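(As a hedged aside not in the original answer: the alternation-plus-backreference trick can be checked mechanically. Python's `re` engine lacks the relative backreference `\g-1` used above, so this sketch uses the numbered backreference `\1` instead.)

```python
import re

# One pattern with alternation and a backreference covers both the
# "Read" sentence and the "Write" sentence.
pattern = re.compile(
    r"Any type that implements a (Read|Write) method with this signature "
    r"is said to implement io\.\1\."
)

for verb in ("Read", "Write"):
    sentence = (
        f"Any type that implements a {verb} method with this signature "
        f"is said to implement io.{verb}."
    )
    assert pattern.fullmatch(sentence) is not None
print("both sentences match")
```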
References
----------
“[Regular expression](http://en.wikipedia.org/wiki/Regular_expression)”, *Wikipedia*
“[Perl regular expressions](http://perldoc.perl.org/perlre.html)”, *perldoc.perl.org*
“[Source of the famous ‘Now you have two problems’ quote](http://regex.info/blog/2006-09-15/247)”, *Jeffrey Friedl’s Blog*
:-) | In addition to brevity, which you mentioned as a reason for preferring the first version of your example to the second, there is another advantage, namely that the first version makes it immediately obvious that the statements about "Read" and about "Write" are exactly parallel. In the second version, one can see the exact parallelism by comparing the first two statements word for word (and "Also" also helps), but after doing the comparison I might have a nagging feeling that I overlooked some subtle difference between the two. That nagging feeling would be worse in the case of longer blocks of text. So I would be inclined to write your first example, perhaps with "respectively" in place of "or". Alternatively, I might write the whole story about "read" and then write "The same goes for 'Write' and 'Writer' in place of 'Read' and 'Reader'." The main point is that I would want to avoid repeating large blocks of text with only very minor changes, partly because of the resulting excess length but mainly because it's better for the reader to be told explicitly what is being changed and what is unchanged. |
Question: I read a lot of technical documentation, especially in the computer programming space. Today I was reading the following paragraph:
>
> Any type that implements a Read (or Write) method with this signature is said to implement io.Reader (or io.Writer). For the purposes of this discussion, that means that a variable of type io.Reader can hold any value whose type has a Read method:
>
>
>
<http://golang.org/doc/articles/laws_of_reflection.html>
This paragraph could be re-written like this:
>
> Any type that implements a Read method with this signature is said to implement io.Reader. *Also, any type that implements a Write method with this signature is said to implement io.Writer.* For the purposes of this discussion, that means that a variable of type io.Reader can hold any value whose type has a Read method:
>
>
>
I'm guessing the first paragraph was used instead of something like the second because the second (my) example is longer.
This all got me thinking, "It'd be great if there were a way to write this paragraph that is clear and concise". Something like:
>
> Any type that implements a [Read|Write] method with this signature is said to implement io.[Reader|Writer]. For the purposes of this discussion, that means that a variable of type io.[Reader|Writer] can hold any value whose type has a [Read|Write] method:
>
>
>
In my example above, one could imagine the square brackets allowing for a choice of words with the options separated by a pipe.
Before going down the road of thinking more about what a language like this may look like, I'm wondering if this has already been done? I.e. is there a language or writing style that addresses these concerns? ("These concerns" being how to be clear and concise in technical writing.)
One specific issue I'd like to address is having a construct for xor.
Answer: | *Of course* there is a language which addresses your concerns! In technical writing, use *regular expressions* to identify sets of related sentences that you want to express clearly and concisely. For example, the regular expression
>
> Any type that implements a (**Read**|**Write**) method with this signature is said to implement **io**\**.**\g-1\. For the purposes of this discussion, that means that a variable of type **io**\**.**\g-1 can hold any value whose type has a \g-1 method\.
>
>
>
clearly and concisely identifies the following two sentences:
>
> 1. Any type that implements a **Read** method with this signature is said to implement **io.Read**. For the purposes of this discussion, that means that a variable of type **io.Read** can hold any value whose type has a **Read** method.
> 2. Any type that implements a **Write** method with this signature is said to implement **io.Write**. For the purposes of this discussion, that means that a variable of type **io.Write** can hold any value whose type has a **Write** method.
>
>
>
Or, even *more* clearly and concisely:
>
> Any type that (implement)s (a (**Read**|**Write**) method) with this signature is said to \g-3 **io\.**\g-1\. For the purposes of this discussion, that means that a variable of type **io\.**\g-1 can hold any value whose type has \g-2\.
>
>
>
References
----------
“[Regular expression](http://en.wikipedia.org/wiki/Regular_expression)”, *Wikipedia*
“[Perl regular expressions](http://perldoc.perl.org/perlre.html)”, *perldoc.perl.org*
“[Source of the famous ‘Now you have two problems’ quote](http://regex.info/blog/2006-09-15/247)”, *Jeffrey Friedl’s Blog*
:-) | I found an interesting "language" (discussed on Slate's Lexicon Valley podcast) called loglan (<http://en.wikipedia.org/wiki/Loglan>). This may be what I was looking for. |
Question: I want to create a two dimensional array. **I am able to compile but not able to run**
```
public class Arraytest1 {
public static void main(String[] args) {
int i, j, k = 0;
int test[][] = new int[4][5];
for (i = 0; i < 4; i++) {
for (j = 0; j < 5; j++) {
test[i][j] = k;
k++;
}
}
for (i = 0; i < 4; i++) {
for (j = 0; j < 5; k++)
System.out.print(test[i][j] + " ");
System.out.println();
}
}
}
```
Answer: | You have an endless loop: `for(j=0;j<5;k++)`, you have to write `for(j=0;j<5;j++)`
You increment `k` instead of `j` | You have an endless loop. You are incrementing `k` instead of `j`:
```
for(j=0;j<5;k++)
```
You should change it both times to
```
for(j=0;j<5;j++)
``` |
Question: I want to create a two dimensional array. **I am able to compile but not able to run**
```
public class Arraytest1 {
public static void main(String[] args) {
int i, j, k = 0;
int test[][] = new int[4][5];
for (i = 0; i < 4; i++) {
for (j = 0; j < 5; j++) {
test[i][j] = k;
k++;
}
}
for (i = 0; i < 4; i++) {
for (j = 0; j < 5; k++)
System.out.print(test[i][j] + " ");
System.out.println();
}
}
}
```
Answer: | You have an endless loop: `for(j=0;j<5;k++)`, you have to write `for(j=0;j<5;j++)`
You increment `k` instead of `j` | Here... this should work. Just change your sub-loops making it `j++` instead of `k++` both top and bottom
```
public static void main(String[] args) {
int i, j, k = 0;
int test[][] = new int[4][5];
for (i = 0; i < 4; i++) {
for (j = 0; j < 5; j++) {
test[i][j] = k;
k++;
}
}
for (i = 0; i < 4; i++) {
for (j = 0; j < 5; j++)
System.out.print(test[i][j] + " ");
System.out.println();
}
}
``` |
Question: I want to create a two dimensional array. **I am able to compile but not able to run**
```
public class Arraytest1 {
public static void main(String[] args) {
int i, j, k = 0;
int test[][] = new int[4][5];
for (i = 0; i < 4; i++) {
for (j = 0; j < 5; j++) {
test[i][j] = k;
k++;
}
}
for (i = 0; i < 4; i++) {
for (j = 0; j < 5; k++)
System.out.print(test[i][j] + " ");
System.out.println();
}
}
}
```
Answer: | You have an endless loop: `for(j=0;j<5;k++)`, you have to write `for(j=0;j<5;j++)`
You increment `k` instead of `j` | ```
public class Arraytest1 {
public static void main(String[] args) {
int i, j, k = 0;
int test[][] = new int[4][5];
for (i = 0; i < 4; i++) {
for (j = 0; j < 5; j++) {
test[i][j] = k;
k++;
}
}
for (i = 0; i < 4; i++) {
for (j = 0; j < 5; j++) {
System.out.print(test[i][j] + " ");
System.out.println();
}
}
}
}
```
This way you can resolve the problem.
Question: I want to create a two dimensional array. **I am able to compile but not able to run**
```
public class Arraytest1 {
public static void main(String[] args) {
int i, j, k = 0;
int test[][] = new int[4][5];
for (i = 0; i < 4; i++) {
for (j = 0; j < 5; j++) {
test[i][j] = k;
k++;
}
}
for (i = 0; i < 4; i++) {
for (j = 0; j < 5; k++)
System.out.print(test[i][j] + " ");
System.out.println();
}
}
}
```
Answer: | You have an endless loop: `for(j=0;j<5;k++)`, you have to write `for(j=0;j<5;j++)`
You increment `k` instead of `j` | I think you've mixed up the k and j variables in the second for-loop "block". When I alter it to:
```
...
for (i = 0; i < 4; i++) {
for (j = 0; j < 5; j++)
System.out.print(test[i][j] + " ");
System.out.println();
}
...
```
I get the following printed to my console:
```
0 1 2 3 4
5 6 7 8 9
10 11 12 13 14
15 16 17 18 19
```
Is it what you wanted? |
Question: I want to create a two dimensional array. **I am able to compile but not able to run**
```
public class Arraytest1 {
public static void main(String[] args) {
int i, j, k = 0;
int test[][] = new int[4][5];
for (i = 0; i < 4; i++) {
for (j = 0; j < 5; j++) {
test[i][j] = k;
k++;
}
}
for (i = 0; i < 4; i++) {
for (j = 0; j < 5; k++)
System.out.print(test[i][j] + " ");
System.out.println();
}
}
}
```
Answer: | You have an endless loop. You are incrementing `k` instead of `j`:
```
for(j=0;j<5;k++)
```
You should change it both times to
```
for(j=0;j<5;j++)
``` | ```
public class Arraytest1 {
public static void main(String[] args) {
int i, j, k = 0;
int test[][] = new int[4][5];
for (i = 0; i < 4; i++) {
for (j = 0; j < 5; j++) {
test[i][j] = k;
k++;
}
}
for (i = 0; i < 4; i++) {
for (j = 0; j < 5; j++) {
System.out.print(test[i][j] + " ");
System.out.println();
}
}
}
}
```
This way you can resolve the problem.
Question: I want to create a two dimensional array. **I am able to compile but not able to run**
```
public class Arraytest1 {
public static void main(String[] args) {
int i, j, k = 0;
int test[][] = new int[4][5];
for (i = 0; i < 4; i++) {
for (j = 0; j < 5; j++) {
test[i][j] = k;
k++;
}
}
for (i = 0; i < 4; i++) {
for (j = 0; j < 5; k++)
System.out.print(test[i][j] + " ");
System.out.println();
}
}
}
```
Answer: | Here... this should work. Just change your sub-loops making it `j++` instead of `k++` both top and bottom
```
public static void main(String[] args) {
int i, j, k = 0;
int test[][] = new int[4][5];
for (i = 0; i < 4; i++) {
for (j = 0; j < 5; j++) {
test[i][j] = k;
k++;
}
}
for (i = 0; i < 4; i++) {
for (j = 0; j < 5; j++)
System.out.print(test[i][j] + " ");
System.out.println();
}
}
``` | ```
public class Arraytest1 {
public static void main(String[] args) {
int i, j, k = 0;
int test[][] = new int[4][5];
for (i = 0; i < 4; i++) {
for (j = 0; j < 5; j++) {
test[i][j] = k;
k++;
}
}
for (i = 0; i < 4; i++) {
for (j = 0; j < 5; j++) {
System.out.print(test[i][j] + " ");
System.out.println();
}
}
}
}
```
This way you can resolve the problem.
Question: I want to create a two dimensional array. **I am able to compile but not able to run**
```
public class Arraytest1 {
public static void main(String[] args) {
int i, j, k = 0;
int test[][] = new int[4][5];
for (i = 0; i < 4; i++) {
for (j = 0; j < 5; j++) {
test[i][j] = k;
k++;
}
}
for (i = 0; i < 4; i++) {
for (j = 0; j < 5; k++)
System.out.print(test[i][j] + " ");
System.out.println();
}
}
}
```
Answer: | I think you've mixed up the k and j variables in the second for-loop "block". When I alter it to:
```
...
for (i = 0; i < 4; i++) {
for (j = 0; j < 5; j++)
System.out.print(test[i][j] + " ");
System.out.println();
}
...
```
I get the following printed to my console:
```
0 1 2 3 4
5 6 7 8 9
10 11 12 13 14
15 16 17 18 19
```
Is it what you wanted? | ```
public class Arraytest1 {
public static void main(String[] args) {
int i, j, k = 0;
int test[][] = new int[4][5];
for (i = 0; i < 4; i++) {
for (j = 0; j < 5; j++) {
test[i][j] = k;
k++;
}
}
for (i = 0; i < 4; i++) {
for (j = 0; j < 5; j++) {
System.out.print(test[i][j] + " ");
System.out.println();
}
}
}
}
```
This way you can resolve the problem.
Question: I am currently using Tortoise SVN to source control a .NET Web Application. What would be the best way to bring our SQL Server stored procedures into Source Control? I am currently using VS 2010 as my development environment and connecting to an off-premise SQL Server 2008 R2 database using SQL Server Data Tools (SSDT).
What I have been doing in the past is saving the procs to a .sql file and keeping these files under source control. I'm sure there must be a more efficient way than this? Is there an extension I can install on VS2010, SSDT or even SQL Server on the production machine?
Answer: | There are tools out there, such as [this from Redgate](http://www.red-gate.com/products/sql-development/sql-source-control/), but I have always found it best to save them as SQL files, perhaps even in a Database Project (SSDT?) in your solution.
Along with this, I suggest the following guidelines:
* Always treat the SVN version as the "current" / "latest"
* Ensure that every script you run has an appropriate "`if exists then drop`" at the start
* Remember to script your permissions, if any
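For example, a scripted procedure following the "`if exists then drop`" guideline might look like this (the object names and role are hypothetical, purely for illustration):

```sql
-- Hypothetical example of the "if exists then drop" pattern (SQL Server 2008):
IF OBJECT_ID('dbo.GetCustomerOrders', 'P') IS NOT NULL
    DROP PROCEDURE dbo.GetCustomerOrders;
GO

CREATE PROCEDURE dbo.GetCustomerOrders
    @CustomerId INT
AS
BEGIN
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId;
END
GO

-- Don't forget to script the permissions as well:
GRANT EXECUTE ON dbo.GetCustomerOrders TO MyAppRole;
GO
```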
You can initially create these SQL files by scripting directly from SSMS, and you can set SSMS to script all your "`drop`" and "`create`" as well as your permissions. | I have tried both RedGate and Visual Studio's database project, and I prefer storing the database definition in the database project.
As soon as the database becomes part of the solution, you can use your preferred source control provider. Most have excellent Visual Studio integration.
With the SSDT tools you have the 'latest version' of the database definition, allowing you to easily make schema comparisons and generate schema upgrade scripts.
That said, the schema is usually only a part of the equation. In real life it turns out that databases already have lots of data. And my users tend to get rather disappointed when they lose it.
So as soon as I rolled out v1.0 the need arises to maintain upgrade scripts. Sometimes these just contain schema changes, but many times I need to create defaults based on the content of some other table, need to release a particular constraint until I have seeded the data, etc. Usually simply upgrading the schema does not quite cut it. My preference is to have these upgrade scripts in a separate folder in the database project too. These would usually look like 'upgrade from v1.0 to v1.1'.
My databases always have a reference table that tells me the current version number, so I can block incompatible upgrades. The first statement in my upgrade scripts checks the current version and bails out if it's different from what's expected.
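A version guard of that kind could look something like the following sketch (the table, column, and version values are hypothetical, not from the original answer):

```sql
-- Hypothetical guard at the top of 'upgrade from v1.0 to v1.1.sql':
DECLARE @current NVARCHAR(10);
SELECT @current = VersionNumber FROM dbo.SchemaVersion;

IF @current <> N'1.0'
BEGIN
    RAISERROR('Expected schema version 1.0 but found %s - aborting.', 16, 1, @current);
    SET NOEXEC ON;   -- stop executing the following batches
END
GO

-- ... actual upgrade statements go here ...

UPDATE dbo.SchemaVersion SET VersionNumber = N'1.1';
SET NOEXEC OFF;
GO
```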
Another benefit of the database projects is being able to deploy different sets of data based on the same schema. I have different datasets for development, the QA team, user acceptance tests, and automated integration tests. Since a database project can have only 1 post-deploy script, the trick here is to make a new database project that references the 'master' project and to make the custom dataset part of the post-deployment process of that project.
These were my 2 cents. Whatever process you come up with, above all, it must fit you and your team and hopefully support you with most of the common tasks.
Question: I am currently using Tortoise SVN to source control a .NET Web Application. What would be the best way to bring our SQL Server stored procedures into Source Control? I am currently using VS 2010 as my development environment and connecting to an off-premise SQL Server 2008 R2 database using SQL Server Data Tools (SSDT).
What I have been doing in the past is saving the procs to a .sql file and keeping these files under source control. I'm sure there must be a more efficient way than this? Is there an extension I can install on VS2010, SSDT or even SQL Server on the production machine?
Answer: | Saving the SQL files in source control provides control over the SQL files only. It doesn't control the changes of the actual database objects, nor does it prevent simultaneous changes of the same database object by multiple users (and I guess you would like to have that under control, too).
What we use is a 3rd party tool ([ApexSQL Version](http://www.apexsql.com/sql_tools_version.aspx)); it integrates both with SSMS and VS, and you can choose whether to work with a database version of the object, or with a Source Control version. If you're editing a database version, it's automatically checked out only to you, so no one else can edit it (it doesn't merge changes from different users). Only when you check it in again can others modify it. And you can have your SC version different from the version of a live object (I use that when I leave for the day and plan to finish the edits and test it on the next) | My company has just developed this [new tool](http://servantt.com) (**free**) that helps you to easily **extract scripts** for SQL databases, can do **comparison**, can **launch WinMerge** for quickly comparing scripts to live database, and can also **synch differences** both updating the scripts or applying the changes to the database (except for tables, which would involve more complexity and more risks).
**Servantt is the WinMerge for comparing SQL Server Databases to Version-Controlled Scripts.**
It supports and encourages best-practices in software development:
* Keeping Database objects under version-control (\*)
* Removing access rights from developers on production environments
* DBA review of changes in procedures/views for performance bottlenecks and naming standards
* Naming objects using fully qualified identifiers and bracketed delimiters (it fixes the CREATE PROCEDURE/VIEW/FUNCTION/etc scripts)
(\*) Scripts are saved into a local folder that can be a working copy of Git, Subversion, TFS, Source Safe, or any other VCS.
Free Download: <http://servantt.com>
The professional version (which is still under development) will be a completely different beast - it's targeted at deployment automation (release management), for automating tasks such as updating IIS, updating Windows Services, etc. |
Question: I am currently using Tortoise SVN to source control a .NET Web Application. What would be the best way to bring our SQL Server stored procedures into Source Control? I am currently using VS 2010 as my development environment and connecting to an off-premise SQL Server 2008 R2 database using SQL Server Data Tools (SSDT).
What I have been doing in the past is saving the procs to a .sql file and keeping these files under source control. I'm sure there must be a more efficient way than this? Is there an extension I can install on VS2010, SSDT or even SQL Server on the production machine?
Answer: | There are tools out there, such as [this from Redgate](http://www.red-gate.com/products/sql-development/sql-source-control/), but I have always found it best to save them as SQL files, perhaps even in a Database Project (SSDT?) in your solution.
Along with this, I suggest the following guidelines:
* Always treat the SVN version as the "current" / "latest"
* Ensure that every script you run has an appropriate "`if exists then drop`" at the start
* Remember to script your permissions, if any
You can initially create these SQL files by scripting directly from SSMS, and you can set SSMS to script all your "`drop`" and "`create`" as well as your permissions. | Saving the SQL files in source control provides control over the SQL files only. It doesn't control the changes of the actual database objects, nor does it prevent simultaneous changes of the same database object by multiple users (and I guess you would like to have that under control, too).
What we use is a 3rd party tool ([ApexSQL Version](http://www.apexsql.com/sql_tools_version.aspx)); it integrates both with SSMS and VS, and you can choose whether to work with a database version of the object, or with a Source Control version. If you're editing a database version, it's automatically checked out only to you, so no one else can edit it (it doesn't merge changes from different users). Only when you check it in again can others modify it. And you can have your SC version different from the version of a live object (I use that when I leave for the day and plan to finish the edits and test it on the next)
Question: I am currently using Tortoise SVN to source control a .NET Web Application. What would be the best way to bring our SQL Server stored procedures into Source Control? I am currently using VS 2010 as my development environment and connecting to an off-premise SQL Server 2008 R2 database using SQL Server Data Tools (SSDT).
What I have been doing in the past is saving the procs to a .sql file and keeping these files under source control. I'm sure there must be a more efficient way than this? Is there an extension I can install on VS2010, SSDT or even SQL Server on the production machine?
Answer: | There are tools out there, such as [this from Redgate](http://www.red-gate.com/products/sql-development/sql-source-control/), but I have always found it best to save them as SQL files, perhaps even in a Database Project (SSDT?) in your solution.
Along with this, I suggest the following guidelines:
* Always treat the SVN version as the "current" / "latest"
* Ensure that every script you run has an appropriate "`if exists then drop`" at the start
* Remember to script your permissions, if any
You can initially create these SQL files by scripting directly from SSMS, and you can set SSMS to script all your "`drop`" and "`create`" as well as your permissions. | Use RedGate Source Control to hook it up to your source control.
<http://www.red-gate.com/products/sql-development/sql-source-control/>
It will hook your SSMS directly to your source control repository and even allow for checking in static data.
Works like a charm |
Question: I am currently using Tortoise SVN to source control a .NET Web Application. What would be the best way to bring our SQL Server stored procedures into Source Control? I am currently using VS 2010 as my development environment and connecting to an off-premise SQL Server 2008 R2 database using SQL Server Data Tools (SSDT).
What I have been doing in the past is saving the procs to a .sql file and keeping these files under source control. I'm sure there must be a more efficient way than this? Is there an extension I can install on VS2010, SSDT or even SQL Server on the production machine?
Answer: | I have tried both RedGate and Visual Studio's database project, and I prefer storing the database definition in the database project.
As soon as the database becomes part of the solution, you can use your preferred source control provider. Most have excellent Visual Studio integration.
With the SSDT tools you have the 'latest version' of the database definition, allowing you to easily make schema comparisons and generate schema upgrade scripts.
That said, the schema is usually only a part of the equation. In real life it turns out that databases already have lots of data. And my users tend to get rather disappointed when they lose it.
So as soon as I rolled out v1.0 the need arises to maintain upgrade scripts. Sometimes these just contain schema changes, but many times I need to create defaults based on the content of some other table, need to release a particular constraint until I have seeded the data, etc. Usually simply upgrading the schema does not quite cut it. My preference is to have these upgrade scripts in a separate folder in the database project too. These would usually look like 'upgrade from v1.0 to v1.1'.
My databases always have a reference table that tells me the current version number, so I can block incompatible upgrades. The first statement in my upgrade scripts checks the current version and bails out if it's different from what's expected.
Another benefit of the database projects is being able to deploy different sets of data based on the same schema. I have different datasets for development, the QA team, user acceptance tests, and automated integration tests. Since a database project can have only 1 post-deploy script, the trick here is to make a new database project that references the 'master' project and to make the custom dataset part of the post-deployment process of that project.
These were my 2 cents. Whatever process you come up with, above all, it must fit you and your team and hopefully support you with most of the common tasks. | My company has just developed this [new tool](http://servantt.com) (**free**) that helps you to easily **extract scripts** for SQL databases, can do **comparison**, can **launch WinMerge** for quickly comparing scripts to live database, and can also **synch differences** both updating the scripts or applying the changes to the database (except for tables, which would involve more complexity and more risks).
**Servantt is the WinMerge for comparing SQL Server Databases to Version-Controlled Scripts.**
It supports and encourages best-practices in software development:
* Keeping Database objects under version-control (\*)
* Removing access rights from developers on production environments
* DBA review of changes in procedures/views for performance bottlenecks and naming standards
* Naming objects using fully qualified identifiers and bracketed delimiters (it fixes the CREATE PROCEDURE/VIEW/FUNCTION/etc scripts)
(\*) Scripts are saved into a local folder that can be a working copy of Git, Subversion, TFS, Source Safe, or any other VCS.
Free Download: <http://servantt.com>
The professional version (which is still under development) will be a completely different beast - it's targeted at deployment automation (release management), for automating tasks such as updating IIS, updating Windows Services, etc. |
Question: I am currently using Tortoise SVN to source control a .NET Web Application. What would be the best way to bring our SQL Server stored procedures into Source Control? I am currently using VS 2010 as my development environment and connecting to an off-premise SQL Server 2008 R2 database using SQL Server Data Tools (SSDT).
What I have been doing in the past is saving the procs to a .sql file and keeping these files under source control. I'm sure there must be a more efficient way than this? Is there an extension I can install on VS2010, SSDT or even SQL Server on the production machine?
Answer: | Saving the SQL files in source control provides control over the SQL files only. It doesn't control the changes of the actual database objects, nor does it prevent simultaneous changes of the same database object by multiple users (and I guess you would like to have that under control, too).
What we use is a 3rd party tool ([ApexSQL Version](http://www.apexsql.com/sql_tools_version.aspx)); it integrates both with SSMS and VS, and you can choose whether to work with a database version of the object, or with a Source Control version. If you're editing a database version, it's automatically checked out only to you, so no one else can edit it (it doesn't merge changes from different users). Only when you check it in again can others modify it. And you can have your SC version different from the version of a live object (I use that when I leave for the day and plan to finish the edits and test it on the next) | I ended up writing a tool myself.
It's available for free download - <http://www.gitsql.net>
I hope it helps other people who want to achieve the same end goal.
Here is an article which describes how to source control SQL Server. <http://gitsql.net/documentation-04_SQL_Server_and_GIT>
I've tried to make it as easy as possible. (3 screens)
* Connect to SQL Server
* Select objects
* Choose folder to export to / import from
I also - accidentally - added the feature of being able to selectively choose individual objects to import or export, which makes it much easier whilst developing.
I would usually make a change to a stored procedure and a table, and then export those two objects to a GIT directory.
Then I use Source Tree to visually see the changes and then commit them into Bitbucket if I'm happy.
Question: I am currently using Tortoise SVN to source control a .NET Web Application. What would be the best way to bring our SQL Server stored procedures into Source Control? I am currently using VS 2010 as my development environment and connecting to an off-premise SQL Server 2008 R2 database using SQL Server Data Tools (SSDT).
What I have been doing in the past is saving the procs to a .sql file and keeping these files under source control. I'm sure there must be a more efficient way than this? Is there an extension I can install on VS2010, SSDT or even SQL Server on the production machine?
Answer: | There are tools out there, such as [this from Redgate](http://www.red-gate.com/products/sql-development/sql-source-control/), but I have always found it best to save them as SQL files, perhaps even in a Database Project (SSDT?) in your solution.
Along with this, I suggest the following guidelines:
* Always treat the SVN version as the "current" / "latest"
* Ensure that every script you run has an appropriate "`if exists then drop`" at the start
* Remember to script your permissions, if any
You can initially create these SQL files by scripting directly from SSMS, and you can set SSMS to script all your "`drop`" and "`create`" as well as your permissions. | My company has just developed this [new tool](http://servantt.com) (**free**) that helps you to easily **extract scripts** for SQL databases, can do **comparison**, can **launch WinMerge** for quickly comparing scripts to live database, and can also **synch differences** both updating the scripts or applying the changes to the database (except for tables, which would involve more complexity and more risks).
**Servantt is the WinMerge for comparing SQL Server Databases to Version-Controlled Scripts.**
It supports and encourages best-practices in software development:
* Keeping Database objects under version-control (\*)
* Removing access rights from developers on production environments
* DBA review of changes in procedures/views for performance bottlenecks and naming standards
* Naming objects using fully qualified identifiers and bracketed delimiters (it fixes the CREATE PROCEDURE/VIEW/FUNCTION/etc scripts)
(\*) Scripts are saved into a local folder that can be a working copy of Git, Subversion, TFS, Source Safe, or any other VCS.
Free Download: <http://servantt.com>
The professional version (which is still under development) will be a completely different beast - it's targeted at deployment automation (release management), for automating tasks such as updating IIS, updating Windows Services, etc. |
Question: I am currently using Tortoise SVN to source control a .NET Web Application. What would be the best way to bring our SQL Server stored procedures into Source Control? I am currently using VS 2010 as my development environment and connecting to an off-premise SQL Server 2008 R2 database using SQL Server Data Tools (SSDT).
What I have been doing in the past is saving the procs to a .sql file and keeping these files under source control. I'm sure there must be a more efficient way than this? Is there an extension I can install on VS2010, SSDT or even SQL Server on the production machine?
Answer: | There are tools out there, such as [this from Redgate](http://www.red-gate.com/products/sql-development/sql-source-control/), but I have always found that best is to save as SQL files, perhaps even in a Database Project (SSDT?) in your solution.
Along with this, I suggest the following guidelines:
* Always assume the SVN version as the "current" / "latest"
* Ensure that every script you run has an appropriate "`if exists then drop`" at the start
* Remember to script your permissions, if any
You can initially create these SQL files by scripting directly from SSMS, and you can set SSMS to script all your "`drop`" and "`create`" as well as your permissions. | Try [Ankhsvn](http://ankhsvn.open.collab.net/), highly recommended and free.
From the homepage:
>
> AnkhSVN is a Subversion Source Control Provider for Microsoft Visual Studio 2005, 2008, 2010 **and 2012**.
>
>
> AnkhSVN provides Apache™ Subversion® source code management support to all project types supported by Visual Studio and allows you to perform the most common version control operations directly from inside the Microsoft Visual Studio IDE.
>
>
> The Pending Changes dashboard gives you a unique insight in your development process and provides easy access to the source code and issue management features. The deep source code control (SCC) integration allows you to focus on developing, while AnkhSVN keeps track of all your changes and provides you the tools to effectively handle your specific needs.
>
>
> |
Question: I am currently using Tortoise SVN to source control a .NET Web Application. What would be the best way to bring our SQL Server stored procedures into Source Control? I am currently using VS 2010 as my development environment and connecting to an off-premise SQL Server 2008 R2 database using SQL Server Data Tools (SSDT).
What I have been doing in the past is saving the procs to a .sql file and keeping these files under source control. I'm sure there must be a more efficient way than this? Is there an extension I can install on VS2010, SSDT or even SQL Server on the production machine?
Answer: | I have tried both RedGate and Visual Studio's database project and I prefer storing the database definition in the database project.
As soon as the database becomes part of the solution, you can use your preferred source control provider. Most have excellent Visual Studio integration.
With the SSDT tools you have the 'latest version' of the database definition, allowing you to easily make schema comparisons and generate schema upgrade scripts.
That said, the schema is usually only part of the equation. In real life it turns out that databases already have lots of data, and my users tend to get rather disappointed when they lose it.
So as soon as I rolled out v1.0, the need arose to maintain upgrade scripts. Sometimes these just contain schema changes, but many times I need to create defaults based on the content of some other table, release a particular constraint until I have seeded the data, etc. Usually simply upgrading the schema does not quite cut it. My preference is to keep these upgrade scripts in a separate folder in the database project too. These would usually look like 'upgrade from v1.0 to v1.1'.
My databases always have a reference table that tells me the current version number, so I can block incompatible upgrades. The first statement in my upgrade scripts checks the current version and bails out if it's different from what's expected.
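As a sketch of that first statement (the `SchemaVersion` table and its column are illustrative names, not a standard):

```sql
-- Hypothetical version guard at the top of an upgrade script
DECLARE @version nvarchar(16);
SELECT @version = VersionNumber FROM dbo.SchemaVersion;
IF @version <> N'1.0'
BEGIN
    RAISERROR(N'Expected schema version 1.0 but found %s; aborting upgrade.', 16, 1, @version);
    RETURN;
END
```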
Another benefit of the database projects is being able to deploy different sets of data based on the same schema. I have different datasets for development, the QA team, user acceptance testing and automated integration tests. Since a database project can have only 1 post-deploy script, the trick here is to make a new database project that references the 'master' project and to make the custom dataset part of the post-deployment process of that project.
These were my 2 cents. Whatever process you come up with, above all, it must fit you and your team and hopefully support you with most of the common tasks. | I ended up writing a tool myself.
It's available for free download - <http://www.gitsql.net>
I hope it helps other people who want to achieve the same end goal.
Here is an article which describes how to source control SQL Server. <http://gitsql.net/documentation-04_SQL_Server_and_GIT>
I've tried to make it as easy as possible. (3 screens)
* Connect to SQL Server
* Select objects
* Choose folder to export to / import from
I also - accidentally - added the feature of being able to selectively choose individual objects to import or export, which makes it much easier whilst developing.
I would usually make a change to a stored procedure and a table, and then export those two objects to a GIT directory.
Then I use SourceTree to visually review the changes and then commit them to Bitbucket if I'm happy.
Question: I am currently using Tortoise SVN to source control a .NET Web Application. What would be the best way to bring our SQL Server stored procedures into Source Control? I am currently using VS 2010 as my development environment and connecting to an off-premise SQL Server 2008 R2 database using SQL Server Data Tools (SSDT).
What I have been doing in the past is saving the procs to a .sql file and keeping these files under source control. I'm sure there must be a more efficient way than this? Is there an extension I can install on VS2010, SSDT or even SQL Server on the production machine?
Answer: | Try [Ankhsvn](http://ankhsvn.open.collab.net/), highly recommended and free.
From the homepage:
>
> AnkhSVN is a Subversion Source Control Provider for Microsoft Visual Studio 2005, 2008, 2010 **and 2012**.
>
>
> AnkhSVN provides Apache™ Subversion® source code management support to all project types supported by Visual Studio and allows you to perform the most common version control operations directly from inside the Microsoft Visual Studio IDE.
>
>
> The Pending Changes dashboard gives you a unique insight in your development process and provides easy access to the source code and issue management features. The deep source code control (SCC) integration allows you to focus on developing, while AnkhSVN keeps track of all your changes and provides you the tools to effectively handle your specific needs.
>
>
> | My company has just developed this [new tool](http://servantt.com) (**free**) that helps you to easily **extract scripts** for SQL databases, do **comparisons**, **launch WinMerge** for quickly comparing scripts to the live database, and also **sync differences**, either updating the scripts or applying the changes to the database (except for tables, which would involve more complexity and more risks).
**Servantt is the WinMerge for comparing SQL Server Databases to Version-Controlled Scripts.**
It supports and encourages best-practices in software development:
* Keeping Database objects under version-control (\*)
* Removing access rights from developers on production environments
* DBA review of changes in procedures/views for performance bottlenecks and naming standards
* Naming objects using fully qualified identifiers and bracketed delimiters (it fixes the CREATE PROCEDURE/VIEW/FUNCTION/etc scripts)
(\*) Scripts are saved into a local folder that can be a working copy of Git, Subversion, TFS, Source Safe, or any other VCS.
Free Download: <http://servantt.com>
The professional version (which is still under development) will be a completely different beast - it's targeted at deployment automation (release management), for automating tasks such as updating IIS, updating Windows Services, etc. |
Question: I have been trying to prove this for nearly an hour now:
>
> $$
> \tag{$\forall a,b \in \mathbb{R}$}| a + b | + |a-b| \ge|a| + |b|
> $$
>
>
>
I'm lost, could you guys give me a tip on where to start, or maybe show a good resource for beginners in proofs?
Thanks in advance.
Answer: | To prove
$$
| a + b | + |a-b| \ge|a| + |b|
$$
Square both sides. Since both sides are non-negative, this does not change the inequality. We have
$$
| a + b |^2 + |a-b|^2 + 2|a+b||a-b| \ge|a|^2 + |b|^2 + 2|a||b|
$$
$$
(|a|^2 + |b|^2 + 2|a||b|\cos\theta) + (|a|^2 + |b|^2 - 2|a||b|\cos\theta) + 2|a+b||a-b| \ge |a|^2 + |b|^2 + 2|a||b|
$$ where $\theta$ is the angle between $a$ and $b$ (for real numbers $\cos\theta = \pm 1$, so $2|a||b|\cos\theta = 2ab$)
$$
2|a|^2 + 2|b|^2 + 2|a+b||a-b| \ge|a|^2 + |b|^2 + 2|a||b|
$$
$$
|a|^2 + |b|^2 + 2|a+b||a-b| \ge 2|a||b|
$$
$$
|a|^2 + |b|^2 + 2|a+b||a-b| - 2|a||b| \ge 0
$$
$$
(|a|-|b|)^2 + 2|a+b||a-b|\ge 0
$$
So on the left-hand side we have two terms which are always greater than or equal to $0$, hence this inequality always holds
Equality holds when $|a| = |b|$, i.e. when $a = \pm b$
QED | Yet another solution:
\begin{eqnarray}
|a+b|+|a-b| &=& \max(a+b,-a-b)+\max(a-b,b-a) \\
&=& \max(2a, 2b, -2b, -2a) \\
&=& 2 \max(|a|,|b|) \\
&\ge& |a|+|b|
\end{eqnarray} |
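The final step, spelled out: the maximum dominates each of $|a|$ and $|b|$, so
$$2\max(|a|,|b|) = \max(|a|,|b|) + \max(|a|,|b|) \ge |a| + |b|.$$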
Question: I have been trying to prove this for nearly an hour now:
>
> $$
> \tag{$\forall a,b \in \mathbb{R}$}| a + b | + |a-b| \ge|a| + |b|
> $$
>
>
>
I'm lost, could you guys give me a tip on where to start, or maybe show a good resource for beginners in proofs?
Thanks in advance.
Answer: | Start with $|a|+|b|$ and rewrite $a$ and $b$ as $\frac{a+b}{2}+\frac{a-b}{2}$ and $\frac{a+b}{2}+\frac{b-a}{2}$ respectively. Use the triangle inequality. | Here's the way I see this geometrically in $\mathbb{C}$: let's say we have two complex numbers $a, b$ and consider the parallelogram formed by $0, a, a + b, b$. The midpoints of the diagonals coincide at the point $\frac{a + b}{2}$. These diagonals cut the parallelogram into four triangles, on each of which we can perform the triangle inequality. We get the following inequalities:
\begin{align\*}
|a - 0| &\le \left| a - \frac{a + b}{2} \right| + \left| \frac{a + b}{2} - 0 \right| \\
|(a + b) - a| &\le \left| (a + b) - \frac{a + b}{2} \right| + \left| \frac{a + b}{2} - a \right| \\
|b - (a + b)| &\le \left| b - \frac{a + b}{2} \right| + \left| \frac{a + b}{2} - (a + b) \right| \\
|0 - b| &\le \left| b - \frac{a + b}{2} \right| + \left| \frac{a + b}{2} - 0 \right|
\end{align\*}
Simplifying the above inequalities and summing them up yields the desired inequality. |
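Carrying out that last step: each right-hand side above simplifies to $\frac{|a+b|}{2} + \frac{|a-b|}{2}$, so summing the four inequalities gives
$$2|a| + 2|b| \le 4\left(\frac{|a+b|}{2} + \frac{|a-b|}{2}\right) = 2\left(|a+b| + |a-b|\right),$$
and dividing by $2$ yields the desired inequality.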
Question: I have been trying to prove this for nearly an hour now:
>
> $$
> \tag{$\forall a,b \in \mathbb{R}$}| a + b | + |a-b| \ge|a| + |b|
> $$
>
>
>
I'm lost, could you guys give me a tip on where to start, or maybe show a good resource for beginners in proofs?
Thanks in advance.
Answer: | Using the triangle inequality,
$$|a+b| + |a-b| \geqslant |(a+b) + (a - b)| = 2|a|$$
also as $|a-b| = |b-a|$,
$$|a+b| + |a-b| \geqslant |(a+b) + (b - a)| = 2|b|$$
Now add and conclude! | To prove
$$
| a + b | + |a-b| \ge|a| + |b|
$$
Square both sides. Since both sides are non-negative, this does not change the inequality. We have
$$
| a + b |^2 + |a-b|^2 + 2|a+b||a-b| \ge|a|^2 + |b|^2 + 2|a||b|
$$
$$
(|a|^2 + |b|^2 + 2|a||b|\cos\theta) + (|a|^2 + |b|^2 - 2|a||b|\cos\theta) + 2|a+b||a-b| \ge |a|^2 + |b|^2 + 2|a||b|
$$ where $\theta$ is the angle between $a$ and $b$ (for real numbers $\cos\theta = \pm 1$, so $2|a||b|\cos\theta = 2ab$)
$$
2|a|^2 + 2|b|^2 + 2|a+b||a-b| \ge|a|^2 + |b|^2 + 2|a||b|
$$
$$
|a|^2 + |b|^2 + 2|a+b||a-b| \ge 2|a||b|
$$
$$
|a|^2 + |b|^2 + 2|a+b||a-b| - 2|a||b| \ge 0
$$
$$
(|a|-|b|)^2 + 2|a+b||a-b|\ge 0
$$
So on the left-hand side we have two terms which are always greater than or equal to $0$, hence this inequality always holds
Equality holds when $|a| = |b|$, i.e. when $a = \pm b$
QED |
Question: I have been trying to prove this for nearly an hour now:
>
> $$
> \tag{$\forall a,b \in \mathbb{R}$}| a + b | + |a-b| \ge|a| + |b|
> $$
>
>
>
I'm lost, could you guys give me a tip on where to start, or maybe show a good resource for beginners in proofs?
Thanks in advance.
Answer: | Using the triangle inequality,
$$|a+b| + |a-b| \geqslant |(a+b) + (a - b)| = 2|a|$$
also as $|a-b| = |b-a|$,
$$|a+b| + |a-b| \geqslant |(a+b) + (b - a)| = 2|b|$$
Now add and conclude! | From triangular inequality we have
$$\left|u+v\right|+\left|u-v \right|\le |u|+|v|+|u|+|v|=2|u|+2|v|\\ \left|\frac{u+v}{2}\right|+\left|\frac{u-v}{2}\right|\le |u|+|v|\quad(\*)$$
set $a+b=u;\;a-b=v$
$$a=\frac{u+v}{2};\;b=\frac{u-v}{2}$$
$| a + b | + |a-b| \ge|a| + |b|$
$$\left|\frac{u+v}{2}+\frac{u-v}{2}\right|+\left|\frac{u+v}{2}-\frac{u-v}{2}\right|\ge \left|\frac{u+v}{2}\right|+\left|\frac{u-v}{2}\right|$$
$$|u|+|v|\ge \left|\frac{u+v}{2}\right|+\left|\frac{u-v}{2}\right|$$
which is true because of $(\*)$ |
Question: I have been trying to prove this for nearly an hour now:
>
> $$
> \tag{$\forall a,b \in \mathbb{R}$}| a + b | + |a-b| \ge|a| + |b|
> $$
>
>
>
I'm lost, could you guys give me a tip on where to start, or maybe show a good resource for beginners in proofs?
Thanks in advance.
Answer: | Without loss of generality, we may assume that $|a|\geq |b|$. Since the terms are all non-negative, by squaring both sides, we obtain the equivalent inequality
$$(a + b)^2 + (a-b)^2 +2(a^2-b^2)\ge a^2+b^2+2|a||b|$$
that is
$$3a^2-b^2\ge 2|a||b|\Leftrightarrow (3|a|+|b|)(|a|-|b|)\geq 0$$
which holds. Therefore the given inequality is always true. | From triangular inequality we have
$$\left|u+v\right|+\left|u-v \right|\le |u|+|v|+|u|+|v|=2|u|+2|v|\\ \left|\frac{u+v}{2}\right|+\left|\frac{u-v}{2}\right|\le |u|+|v|\quad(\*)$$
set $a+b=u;\;a-b=v$
$$a=\frac{u+v}{2};\;b=\frac{u-v}{2}$$
$| a + b | + |a-b| \ge|a| + |b|$
$$\left|\frac{u+v}{2}+\frac{u-v}{2}\right|+\left|\frac{u+v}{2}-\frac{u-v}{2}\right|\ge \left|\frac{u+v}{2}\right|+\left|\frac{u-v}{2}\right|$$
$$|u|+|v|\ge \left|\frac{u+v}{2}\right|+\left|\frac{u-v}{2}\right|$$
which is true because of $(\*)$ |
Question: I have been trying to prove this for nearly an hour now:
>
> $$
> \tag{$\forall a,b \in \mathbb{R}$}| a + b | + |a-b| \ge|a| + |b|
> $$
>
>
>
I'm lost, could you guys give me a tip on where to start, or maybe show a good resource for beginners in proofs?
Thanks in advance.
Answer: | Using the triangle inequality,
$$|a+b| + |a-b| \geqslant |(a+b) + (a - b)| = 2|a|$$
also as $|a-b| = |b-a|$,
$$|a+b| + |a-b| \geqslant |(a+b) + (b - a)| = 2|b|$$
Now add and conclude! | My suggestions would be the following.
1. First observe that it is symmetric in $a$ and $b$. Moreover, it's certainly true when $a = b$.
2. Note also that if we replace $b$ with $-b$, then the claim is unchanged. So we may assume $b \ge 0$. The same holds for $a$.
3. It also shows us that, without loss of generality, we can assume that $a > b$. If $a > b$, then this tells us something useful: $|a - b| = a - b$.
So we need only consider $a > b \ge 0$. I leave the rest of the argument to you (although there's not much left). Hopefully this helps show how one might approach these questions, not just give you an almost-solution to one particular question :) |
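For completeness, one way to finish under those reductions: with $a > b \ge 0$ we have $|a+b| = a+b$ and $|a-b| = a-b$, so
$$|a+b| + |a-b| = (a+b) + (a-b) = 2a \ge a + b = |a| + |b|.$$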
Question: I have been trying to prove this for nearly an hour now:
>
> $$
> \tag{$\forall a,b \in \mathbb{R}$}| a + b | + |a-b| \ge|a| + |b|
> $$
>
>
>
I'm lost, could you guys give me a tip on where to start, or maybe show a good resource for beginners in proofs?
Thanks in advance.
Answer: | My suggestions would be the following.
1. First observe that it is symmetric in $a$ and $b$. Moreover, it's certainly true when $a = b$.
2. Note also that if we replace $b$ with $-b$, then the claim is unchanged. So we may assume $b \ge 0$. The same holds for $a$.
3. It also shows us that, without loss of generality, we can assume that $a > b$. If $a > b$, then this tells us something useful: $|a - b| = a - b$.
So we need only consider $a > b \ge 0$. I leave the rest of the argument to you (although there's not much left). Hopefully this helps show how one might approach these questions, not just give you an almost-solution to one particular question :) | Start with $|a|+|b|$ and rewrite $a$ and $b$ as $\frac{a+b}{2}+\frac{a-b}{2}$ and $\frac{a+b}{2}+\frac{b-a}{2}$ respectively. Use the triangle inequality. |
Question: I have been trying to prove this for nearly an hour now:
>
> $$
> \tag{$\forall a,b \in \mathbb{R}$}| a + b | + |a-b| \ge|a| + |b|
> $$
>
>
>
I'm lost, could you guys give me a tip on where to start, or maybe show a good resource for beginners in proofs?
Thanks in advance.
Answer: | Without loss of generality, we may assume that $|a|\geq |b|$. Since the terms are all non-negative, by squaring both sides, we obtain the equivalent inequality
$$(a + b)^2 + (a-b)^2 +2(a^2-b^2)\ge a^2+b^2+2|a||b|$$
that is
$$3a^2-b^2\ge 2|a||b|\Leftrightarrow (3|a|+|b|)(|a|-|b|)\geq 0$$
which holds. Therefore the given inequality is always true. | **Very Simple Trick: We have that**
\begin{split} (|a|-|b|)^2 +2|a^2-b^2| \ge 0 &\Longleftrightarrow a^2+b^2 -2|a||b| + 2|a+b||a-b| \ge 0\\
&\Longleftrightarrow a^2+b^2 + 2|a+b||a-b| \ge 2|a||b|\\
&\Longleftrightarrow \color{red}{2a^2+2b^2} + 2|a+b||a-b| \ge \color{red}{a^2+b^2}+2|a||b|\\
&\Longleftrightarrow (|a+b|+|a-b|)^2 \ge (|a|+|b|)^2\\
&\Longleftrightarrow |a+b|+|a-b| \ge |a|+|b| \end{split}
Given that $$\color{red}{ (|a +b|+|a-b|)^2 = 2a^2+2b^2 + 2|a +b||a-b|}$$ |
Question: I have been trying to prove this for nearly an hour now:
>
> $$
> \tag{$\forall a,b \in \mathbb{R}$}| a + b | + |a-b| \ge|a| + |b|
> $$
>
>
>
I'm lost, could you guys give me a tip on where to start, or maybe show a good resource for beginners in proofs?
Thanks in advance.
Answer: | Start with $|a|+|b|$ and rewrite $a$ and $b$ as $\frac{a+b}{2}+\frac{a-b}{2}$ and $\frac{a+b}{2}+\frac{b-a}{2}$ respectively. Use the triangle inequality. | Without loss of generality, we may assume that $|a|\geq |b|$. Since the terms are all non-negative, by squaring both sides, we obtain the equivalent inequality
$$(a + b)^2 + (a-b)^2 +2(a^2-b^2)\ge a^2+b^2+2|a||b|$$
that is
$$3a^2-b^2\ge 2|a||b|\Leftrightarrow (3|a|+|b|)(|a|-|b|)\geq 0$$
which holds. Therefore the given inequality is always true. |
Question: I have been trying to prove this for nearly an hour now:
>
> $$
> \tag{$\forall a,b \in \mathbb{R}$}| a + b | + |a-b| \ge|a| + |b|
> $$
>
>
>
I'm lost, could you guys give me a tip on where to start, or maybe show a good resource for beginners in proofs?
Thanks in advance.
Answer: | Using the triangle inequality,
$$|a+b| + |a-b| \geqslant |(a+b) + (a - b)| = 2|a|$$
also as $|a-b| = |b-a|$,
$$|a+b| + |a-b| \geqslant |(a+b) + (b - a)| = 2|b|$$
Now add and conclude! | **Very Simple Trick: We have that**
\begin{split} (|a|-|b|)^2 +2|a^2-b^2| \ge 0 &\Longleftrightarrow a^2+b^2 -2|a||b| + 2|a+b||a-b| \ge 0\\
&\Longleftrightarrow a^2+b^2 + 2|a+b||a-b| \ge 2|a||b|\\
&\Longleftrightarrow \color{red}{2a^2+2b^2} + 2|a+b||a-b| \ge \color{red}{a^2+b^2}+2|a||b|\\
&\Longleftrightarrow (|a+b|+|a-b|)^2 \ge (|a|+|b|)^2\\
&\Longleftrightarrow |a+b|+|a-b| \ge |a|+|b| \end{split}
Given that $$\color{red}{ (|a +b|+|a-b|)^2 = 2a^2+2b^2 + 2|a +b||a-b|}$$ |
Question: I have two objects of equal length (one is a list produced by parsing JSON, and the other is a slice of a multi-dimensional array), e.g.:
```
library(rjson)
library(foreach)
iter1<-iter( fromJSON(file=jsonfilename)$someJSONarray )
iter2<-iter( myarr, by="row" )
```
I need to be able to do the following:
```
out=foreach(x=zipiter(iter1,iter2),.combine=list) %do%
{
#Do stuff with elements from both iterators accessed by e.g. x[[1]] and x[[2]]
}
```
Is there any standard way of doing it (like in Python/C++ with boost Zip iterator)?
Answer: | There is an `izip` function in the `itertools` package that does what you describe:
```
library(itertools)
out <- foreach(x=izip(iter1, iter2)) %do% {
# x[[1]] contains a value from iter1
# x[[2]] contains a value from iter2
}
```
But I prefer to specify multiple loop variables to `foreach`:
```
out <- foreach(x=iter1, y=iter2) %do% {
# x contains a value from iter1
# y contains a value from iter2
}
```
Both solutions iterate over values from the iterators in parallel. If you want the two arguments in a list, then `izip` is more convenient. | This might not be exactly what you need, but hopefully it might at least get you on the right track:
```
library(foreach)
X = 1:10
Y = 11:20
out = foreach(x=t(data.frame(X, Y))) %do% {x[1,]*x[2,]}
```
Yes, this is weird, but it properly pairs up the data. If you pass in a dataframe, `foreach` iterates over the columns instead of the rows (this isn't entirely surprising since `lapply` does the same thing). So transposing the dataframe results in iteration over the rows, but then the `x` object is a column vector, so we need to index the rows instead of the columns like we'd expect. |
Question: I have two objects of equal length (one is a list produced by parsing JSON, and the other is a slice of a multi-dimensional array), e.g.:
```
library(rjson)
library(foreach)
iter1<-iter( fromJSON(file=jsonfilename)$someJSONarray )
iter2<-iter( myarr, by="row" )
```
I need to be able to do the following:
```
out=foreach(x=zipiter(iter1,iter2),.combine=list) %do%
{
#Do stuff with elements from both iterators accessed by e.g. x[[1]] and x[[2]]
}
```
Is there any standard way of doing it (like in Python/C++ with boost Zip iterator)?
Answer: | This might not be exactly what you need, but hopefully it might at least get you on the right track:
```
library(foreach)
X = 1:10
Y = 11:20
out = foreach(x=t(data.frame(X, Y))) %do% {x[1,]*x[2,]}
```
Yes, this is weird, but it properly pairs up the data. If you pass in a dataframe, `foreach` iterates over the columns instead of the rows (this isn't entirely surprising since `lapply` does the same thing). So transposing the dataframe results in iteration over the rows, but then the `x` object is a column vector, so we need to index the rows instead of the columns like we'd expect. | After Steve Weston's answer there is nothing really to add; I'd post my own working version of the zip iterator, so the people learning how to live with the iterators in R would benefit. The iterator is named `izipiter` to avoid name clash with the existing `izip` iterator.
```
library(foreach)
izipiter<- function(iters)
{
nextEl<-function()
{
tryCatch(
{
foreach(it=iters,.combine=list) %do%
nextElem(it)
}, error=function(e) {stop("StopIteration")})
}
obj<-list(nextElem=nextEl)
class(obj) <- c('izipiter','abstractiter','iter')
obj
}
```
And this is how to use it:
```
it1=iter(c(3,5,15))
it2=iter(list(x="a",y="b",z=c("one","two","three")))
myit=izipiter(iters=list(it1,it2))
foreach(it=myit) %do%
print(it)
``` |
Question: I have two objects of equal length (one is a list produced by parsing JSON, and the other is a slice of a multi-dimensional array), e.g.:
```
library(rjson)
library(foreach)
iter1<-iter( fromJSON(file=jsonfilename)$someJSONarray )
iter2<-iter( myarr, by="row" )
```
I need to be able to do the following:
```
out=foreach(x=zipiter(iter1,iter2),.combine=list) %do%
{
#Do stuff with elements from both iterators accessed by e.g. x[[1]] and x[[2]]
}
```
Is there any standard way of doing it (like in Python/C++ with boost Zip iterator)?
Answer: | There is an `izip` function in the `itertools` package that does what you describe:
```
library(itertools)
out <- foreach(x=izip(iter1, iter2)) %do% {
# x[[1]] contains a value from iter1
# x[[2]] contains a value from iter2
}
```
But I prefer to specify multiple loop variables to `foreach`:
```
out <- foreach(x=iter1, y=iter2) %do% {
# x contains a value from iter1
# y contains a value from iter2
}
```
Both solutions iterate over values from the iterators in parallel. If you want the two arguments in a list, then `izip` is more convenient. | After Steve Weston's answer there is nothing really to add; I'd post my own working version of the zip iterator, so the people learning how to live with the iterators in R would benefit. The iterator is named `izipiter` to avoid name clash with the existing `izip` iterator.
```
library(foreach)
izipiter<- function(iters)
{
nextEl<-function()
{
tryCatch(
{
foreach(it=iters,.combine=list) %do%
nextElem(it)
}, error=function(e) {stop("StopIteration")})
}
obj<-list(nextElem=nextEl)
class(obj) <- c('izipiter','abstractiter','iter')
obj
}
```
And this is how to use it:
```
it1=iter(c(3,5,15))
it2=iter(list(x="a",y="b",z=c("one","two","three")))
myit=izipiter(iters=list(it1,it2))
foreach(it=myit) %do%
print(it)
``` |
Question: I understand that handling strings in hdf5 seems to be tricky - I am looking for a correct way to set attributes to a dataset where the attribute value is in the form of a tuple, (float/number/numpyarray, string).
Furthermore, I need the value to read back the same as it was input, as I then compare the dataset attributes to an ordered dictionary of desired attributes.
What is the correct way to handle this?
So far I set the attributes using
```
def setallattributes(dataset, dictattributes):
for key, value in dictattributes.items():
tup0 = value[0]
tup1 = value[1].encode('utf-8')
value = (tup0, tup1)
dataset.attrs[key] = value
```
and I am trying to check the attributes match the desired attributes using
```
for datasetname in list(group.keys()):
dataset = f[datasetname]
if dataset.size != 0:
saved_attributes = dataset.attrs.items() #Get (name, value) tuples for all attributes attached to this object. On Py3, it’s a collection or set-like object.
if dict(saved_attributes) == input_attributes: #check attributes match -- both dicts, one ordered one not
datasetnamelist.append(datasetname)
```
This currently results in trying to compare things like
```
{'Rmax': array([b'200.0', b'ld'], dtype='|S32'), 'fracinc': array([b'0.5', b'$\\pi$'], dtype='|S32')} == OrderedDict([('Rmin', (0, 'ld')), ('Rmax',(1, 'ld')), ('fracinc',(0.5, r'$\pi$'))])
```
which returns False.
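One way to make that comparison meaningful is to normalize both sides to `(float, str)` tuples before comparing them - a sketch, assuming the attributes always read back as pairs of byte strings as shown above (the helper name is illustrative, not part of h5py):

```python
def normalize_attr(value):
    """Turn a saved attribute pair back into a (float, str) tuple.

    `value` may be the original (number, string) tuple, or - as read back
    from the file - a pair of byte strings like (b'200.0', b'ld').
    """
    num, label = value
    if isinstance(num, bytes):
        num = num.decode('utf-8')
    if isinstance(label, bytes):
        label = label.decode('utf-8')
    return (float(num), label)

# The saved form and the desired form now compare equal:
print(normalize_attr([b'200.0', b'ld']) == normalize_attr((200.0, 'ld')))  # True
```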
Answer: | The best way to find out is to look at Keras's [code](https://github.com/keras-team/keras/blob/master/keras/datasets/mnist.py#L11):
```
def load_data(path='mnist.npz'):
path = get_file(path, origin='https://s3.amazonaws.com/img-datasets/mnist.npz', file_hash='8a61469f7ea1b51cbae51d4f78837e45')
with np.load(path, allow_pickle=True) as f:
x_train, y_train = f['x_train'], f['y_train']
x_test, y_test = f['x_test'], f['y_test']
return (x_train, y_train), (x_test, y_test)
```
You can see it basically downloads a file which contains the dataset, already separated into train and test data.
The only parameter (`path`) is basically where to store the downloaded dataset. | For Keras source-stuff, I recommend searching the Github repository - e.g., Google "keras mnist github". From the [source code](https://github.com/keras-team/keras/blob/master/keras/datasets/mnist.py), `mnist.load_data()` *unpacks* a dataset that was specifically *pickled* into a format that allows extracting the data as shown in the source code (also pre-sorted into train vs test, pre-shuffled, etc).
Keras then returns the unpacked data in the form you used above. |
Question: Is it possible to use a wildcard in a SQL LIKE statement within a ColdFusion cfscript query?
An example that doesn't work:
```
local.q = new Query();
local.q.setDatasource(variables.dsn);
local.q.addParam(name='lastname', value='%' & arguments.lastname, cfsqltype="cf_sql_varchar");
local.qString = 'SELECT name FROM users WHERE lastname LIKE :lastname';
local.q.setSQL(local.qString);
local.result = local.q.execute().getResult();
```
I also tried these, which didn't work:
```
local.qString = 'SELECT name FROM users WHERE lastname LIKE %:lastname';
local.qString = "SELECT name FROM users WHERE lastname LIKE '%:lastname'";
```
**UPDATE:**
I am using MS SQL Server 2008.
The query works fine within SQL Server Mgmt Studio... I think it has something to do with how to format the query within cfscript tags?
Answer: | Yes, it is possible. You're setting it in the param, which is correct. I'm not sure why it's not working for you.
I did the following and it worked.
```
var qryArgsCol = {};
qryArgsCol.datasource = variables.datasource;
qryArgsCol.SQL = "
SELECT ID
FROM Users
WHERE LastName LIKE :searchStringParam
";
var qryGetID = new query(argumentCollection=qryArgsCol);
qryGetID.addParam(name="searchStringParam", value="%" & searchString, cfsqltype="cf_sql_varchar");
qryGetIDResult = qryGetID.execute().getResult();
``` | Depending on the DBMS used, the single and double quotes may be interpreted when the SQL statement is run. What DBMS are you using? Your statement now doesn't select for the value in the variable, but for any user whose lastname is "lastname". It should be something like:
```
lastname like '%#lastname#'
``` |
Question: Is it possible to use a wildcard in a SQL LIKE statement within a ColdFusion cfscript query?
An example that doesn't work:
```
local.q = new Query();
local.q.setDatasource(variables.dsn);
local.q.addParam(name='lastname', value='%' & arguments.lastname, cfsqltype="cf_sql_varchar");
local.qString = 'SELECT name FROM users WHERE lastname LIKE :lastname';
local.q.setSQL(local.qString);
local.result = local.q.execute().getResult();
```
I also tried these, which didn't work:
```
local.qString = 'SELECT name FROM users WHERE lastname LIKE %:lastname';
local.qString = "SELECT name FROM users WHERE lastname LIKE '%:lastname'";
```
**UPDATE:**
I am using MS SQL Server 2008.
The query works fine within SQL Server Mgmt Studio... I think it has something to do with how to format the query within cfscript tags?
Answer: | Yes, it is possible. You're setting it in the param, which is correct. I'm not sure why it's not working for you.
I did the following and it worked.
```
var qryArgsCol = {};
qryArgsCol.datasource = variables.datasource;
qryArgsCol.SQL = "
SELECT ID
FROM Users
WHERE LastName LIKE :searchStringParam
";
var qryGetID = new query(argumentCollection=qryArgsCol);
qryGetID.addParam(name="searchStringParam", value="%" & searchString, cfsqltype="cf_sql_varchar");
qryGetIDResult = qryGetID.execute().getResult();
``` | There's a response here from Adam Cameron, which was apparently deleted by an overzealous mod.
Rather than repeat what he says, I've just copied and pasted (with emphasis added to the key parts):
---
Just to clarify that **the syntax you tried in your first example *does* work**. That is the correct approach here. To clarify / explain:
The `<cfquery>` version of the example you have would be along the lines of:
```
<cfqueryparam value="%foo">
```
So in the function version, the param would be `?` or `:paramName` and the value of the param would continue to be `"%foo"`.
The `%` is part of the param value, not the SQL string.
So given that "doesn't work" for you, **it would help if you posted the error, or whatever it is that causes you to think it's not working** (what your expectation is, and what the actual results are). Then we can deal with the actual cause of your problem, which is not what you think it is, I think.
Does the query work fine as a `<cfquery>`? |
Question: Is it possible to use a wildcard in a SQL LIKE statement within a ColdFusion cfscript query?
An example that doesn't work:
```
local.q = new Query();
local.q.setDatasource(variables.dsn);
local.q.addParam(name='lastname', value='%' & arguments.lastname, cfsqltype="cf_sql_varchar");
local.qString = 'SELECT name FROM users WHERE lastname LIKE :lastname';
local.q.setSQL(local.qString);
local.result = local.q.execute().getResult();
```
I also tried these, which didn't work:
```
local.qString = 'SELECT name FROM users WHERE lastname LIKE %:lastname';
local.qString = "SELECT name FROM users WHERE lastname LIKE '%:lastname'";
```
**UPDATE:**
I am using MS SQL Server 2008.
The query works fine within SQL Server Mgmt Studio... I think it has something to do with how to format the query within cfscript tags?
Answer: | Yes, it is possible. You're setting it in the param, which is correct. I'm not sure why it's not working for you.
I did the following and it worked.
```
var qryArgsCol = {};
qryArgsCol.datasource = variables.datasource;
qryArgsCol.SQL = "
SELECT ID
FROM Users
WHERE LastName LIKE :searchStringParam
";
var qryGetID = new query(argumentCollection=qryArgsCol);
qryGetID.addParam(name="searchStringParam", value="%" & searchString, cfsqltype="cf_sql_varchar");
qryGetIDResult = qryGetID.execute().getResult();
``` | I would suggest using the `CFQuery` tag instead of attempting to run queries within `CFScript`, unless you REALLY know what you are doing. I say this because the `CFQuery` tag has some built-in functionality that not only makes building queries easier for you but may also protect you from unforeseen attacks (the SQL injection type). For example, when using `CFQuery` it will automatically escape single-quotes for you so that inserting things like `'well isn't that a mess'` will not blow up on you. You also have the benefit of being able to use the `CFQueryParam` tag to further battle against SQL injection attacks. While you may be able to use the `CFQueryParam` functionality within `CFScript`, it is not as straightforward (at least not for me).
[See this blog post from Ben Nadel talking about some of this.](http://www.bennadel.com/blog/1680-Learning-ColdFusion-9-Using-CFQuery-In-CFScript-Can-Enable-SQL-Injection-Attacks.htm)
So in `CFQuery` tags your query would look something like this:
```
<cfquery name="myQuery" datasource="#variables.dsn#">
SELECT name
FROM users
WHERE lastname LIKE <cfqueryparam cfsqltype="cf_sql_varchar" value="%:#arguments.lastname#" maxlength="256" />
</cfquery>
``` |
Question: Is it possible to use a wildcard in a SQL LIKE statement within a ColdFusion cfscript query?
An example that doesn't work:
```
local.q = new Query();
local.q.setDatasource(variables.dsn);
local.q.addParam(name='lastname', value='%' & arguments.lastname, cfsqltype="cf_sql_varchar");
local.qString = 'SELECT name FROM users WHERE lastname LIKE :lastname';
local.q.setSQL(local.qString);
local.result = local.q.execute().getResult();
```
I also tried these, which didn't work:
```
local.qString = 'SELECT name FROM users WHERE lastname LIKE %:lastname';
local.qString = "SELECT name FROM users WHERE lastname LIKE '%:lastname'";
```
**UPDATE:**
I am using MS SQL Server 2008.
The query works fine within SQL Server Mgmt Studio... I think it has something to do with how to format the query within cfscript tags?
Answer: | Yes, it is possible. You're setting it in the param, which is correct. I'm not sure why it's not working for you.
I did the following and it worked.
```
var qryArgsCol = {};
qryArgsCol.datasource = variables.datasource;
qryArgsCol.SQL = "
SELECT ID
FROM Users
WHERE LastName LIKE :searchStringParam
";
var qryGetID = new query(argumentCollection=qryArgsCol);
qryGetID.addParam(name="searchStringParam", value="%" & searchString, cfsqltype="cf_sql_varchar");
qryGetIDResult = qryGetID.execute().getResult();
``` | Just remember that you ultimately need to see what CF gives the DB server. In this instance, you can try this mockup to get close and find the same error in SSMS by messing with the quotes/value in the param declaration:
```
declare @param1 varchar(max) = '%Eisenlohr';
SELECT name FROM users WHERE lastname LIKE @param1
``` |
Question: Is it possible to use a wildcard in a SQL LIKE statement within a ColdFusion cfscript query?
An example that doesn't work:
```
local.q = new Query();
local.q.setDatasource(variables.dsn);
local.q.addParam(name='lastname', value='%' & arguments.lastname, cfsqltype="cf_sql_varchar");
local.qString = 'SELECT name FROM users WHERE lastname LIKE :lastname';
local.q.setSQL(local.qString);
local.result = local.q.execute().getResult();
```
I also tried these, which didn't work:
```
local.qString = 'SELECT name FROM users WHERE lastname LIKE %:lastname';
local.qString = "SELECT name FROM users WHERE lastname LIKE '%:lastname'";
```
**UPDATE:**
I am using MS SQL Server 2008.
The query works fine within SQL Server Mgmt Studio... I think it has something to do with how to format the query within cfscript tags?
Answer: | Yes, it is possible. You're setting it in the param, which is correct. I'm not sure why it's not working for you.
I did the following and it worked.
```
var qryArgsCol = {};
qryArgsCol.datasource = variables.datasource;
qryArgsCol.SQL = "
SELECT ID
FROM Users
WHERE LastName LIKE :searchStringParam
";
var qryGetID = new query(argumentCollection=qryArgsCol);
qryGetID.addParam(name="searchStringParam", value="%" & searchString, cfsqltype="cf_sql_varchar");
qryGetIDResult = qryGetID.execute().getResult();
``` | I just ran into the same problem as the original poster where it "wasn't working" and I didn't get any results from the query of queries.
The problem for me is that the wildcard search is case-sensitive.
```
local.q = new Query();
local.q.setDatasource(variables.dsn);
local.q.addParam(name='lastname', value='%' & LCase(arguments.lastname), cfsqltype="cf_sql_varchar");
local.qString = 'SELECT name FROM users WHERE LOWER(lastname) LIKE :lastname';
local.q.setSQL(local.qString);
local.result = local.q.execute().getResult();
```
So what I did was made sure the incoming argument was lower case and made sure the comparing field in the SQL was lower case as well and it worked. |
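The lower-casing trick generalizes beyond SQL: normalize both sides before comparing. A minimal Python illustration (names are invented):

```python
names = ["Eisenlohr", "eisenlohr", "EISENLOHR", "Smith"]
search = "eisen"

# Lower-case both the needle and each candidate so case never matters.
matches = [n for n in names if search.lower() in n.lower()]
print(matches)  # ['Eisenlohr', 'eisenlohr', 'EISENLOHR']
```

This mirrors wrapping both the column and the parameter in `LOWER()` on the SQL side.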
Question: Is it possible to use a wildcard in a SQL LIKE statement within a ColdFusion cfscript query?
An example that doesn't work:
```
local.q = new Query();
local.q.setDatasource(variables.dsn);
local.q.addParam(name='lastname', value='%' & arguments.lastname, cfsqltype="cf_sql_varchar");
local.qString = 'SELECT name FROM users WHERE lastname LIKE :lastname';
local.q.setSQL(local.qString);
local.result = local.q.execute().getResult();
```
I also tried these, which didn't work:
```
local.qString = 'SELECT name FROM users WHERE lastname LIKE %:lastname';
local.qString = "SELECT name FROM users WHERE lastname LIKE '%:lastname'";
```
**UPDATE:**
I am using MS SQL Server 2008.
The query works fine within SQL Server Mgmt Studio... I think it has something to do with how to format the query within cfscript tags?
Answer: | Yes, it is possible. You're setting it in the param, which is correct. I'm not sure why it's not working for you.
I did the following and it worked.
```
var qryArgsCol = {};
qryArgsCol.datasource = variables.datasource;
qryArgsCol.SQL = "
SELECT ID
FROM Users
WHERE LastName LIKE :searchStringParam
";
var qryGetID = new query(argumentCollection=qryArgsCol);
qryGetID.addParam(name="searchStringParam", value="%" & searchString, cfsqltype="cf_sql_varchar");
qryGetIDResult = qryGetID.execute().getResult();
``` | Use like this.
```
local.q = new Query();
local.q.setDatasource(variables.dsn);
local.q.addParam(name="lastname", cfsqltype="cf_sql_varchar",value='%ARGUMENTS.lastname' );
local.qString = 'SELECT name FROM users WHERE lastname LIKE :lastname';
local.q.setSQL(local.qString);
local.result = local.q.execute().getResult();
``` |
Question: Since I don't have the answer to this one, I want to make sure I've done this correctly.
>
> Show that
> $$\arctan x = \arcsin\left(\frac{x}{\sqrt{1 + x^2}}\right)$$
>
>
>
Since
$$\arctan x = \arcsin\left(\frac{x}{\sqrt{1 + x^2}}\right)$$
we have
$$\tan\left(\arcsin\left(\frac{x}{\sqrt{1 + x^2}}\right)\right) = x$$
Applying the Pythagorean theorem, we learn that $b = 1$, so we have $\tan(x/1) = x$.
It may be a bit short, but I really want to be sure I have this right before I continue on. :)
Answer: | The connection is way more *revealing* in its simplicity if you just use trigonometry. For $x$ positive take a right-angled triangle with sides $\overline{AB} = 1$ and $\overline{BC} = x$. Then by definition
$$\angle CAB = \arctan x.$$
The hypotenuse measures $\overline{AC} = \sqrt{1+x^2}$, by Pythagorean Theorem. So the same angle can be also defined as
$$\angle CAB = \arcsin \left(\frac{x}{\sqrt{1+x^2}}\right).$$
For negative $x$ take $\overline{BC} = -x$ and recall the odd symmetry of both sine and tangent. As easy as that. | $\tan \arcsin \frac{x}{\sqrt{x^2+1}} = \frac{\sin}{\cos} \arcsin \frac{x}{\sqrt{1+x^2}} = \frac{\frac{x}{\sqrt{x^2+1}}}{\cos \arcsin \frac{x}{\sqrt{x^2+1}}}$ and we know that $\cos^2 x+\sin^2 x=1 $ so $\cos x= \sqrt{1-\sin^2 x}$
so $ \frac{\frac{x}{\sqrt{x^2+1}}}{\cos \arcsin \frac{x}{\sqrt{x^2+1}}} = \frac{\frac{x}{\sqrt{x^2+1}}}{\sqrt{1-\sin^2 \arcsin \frac{x}{\sqrt{1+x^2}}}}=\frac{\frac{x}{\sqrt{x^2+1}}}{\sqrt{1-\frac{x^2}{1+x^2}}} = \frac{x}{\sqrt{1+x^2} \sqrt{\frac{1}{x^2+1}}} = x$ |
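A quick numerical sanity check of the identity, for both signs of $x$ (sample values chosen arbitrarily):

```python
import math

for x in (-10.0, -1.5, -0.3, 0.0, 0.7, 2.0, 25.0):
    lhs = math.atan(x)
    rhs = math.asin(x / math.sqrt(1 + x * x))
    # Both sides agree to floating-point precision.
    assert math.isclose(lhs, rhs, rel_tol=1e-12, abs_tol=1e-12), (x, lhs, rhs)

print("identity holds on all samples")
```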
Question: Since I don't have the answer to this one, I want to make sure I've done this correctly.
>
> Show that
> $$\arctan x = \arcsin\left(\frac{x}{\sqrt{1 + x^2}}\right)$$
>
>
>
Since
$$\arctan x = \arcsin\left(\frac{x}{\sqrt{1 + x^2}}\right)$$
we have
$$\tan\left(\arcsin\left(\frac{x}{\sqrt{1 + x^2}}\right)\right) = x$$
Applying the Pythagorean theorem, we learn that $b = 1$, so we have $\tan(x/1) = x$.
It may be a bit short, but I really want to be sure I have this right before I continue on. :)
Answer: | Let's consider the definitions of arcsine and arctangent.
Let $\arctan x = \theta$. Then $\theta$ is the unique angle in the interval $\left(-\frac{\pi}{2}, \frac{\pi}{2}\right)$ such that $\tan\theta = x$.
Let
$$\arcsin\left(\frac{x}{\sqrt{1 + x^2}}\right) = \varphi$$
Then $\varphi$ is the unique angle in the interval $\left[-\frac{\pi}{2}, \frac{\pi}{2}\right]$ such that
$$\sin\varphi = \frac{x}{\sqrt{1 + x^2}}$$
We need to show that $\theta = \varphi$.
Observe that
$$\left(\frac{x}{\sqrt{1 + x^2}}\right)^2 = \frac{x^2}{1 + x^2} < 1$$
for every real number $x$ since
$$\frac{x^2}{1 + x^2} < 1 \iff x^2 < 1 + x^2 \iff 0 < 1$$
Hence,
$$-1 < \frac{x}{\sqrt{1 + x^2}} < 1 \implies -\frac{\pi}{2} < \varphi = \arcsin\left(\frac{x}{\sqrt{1 + x^2}}\right) < \frac{\pi}{2}$$
By the Pythagorean identity $\sin^2\varphi + \cos^2\varphi = 1$,
\begin{align\*}
\cos^2\varphi & = 1 - \sin^2\varphi\\
& = 1 - \left(\frac{x}{\sqrt{1 + x^2}}\right)^2\\
& = 1 - \frac{x^2}{1 + x^2}\\
& = \frac{1 + x^2 - x^2}{1 + x^2}\\
& = \frac{1}{1 + x^2}
\end{align\*}
Since $-\frac{\pi}{2} < \varphi < \frac{\pi}{2}$, $\cos\varphi > 0$, so we take the positive square root. Thus,
$$\cos\varphi = \frac{1}{\sqrt{1 + x^2}}$$
Hence,
\begin{align\*}
\tan\varphi & = \frac{\sin\varphi}{\cos\varphi}\\
& = \frac{\frac{x}{\sqrt{1 + x^2}}}{\frac{1}{\sqrt{1 + x^2}}}\\
& = x\\
& = \tan\theta
\end{align\*}
Since $\theta$ is the unique angle in the interval $\left(-\frac{\pi}{2}, \frac{\pi}{2}\right)$ such that $\tan\theta = x$, $\tan\varphi = x$, and $-\frac{\pi}{2} < \varphi < \frac{\pi}{2}$, we may conclude that $\theta = \varphi$. | $\tan \arcsin \frac{x}{\sqrt{x^2+1}} = \frac{\sin}{\cos} \arcsin \frac{x}{\sqrt{1+x^2}} = \frac{\frac{x}{\sqrt{x^2+1}}}{\cos \arcsin \frac{x}{\sqrt{x^2+1}}}$ and we know that $\cos^2 x+\sin^2 x=1 $ so $\cos x= \sqrt{1-\sin^2 x}$
so $ \frac{\frac{x}{\sqrt{x^2+1}}}{\cos \arcsin \frac{x}{\sqrt{x^2+1}}} = \frac{\frac{x}{\sqrt{x^2+1}}}{\sqrt{1-\sin^2 \arcsin \frac{x}{\sqrt{1+x^2}}}}=\frac{\frac{x}{\sqrt{x^2+1}}}{\sqrt{1-\frac{x^2}{1+x^2}}} = \frac{x}{\sqrt{1+x^2} \sqrt{\frac{1}{x^2+1}}} = x$ |
Question: For example suppose that I would like to show the message "yes" if the current character is "a" or "d" or "x", ....
Of course I can write as below. But it is very long.
I am wondering if there is a simpler way to do it.
In this example the variable is `char-after` and the set is `{a,d,x}`. The problem can arise in other situations.
```
(defun foo () (interactive)
(if (or (eq (char-after) ?a ) (eq (char-after) ?d ) (eq (char-after) ?x ))
(message "yes") (message "no")))
```
Answer: | Here you go:
```
(defun foo ()
(interactive)
(if (memq (char-after) '(?a ?d ?x))
(message "yes")
(message "no")))
```
`memq` is possible because characters are numbers that can be compared with `eq`.
If you want strings or lists, use `member`, which uses `equal` for comparison. | I like `pcase` a lot, though it **may** be slower in some cases.
```
(defun foo ()
(interactive)
(message
(pcase (char-after)
((or ?a ?d ?x) "yes")
(t "no"))))
``` |
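The `memq`/`member` split above — membership by identity vs. by structural equality — has a rough Python analogue: `in` compares with `==`, while an identity test must be spelled out with `is` (an analogy only, not Emacs Lisp semantics):

```python
chars = ['a', 'd', 'x']

def classify(ch):
    # Equality-based membership, akin to `member`
    return "yes" if ch in chars else "no"

print(classify('a'))  # yes
print(classify('b'))  # no

# Identity-based membership, akin to `memq`, needs an explicit check:
needle = [1, 2]
haystack = [[1, 2], needle]
assert needle in haystack                        # == matches the first, equal list
assert any(item is needle for item in haystack)  # `is` finds the same object
```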
Question: For example suppose that I would like to show the message "yes" if the current character is "a" or "d" or "x", ....
Of course I can write as below. But it is very long.
I am wondering if there is a simpler way to do it.
In this example the variable is `char-after` and the set is `{a,d,x}`. The problem can arise in other situations.
```
(defun foo () (interactive)
(if (or (eq (char-after) ?a ) (eq (char-after) ?d ) (eq (char-after) ?x ))
(message "yes") (message "no")))
```
Answer: | Here you go:
```
(defun foo ()
(interactive)
(if (memq (char-after) '(?a ?d ?x))
(message "yes")
(message "no")))
```
`memq` is possible because characters are numbers that can be compared with `eq`.
If you want strings or lists, use `member`, which uses `equal` for comparison. | For the sake of alternatives: since you are looking to test against a number of different characters, a regular expression would do the job more succinctly and perhaps even faster (it really depends on how many characters there are to test, etc.)
---
Another way is to use `char-table` - a built-in Emacs data-structure for working with characters. Below is an example followed by explanation:
```
(defun char-handler () (message "I am char-handler"))
(defun special-handler () (message "I am special-handler"))
(defvar test-char-table (make-char-table 'testing 'char-handler))
(set-char-table-range test-char-table ?a 'special-handler)
(funcall (char-table-range test-char-table '?a))
"I am special-handler"
(funcall (char-table-range test-char-table '?b))
"I am char-handler"
```
`make-char-table` creates a sparse array-like structure where keys are characters and the values are whatever you want, but usually symbols pointing to some function. This data structure is specifically designed for text processing, so it should be reasonably fast, but, most importantly, it scales better with more handlers. It's dynamic, which means that handlers can be added or removed without changing the rest of the code that relies on the table.
When you call `set-char-table-range` you can also provide things other than a single character: a range of characters given by a cons cell with `car` being the first character of the range and `cdr` being the last character. This helps when you have many characters which have to invoke the same handler. It also accepts `nil`, meaning "all characters in the table".
For more info, such as functions which inspect the table and iterate over its contents see: <https://www.gnu.org/software/emacs/manual/html_node/elisp/Char_002dTables.html#Char_002dTables> |
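The char-table idea — a sparse mapping from characters to handlers, with a default for everything else — can be mimicked in Python with a plain dict (handler names here mirror the made-up ones above):

```python
def char_handler(ch):
    return "I am char-handler"

def special_handler(ch):
    return "I am special-handler"

# Sparse table: only 'a' gets a special entry; everything else falls back.
table = {'a': special_handler}

def dispatch(ch):
    # dict.get with a default plays the role of the char-table's default value.
    return table.get(ch, char_handler)(ch)

print(dispatch('a'))  # I am special-handler
print(dispatch('b'))  # I am char-handler
```

Adding or removing a handler is a single dict update, without touching `dispatch` itself.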
Question: Can you help me extract the date and the text after the reason code in the text below? I can use `patindex` for the date, but the problem is that my date can be dd/mm/yyyy or d/m/yyyy
Project rescheduled to 03/02/2017 with reason code: customer related-customer will not be available
Project rescheduled to 2/3/2017 with reason code: weather inclement
I do not have permissions to create functions, it has to be a SQL query.
Thanks,
Anu
Answer: | You probably should add validation for input text to contain both 'with' and 'to' or filter 'invalid' entries out. But this is basically what you are asking for:
```
declare @s varchar(255)
set @s = 'Project rescheduled to 2/3/2017 with reason code : Weather Inclement'
select RIGHT(LEFT(@s, CHARINDEX(' with', @s)-1), CHARINDEX(' with', @s) -CHARINDEX('to', @s) - 3)
``` | In SQL you cannot extract data that exists in the same column as an info. So if the info exists in the same column it cannot be extracted. Hence the info has to be in different tables in order to be extracted. |
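Outside SQL, the same split — a `d/m/yyyy` or `dd/mm/yyyy` date plus the text after the colon — is straightforward with a regular expression; a Python sketch over the two sample sentences (purely illustrative, not part of the SQL answer):

```python
import re
from datetime import datetime

# d/m/yyyy or dd/mm/yyyy, then everything after "reason code:".
pattern = re.compile(
    r"to\s+(\d{1,2}/\d{1,2}/\d{4})\s+with reason code\s*:\s*(.+)",
    re.IGNORECASE,
)

lines = [
    "Project rescheduled to 03/02/2017 with reason code: customer related-customer will not be available",
    "Project rescheduled to 2/3/2017 with reason code: weather inclement",
]

results = []
for line in lines:
    m = pattern.search(line)
    # %d/%m/%Y accepts both zero-padded and unpadded day/month values.
    results.append((datetime.strptime(m.group(1), "%d/%m/%Y").date(), m.group(2)))

print(results)
```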
Question: Can you help me extract the date and the text after the reason code in the text below? I can use `patindex` for the date, but the problem is that my date can be dd/mm/yyyy or d/m/yyyy
Project rescheduled to 03/02/2017 with reason code: customer related-customer will not be available
Project rescheduled to 2/3/2017 with reason code: weather inclement
I do not have permissions to create functions, it has to be a SQL query.
Thanks,
Anu
Answer: | ```
Declare @YourTable table (ID int,SomeCol varchar(500))
Insert Into @YourTable values
(1,'Project rescheduled to 03/02/2017 with reason code : Customer Related-Customer Will Not Be Available'),
(2,'Project rescheduled to 2/3/2017 with reason code : Weather Inclement')
Select A.ID
,Date = try_convert(date,right(B.dP,len(B.dp)-charindex(' ',B.dP)),103)
,Reason = right(SomeCol,len(SomeCol)-charindex(':',SomeCol)-1)
From @YourTable A
Cross Apply (
Select dP=substring(A.SomeCol,patindex('%/[0-9][0-9][0-9][0-9]%',A.SomeCol)-6,11)
) B
```
Returns
```
ID Date Reason
1 2017-02-03 Customer Related-Customer Will Not Be Available
2 2017-03-02 Weather Inclement
``` | In SQL you cannot extract data that exists in the same column as an info. So if the info exists in the same column it cannot be extracted. Hence the info has to be in different tables in order to be extracted. |
Question: On my view, I used to have a few buttons and each button had an action associated with it.
```
UIButton *testButton = [[UIButton alloc] initWithFrame:CGRectMake(120,300,90,90)];
[testButton setBackgroundImage:[UIImage imageNamed:@"test.jpg"] forState:UIControlStateNormal];
[testButton addTarget:self.view action:@selector(gotoProd:) forControlEvents:UIControlEventTouchUpInside];
[testButton addt
[scrollView testButton];
```
But now I am trying to replace all those buttons with a tableview with rows. I was able to populate the rows and I know one needs to use didSelectRowAtIndexPath for handling the select event of the cell. But how can I implement action:@selector(gotoProd:) with tableviews? Any help will be greatly appreciated.
Answer: | The most straight-forward way would look like this:
```
- (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
switch (indexPath.row) {
case 0:
[self doRow0Action];
break;
case 1:
[self doRow1Action];
break;
// etc...
default:
break;
}
}
```
If you wanted to instead, you could initialize an array with SEL types:
```
[actionArray addObject:@selector(doRowNAction)];
```
then access it like this:
```
[self performSelector:[actionArray objectAtIndex:indexPath.row] withObject:nil];
``` | Call
```
[self gotoProd:indexPath.row];
```
from `didSelectRowAtIndexPath`
and
```
- (void)gotoProd:(int)rowSelected {
//Check row index here and do it accordingly
}
``` |
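The row-index dispatch in the answer above — a switch statement, or an array of selectors indexed by row — maps naturally onto a list of callables; a Python sketch (action names are invented):

```python
def goto_install():
    return "install"

def goto_estimate():
    return "estimate"

# One callable per table row, in row order (the SEL-array idea).
row_actions = [goto_install, goto_estimate]

def did_select_row(row):
    if 0 <= row < len(row_actions):
        return row_actions[row]()
    return None  # out-of-range rows do nothing

print(did_select_row(0))  # install
print(did_select_row(1))  # estimate
```

Adding a row then means appending one function to the list instead of growing a switch.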
Question: How would you disable the link when clicked in this code I made for an image fade-in using hoverIntent? I'm using a named anchor now but it jumps, so I'd like to disable the click.
```
<A class="next2 nextbuttonB" href="#top">INSTALL</A>
<A class="next2 nextbuttonA" href="#top">ESTIMATE</A>
```
and the jquery
```
$('#A,#B,').addClass('nextHide');
$('.nextbuttonA').hoverIntent(function() {
$('#A').fadeIn("slow");$('#B').fadeOut();
}, function() {
$('#B').hide();
});
$('.nextbuttonB').hoverIntent(function() {
$('#B').fadeIn("slow");$('#A').fadeOut();
}, function() {
$('#A').hide();
});
$('.nextbutton').hoverIntent(function() {
$('#A,#B').fadeOut();
}, function() {
$('#A,#B').hide();
});
$('#A,#B').mouseleave(function(){
$('#A,#B').fadeOut();
});
```
Answer: | (This is an old question, but I'm replying anyway)
There's no real way to do this in Obj-C. Obj-C is dynamic enough that any of these methods can be trapped and intercepted. Do not ship anything in an application that absolutely needs to be secret. If your application is run on a jailbroken phone, or if it is made available on piracy sites, then it has already been exposed and its memory contents dumped. All the above methods copy the decoded data to main memory, where it is exposed.
See:
<https://www.youtube.com/watch?v=Ii-02vhsdVk>
None of these methods above is actually secure. Again, do not embed these sorts of things in your applications with an assurance they are actually secure. | What I have done in the past to obfuscate a string was something to this extent:
```
-(NSString*)myString {
NSString *string = nil;
string = [@"ozzzzzzzzzzzzhazzzzzzzizzzzzz" stringByReplacingOccurrencesOfString:@"z" withString:@""];
return string;
}
```
What it would do is remove all the occurrences of the letter `z`, leaving you with `ohai` as a string. Not sure if this will suffice for your case, but it has worked for me.
Hope this helps! |
Question: I have been looking at examples of the Twitter API and none of the ones I've been able to find use the user's username and password.
Basically, I want to be able to tell the code the account that I want to log into in order to post a tweet.
So basically I want the user to be able to post into different accounts by changing the username and password.
Does anyone know if the API settings (oauth\_access\_token, etc.) can carry a username and password so it tells the API what account to post to?
Otherwise, does anyone have any idea on where to start?
Answer: | >
> I have been looking at examples of the Twitter API and none of the ones I've been able to find use the user's username and password.
>
>
>
Collecting a user's password is a violation of the Twitter TOS.
You should be using [their OAuth authentication](https://dev.twitter.com/oauth) to get an access token, which can be used essentially like a username and password to access that user's account. | In addition to OAuth discussed in @ceejayoz's reply, the Twitter API also provides support for **xAuth.** xAuth provides a way for desktop and mobile applications to exchange a username and password for an OAuth access token. Once the access token is retrieved, xAuth-enabled developers should dispose of the login and password corresponding to the user.
xAuth access is restricted to approved applications. If your application is a desktop or mobile application that has no other recourse but to utilize xAuth, send a detailed request to <https://support.twitter.com/forms/platform>. Include the name of your application, the consumer key, the application ID (if available), and a summary of how xAuth is best-suited for your application.
Find [more information about xAuth](https://dev.twitter.com/oauth/xauth) in the Twitter API docs. |
Question: So I'm trying to add a new task, Azure File Copy, to my release pipeline. The file copy is pulling a single file from a new Azure Repository I created in Azure DevOps recently and putting it into a specific blob container. However, I seem to be running into an error
`[error]AADSTS7000222: The provided client secret keys are expired. Visit the Azure Portal to create new keys for your app, or consider using certificate credentials for added security: https://learn.microsoft.com/en-us/azure/active-directory/develop/active-directory-certificate-credentials`
I tried looking for possible solutions for this, but considering this is a new repository, I'm not sure what I need to do. With my current existing app, I do have access to the Microsoft Azure portal. The link given in the error talks about updating the certificate, but the app never had one to begin with.
Answer: | **Edit:**
I mean it's probably possible, but I wouldn't recommend doing it that way. You could theoretically run a database query that gets all products that have the category\_id of the category you just deleted, then update them to null, as long as your column accounts for null values. You would simply call this (or some variation of this) whenever you're deleting a category. This may have major performance impacts based on the size of the table you're doing this on.
i.e. (using the model):
```
Products::where('category_id', '=', $id_i_deleted)->update(['category_id' => null]);
```
or (using DB):
```
DB::table('products')->where('category_id', '=', $id_i_deleted)->update(['category_id' => null]);
```
But I would highly recommend just biting the bullet and altering your table structure. It'll save you infinite headaches in the long run.
**Original:**
Yes, it is definitely possible! Here is how you should set up your foreign key if you want it to allow for a null value:
```
Schema::table('products', function(Blueprint $table) {
$table->integer('category_id')->unsigned()->nullable();
$table->foreign('category_id')->references('id')->on('categories')->onDelete('set null');
});
``` | Shorter way using a helper:
```php
Schema::table('products', function(Blueprint $table) {
$table->foreignId('category_id')->nullable()->constrained()->on('categories')->nullOnDelete();
});
``` |
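The `onDelete('set null')` / `nullOnDelete()` behaviour in the migrations above is plain SQL under the hood; a runnable sketch against SQLite, where foreign-key enforcement must be switched on explicitly (schema is a minimal stand-in for the real tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # off by default in SQLite
conn.execute("CREATE TABLE categories (id INTEGER PRIMARY KEY)")
conn.execute("""
    CREATE TABLE products (
        id INTEGER PRIMARY KEY,
        category_id INTEGER,
        FOREIGN KEY (category_id) REFERENCES categories(id) ON DELETE SET NULL
    )
""")
conn.execute("INSERT INTO categories (id) VALUES (1)")
conn.execute("INSERT INTO products (id, category_id) VALUES (10, 1)")

# Deleting the category nulls out the reference instead of failing or cascading.
conn.execute("DELETE FROM categories WHERE id = 1")

orphan = conn.execute("SELECT category_id FROM products WHERE id = 10").fetchone()
print(orphan)  # (None,)
```

The database does the bookkeeping, so application code never needs the manual "update to null" query.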
Question: If I have a complex fraction $\dfrac{a+bi}{c+di}$ and I want the magnitude, then will it be $\left|\dfrac{a+bi}{c+di}\right|=\dfrac{|a+bi|}{|c+di|}$?
Scratch that ... I just found the answer on another page; however, I'm still unclear *why* it's true?
Answer: | A simpler approach:
Let $z\_1=a+bi$ and $z\_2=c+di$. Since by properties of absolute value we have $|z\_1z\_2|=|z\_1||z\_2|,$ and the fact that $z\_2(\frac{z\_1}{z\_2})=z\_1$ then we have that $$\left|z\_2\frac{z\_1}{z\_2}\right|=|z\_1|\implies|z\_2|\bigg|\frac{z\_1}{z\_2}\bigg|=|z\_1|\implies \bigg|\frac{z\_1}{z\_2}\bigg|=\frac{|z\_1|}{|z\_2|}$$ | $\frac{a+bi}{c+di} = \frac{a+bi}{c+di} \* \frac{c-di}{c-di} = i (\frac{b c}{c^2+d^2}-\frac{a d}{c^2+d^2})+\frac{a c}{c^2+d^2}+\frac{b d}{c^2+d^2}$. At this point, you should be able to get the magnitude easily. Yes, it'll be cumbersome computation wise, but that should be it.
Suppose $e = \frac{b c}{c^2+d^2}-\frac{a d}{c^2+d^2}$ and $f = \frac{a c}{c^2+d^2}+\frac{b d}{c^2+d^2}$
Then, $\|f + ei\| = \sqrt{f^2+e^2} = \sqrt{\frac{(bc-ad)^2}{(c^2+d^2)^2} + \frac{(ac+bd)^2}{(c^2+d^2)^2}} = \sqrt{\frac{(a^2+b^2)(c^2+d^2)}{(c^2+d^2)^2}} = \sqrt{\frac{a^2+b^2}{c^2+d^2}}$ and you could take it from there. |
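Python's built-in `complex` type makes the rule $|z_1/z_2| = |z_1|/|z_2|$ easy to spot-check numerically (sample values are arbitrary):

```python
# Spot-check |z1/z2| == |z1|/|z2| on a few arbitrary complex numbers.
pairs = [(3 + 4j, 1 - 2j), (-2 + 1j, 5 + 0.5j), (0.1 - 7j, -3 - 3j)]

for z1, z2 in pairs:
    assert abs(abs(z1 / z2) - abs(z1) / abs(z2)) < 1e-12

lhs = abs((3 + 4j) / (1 - 2j))
rhs = abs(3 + 4j) / abs(1 - 2j)
print(lhs, rhs)  # both equal sqrt(5): |3+4i| = 5, |1-2i| = sqrt(5)
```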
Question: If I have a complex fraction $\dfrac{a+bi}{c+di}$ and I want the magnitude, then will it be $\left|\dfrac{a+bi}{c+di}\right|=\dfrac{|a+bi|}{|c+di|}$?
Scratch that ... I just found the answer on another page; however, I'm still unclear *why* it's true?
Answer: | You can make use of complex exponents.
$$\dfrac{a+\mathrm{i} \ b}{c+\mathrm{i} \ d}=\frac{\rho\_1e^{\mathrm{i} \varphi\_1}}{\rho\_2e^{\mathrm{i} \varphi\_2}}=\frac{\rho\_1}{\rho\_2}e^{\mathrm{i}(\varphi\_1-\varphi\_2)}$$
where $\rho\_1=\sqrt{a^2+b^2}, \rho\_2=\sqrt{c^2+d^2}$ are the magnitudes and $\varphi\_1=\arg\{a+\mathrm{i} \ b\},\varphi\_2=\arg\{c+\mathrm{i} \ d\}$ are phases of $a+\mathrm{i} \ b$ and $c+\mathrm{i} \ d$ respectively.
Then since $\rho\_1, \rho\_2$ are real (and positive) and the absolute value of complex exponent is $1$:
$$\left| \dfrac{a+\mathrm{i} \ b}{c+\mathrm{i} \ d}\right|=\left|\frac{\rho\_1}{\rho\_2}e^{\mathrm{i}(\varphi\_1-\varphi\_2)} \right|=\left|\frac{\rho\_1}{\rho\_2}\right|\left|e^{\mathrm{i}(\varphi\_1-\varphi\_2)} \right|=\left|\frac{\rho\_1}{\rho\_2}\right|=\frac{\left|\rho\_1\right|}{\left|\rho\_2\right|}=\frac{\left|a+\mathrm{i} \ b\right|}{\left|c+\mathrm{i} \ d\right|}.$$
Moreover, using complex exponents it is easy to show that $$\arg\left\{\dfrac{a+\mathrm{i} \ b}{c+\mathrm{i} \ d}\right\}=\arg\left\{a+\mathrm{i} \ b\right\}-\arg\left\{c+\mathrm{i} \ d\right\}.$$
That is true, since $\arg\left\{\dfrac{a+\mathrm{i} \ b}{c+\mathrm{i} \ d}\right\}=\arg\left\{\frac{\rho\_1}{\rho\_2}e^{\mathrm{i}(\varphi\_1-\varphi\_2)}\right\}=\varphi\_1-\varphi\_2$. | $\frac{a+bi}{c+di} = \frac{a+bi}{c+di} \* \frac{c-di}{c-di} = i (\frac{b c}{c^2+d^2}-\frac{a d}{c^2+d^2})+\frac{a c}{c^2+d^2}+\frac{b d}{c^2+d^2}$. At this point, you should be able to get the magnitude easily. Yes, it'll be cumbersome computation wise, but that should be it.
Suppose $e = \frac{b c}{c^2+d^2}-\frac{a d}{c^2+d^2}$ and $f = \frac{a c}{c^2+d^2}+\frac{b d}{c^2+d^2}$
Then, $\|f + ei\| = \sqrt{f^2+e^2} = \sqrt{\frac{(bc-ad)^2}{(c^2+d^2)^2} + \frac{(ac+bd)^2}{(c^2+d^2)^2}} = \sqrt{\frac{(a^2+b^2)(c^2+d^2)}{(c^2+d^2)^2}} = \sqrt{\frac{a^2+b^2}{c^2+d^2}}$ and you could take it from there. |
Question: If I have a complex fraction $\dfrac{a+bi}{c+di}$ and I want the magnitude, then will it be $\left|\dfrac{a+bi}{c+di}\right|=\dfrac{|a+bi|}{|c+di|}$?
Scratch that ... I just found the answer on another page; however, I'm still unclear *why* it's true?
Answer: | You can make use of complex exponents.
$$\dfrac{a+\mathrm{i} \ b}{c+\mathrm{i} \ d}=\frac{\rho\_1e^{\mathrm{i} \varphi\_1}}{\rho\_2e^{\mathrm{i} \varphi\_2}}=\frac{\rho\_1}{\rho\_2}e^{\mathrm{i}(\varphi\_1-\varphi\_2)}$$
where $\rho\_1=\sqrt{a^2+b^2}, \rho\_2=\sqrt{c^2+d^2}$ are the magnitudes and $\varphi\_1=\arg\{a+\mathrm{i} \ b\},\varphi\_2=\arg\{c+\mathrm{i} \ d\}$ are phases of $a+\mathrm{i} \ b$ and $c+\mathrm{i} \ d$ respectively.
Then since $\rho\_1, \rho\_2$ are real (and positive) and the absolute value of complex exponent is $1$:
$$\left| \dfrac{a+\mathrm{i} \ b}{c+\mathrm{i} \ d}\right|=\left|\frac{\rho\_1}{\rho\_2}e^{\mathrm{i}(\varphi\_1-\varphi\_2)} \right|=\left|\frac{\rho\_1}{\rho\_2}\right|\left|e^{\mathrm{i}(\varphi\_1-\varphi\_2)} \right|=\left|\frac{\rho\_1}{\rho\_2}\right|=\frac{\left|\rho\_1\right|}{\left|\rho\_2\right|}=\frac{\left|a+\mathrm{i} \ b\right|}{\left|c+\mathrm{i} \ d\right|}.$$
Moreover, using complex exponents it is easy to show that $$\arg\left\{\dfrac{a+\mathrm{i} \ b}{c+\mathrm{i} \ d}\right\}=\arg\left\{a+\mathrm{i} \ b\right\}-\arg\left\{c+\mathrm{i} \ d\right\}.$$
That is true, since $\arg\left\{\dfrac{a+\mathrm{i} \ b}{c+\mathrm{i} \ d}\right\}=\arg\left\{\frac{\rho\_1}{\rho\_2}e^{\mathrm{i}(\varphi\_1-\varphi\_2)}\right\}=\varphi\_1-\varphi\_2$. | A simpler approach:
Let $z\_1=a+bi$ and $z\_2=c+di$. Since by properties of absolute value we have $|z\_1z\_2|=|z\_1||z\_2|,$ and the fact that $z\_2(\frac{z\_1}{z\_2})=z\_1$ then we have that $$\left|z\_2\frac{z\_1}{z\_2}\right|=|z\_1|\implies|z\_2|\bigg|\frac{z\_1}{z\_2}\bigg|=|z\_1|\implies \bigg|\frac{z\_1}{z\_2}\bigg|=\frac{|z\_1|}{|z\_2|}$$ |
Question: Here I am trying to execute a Linux command from a variable in file.sh.
test.sh
>
> OUT= "date";
>
> echo $OUT;
>
>
> Output:
>
> It is Working perfectly.
>
>
>
But when I try to execute the command pgrep vpnc
>
> OUT= "pgrep vpnc";
>
> echo $OUT;
>
>
> Output
>
> test.sh: 1: test.sh: pgrep vpnc: not found
>
>
>
My expectation is that when the above file is executed, it returns the pid.
I also tried with eval.
>
> OUT= "pgrep vpnc";
>
> $ eval $OUT;
>
>
> Output:
>
> test.sh: 1: test.sh: pgrep vpnc: not found
>
> test.sh: 2: test.sh: $: not found
>
>
>
Can anyone help me run a command and store its output in a variable?
Any help is highly appreciated.
Answer: | It should be a backtick (`) instead of a double quote (")
```
OUT=`pgrep process`;
echo $OUT;
```
displays the pid of the process. | Just using
```
$ $OUT
```
should run the command |
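For comparison, capturing a command's output into a variable — the shell's ``OUT=`cmd` `` (or the equivalent `OUT=$(cmd)`) — looks like this in Python's stdlib; `echo` stands in for `pgrep` so the sketch runs anywhere:

```python
import subprocess

# Equivalent of OUT=$(echo hello) in the shell: run the command, capture stdout.
out = subprocess.run(["echo", "hello"], capture_output=True, text=True).stdout.strip()
print(out)  # hello
```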
Question: I have a table with the following two fields
```
jobid | statusCode
```
Now here is what I want to do, I want to "delete the reocrd with given jobid if statusCode is less than some value, otherwise update this record with new statusCode"
My question is is there some single SQL operation to perform above mentioned job or do I have to use multiple SQL "if - else" sort of queries. My DB is SQlite3
Regards,
Farrukh Arshad.
Answer: | SQLite has a command [UPDATE OR REPLACE](http://www.sqlite.org/lang_update.html) that combines updating and inserting, but this is only for the common case where you want to avoid inserting a duplicate.
The [DELETE](http://www.sqlite.org/lang_delete.html) command does not have the functionality you want.
SQLite does not have any control flow statements, so your best bet is to execute both DELETE/UPDATE statements:
```
DELETE FROM MyTable
WHERE jobid = ?
AND statusCode < ?;
UPDATE MyTable
SET statusCode = ?
WHERE jobid = ?; -- does nothing if record was deleted
``` | Something like this should help:
```
CASE WHEN status_code < some_value
THEN
DELETE FROM MyTable
WHERE jobid = ?
ELSE
UPDATE MyTable
SET statusCode = ?
WHERE jobid = ?
END
``` |
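The chosen answer's two-statement pattern can be sketched end-to-end with Python's built-in `sqlite3` module; the threshold value 5 and the sample rows below are made up for illustration:

```python
import sqlite3

# Sketch of the DELETE-then-UPDATE pattern: delete the row when its
# statusCode is below a threshold, otherwise update it in place.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (jobid INTEGER PRIMARY KEY, statusCode INTEGER)")
conn.executemany("INSERT INTO MyTable VALUES (?, ?)", [(1, 3), (2, 9)])

def delete_or_update(jobid, new_status, threshold=5):
    conn.execute("DELETE FROM MyTable WHERE jobid = ? AND statusCode < ?",
                 (jobid, threshold))
    # Does nothing if the row was just deleted.
    conn.execute("UPDATE MyTable SET statusCode = ? WHERE jobid = ?",
                 (new_status, jobid))

delete_or_update(1, 10)   # statusCode 3 < 5  -> row deleted
delete_or_update(2, 10)   # statusCode 9 >= 5 -> updated to 10
print(conn.execute("SELECT jobid, statusCode FROM MyTable").fetchall())  # [(2, 10)]
```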
Question: <http://codepad.viper-7.com/ezvlkQ>
So, I'm trying to figure out:
```
<?php
$object = new A();
class A
{
static public $foo = 'bar';
function displayFoo()
{
echo $this->$foo;
}
}
A::displayFoo();
A->displayFoo();
?>
```
About this, how many errors can you find? Can you tell me what they are in real human terms? I can't really interpret what is and what is not okay from the validator that codepad uses...
Answer: | I’ve updated your code here <http://codepad.viper-7.com/UaUE4g>
Error 1:
```
echo $this->$foo;
```
This should read:
```
echo self::$foo;
```
.. as it is static.
Error 2:
```
A::displayFoo();
```
The method is an instance method; `::` is used for access to static methods.
Error 3:
```
A->displayFoo();
```
This is an error because `A` is undefined, and even if it were defined it should have read `$A`. This would be okay:
```
$object->displayFoo();
```
.. as $object is an instance of class A.
Next step, consult the manual on the topic *static*. | You can read up on static class members in the manual here:
<http://php.net/static>
Pay close attention to the examples. |
Question: <http://codepad.viper-7.com/ezvlkQ>
So, I'm trying to figure out:
```
<?php
$object = new A();
class A
{
static public $foo = 'bar';
function displayFoo()
{
echo $this->$foo;
}
}
A::displayFoo();
A->displayFoo();
?>
```
About this, how many errors can you find? Can you tell me what they are in real human terms? I can't really interpret what is and what is not okay from the validator that codepad uses...
Answer: | Not sure where to start. Static methods belong to the class, normal methods belong to an object, an instantiation of that class. For example, you can have:
```
Class A {
static public $foo = 'WOOHOOO';
static function displayFoo() {
echo self::$foo;
}
}
echo A::displayFoo();
```
This works because you're calling the `displayFoo` method belonging to class `A`. Or you can do this:
```
Class A {
public $foo = "WOOHOO";
public function displayFoo() {
echo $this->foo;
}
}
$obj = new A();
$obj->displayFoo();
```
Now you're creating an object based on the class of `A`. That object can call its methods. But the *object* doesn't have static methods. If you were to declare the function static, it would not be available to `$obj`.
You can't do:
```
A->displayFoo()
```
at all, under any circumstances, ever. The `->` operator assumes an object, and `A` can't be an object because it's not a variable. | You can read up on static class members in the manual here:
<http://php.net/static>
Pay close attention to the examples. |
Question: <http://codepad.viper-7.com/ezvlkQ>
So, I'm trying to figure out:
```
<?php
$object = new A();
class A
{
static public $foo = 'bar';
function displayFoo()
{
echo $this->$foo;
}
}
A::displayFoo();
A->displayFoo();
?>
```
About this, how many errors can you find? Can you tell me what they are in real human terms? I can't really interpret what is and what is not okay from the validator that codepad uses...
Answer: | I’ve updated your code here <http://codepad.viper-7.com/UaUE4g>
Error 1:
```
echo $this->$foo;
```
This should read:
```
echo self::$foo;
```
.. as it is static.
Error 2:
```
A::displayFoo();
```
The method is an instance method; `::` is used for access to static methods.
Error 3:
```
A->displayFoo();
```
This is an error because `A` is undefined, and even if it were defined it should have read `$A`. This would be okay:
```
$object->displayFoo();
```
.. as $object is an instance of class A.
Next step, consult the manual on the topic *static*. | Not sure where to start. Static methods belong to the class, normal methods belong to an object, an instantiation of that class. For example, you can have:
```
Class A {
static public $foo = 'WOOHOOO';
static function displayFoo() {
echo self::$foo;
}
}
echo A::displayFoo();
```
This works because you're calling the `displayFoo` method belonging to class `A`. Or you can do this:
```
Class A {
public $foo = "WOOHOO";
public function displayFoo() {
echo $this->foo;
}
}
$obj = new A();
$obj->displayFoo();
```
Now you're creating an object based on the class of `A`. That object can call its methods. But the *object* doesn't have static methods. If you were to declare the function static, it would not be available to `$obj`.
You can't do:
```
A->displayFoo()
```
at all, under any circumstances, ever. The `->` operator assumes an object, and `A` can't be an object because it's not a variable. |
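The instance-versus-static distinction the answers describe is not PHP-specific. As a cross-language illustration, here is a rough Python analog, using a class attribute and a `classmethod` (these only approximate PHP's `static`):

```python
class A:
    foo = "bar"  # class attribute, roughly PHP's static $foo

    def display_foo(self):
        # Instance method: needs an object, like $object->displayFoo()
        return self.foo

    @classmethod
    def display_foo_static(cls):
        # Class-level access, roughly self::$foo / A::displayFoo()
        return cls.foo

obj = A()
print(obj.display_foo())       # instance call -> bar
print(A.display_foo_static())  # class call, no instance needed -> bar
```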
Question: I have a sample code for a microcontroller.
There is a structure `typedef`d as shown below.
```
typedef struct _AT91S_SYS {
AT91_REG AIC_SMR[32]; // Source Mode Register
AT91_REG AIC_SVR[32]; // Source Vector Register
AT91_REG AIC_IVR; // IRQ Vector Register
...
} AT91S_SYS, *AT91PS_SYS;
```
I have used `typedef` with structs like `} AT91S_SYS;`.
What does this additional part do? `*AT91PS_SYS;` in `} AT91S_SYS, *AT91PS_SYS;`
Is it a pointer to the `struct _AT91S_SYS` type?
`AT91_REG` is a `typedef` of `volatile unsigned int`
Answer: | Yes, you are right, the syntax is equivalent to this:
```
typedef struct _AT91S_SYS AT91S_SYS;
typedef struct _AT91S_SYS *AT91PS_SYS;
```
So `AT91PS_SYS` is a pointer type of `AT91S_SYS`. | This just defines the type `AT91PS_SYS` as a pointer to `AT91S_SYS`.
---
The easiest way to understand `typedef`, by the way, is to read the rest of the declaration as if it were just a variable declaration. But, instead of defining variables, you're defining types using whatever type the variable *would* have had.
So, for example,
```
int x, *y, z[5];
```
defines three variables, `int x`, `int *y` and `int z[5]`.
Therefore,
```
typedef int x, *y, z[5];
```
defines three types, `x == int`, `y == int *` and `z == int[5]`. Simple! |
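As a cross-language sketch, the same struct-plus-pointer-type pair can be modelled with Python's `ctypes` (only the three fields shown in the question are reproduced, and `volatile` has no direct `ctypes` equivalent):

```python
import ctypes

# The C typedef declares both a struct type and a pointer-to-struct
# type in one statement; ctypes can model the same pair.
AT91_REG = ctypes.c_uint  # stands in for "volatile unsigned int"

class AT91S_SYS(ctypes.Structure):
    _fields_ = [
        ("AIC_SMR", AT91_REG * 32),  # Source Mode Register
        ("AIC_SVR", AT91_REG * 32),  # Source Vector Register
        ("AIC_IVR", AT91_REG),       # IRQ Vector Register
    ]

AT91PS_SYS = ctypes.POINTER(AT91S_SYS)  # the *AT91PS_SYS part

regs = AT91S_SYS()
p = ctypes.pointer(regs)        # a value of the AT91PS_SYS pointer type
print(isinstance(p, AT91PS_SYS))  # True
print(p.contents.AIC_IVR)         # 0 (fields are zero-initialized)
```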
Question: I have a "Dataset(Row)" as below
```
+-----+--------------+
|val | history |
+-----+--------------+
|500 |[a=456, a=500]|
|800 |[a=456, a=500]|
|784 |[a=456, a=500]|
+-----+--------------+
```
Here val is a "String" and history is a "string array". I'm trying to add the content of the val column to the history column, so that my dataset looks like:
```
+-----+---------------------+
|val | history |
+-----+---------------------+
|500 |[a=456, b=500, c=500]|
|800 |[a=456, b=500, c=800]|
|784 |[a=456, b=500, c=784]|
+-----+---------------------+
```
A similar question is discussed here <https://stackoverflow.com/a/49685271/2316771> , but I don't know scala and couldn't create a similar java solution.
Please help me achieve this in Java.
Answer: | In Spark 2.4 (not before), you can use the `concat` function to concat two arrays. In your case, you could do something like:
```java
df.withColumn("val2", concat(lit("c="), col("val")))
.select(concat(col("history"), array(col("val2")));
```
NB: the first time I use `concat` is to concat strings, the second time, to concat arrays. `array(col("val2"))` creates an array of one element. | I coded a solution but I'm not sure if it can be further optimized
```
dataset.map(row -> {
Seq<String> seq = row.getAs("history");
ArrayList<String> list = new ArrayList<>(JavaConversions.seqAsJavaList(seq));
list.add("c="+row.getAs("val"));
return RowFactory.create(row.getAs("val"),list.toArray(new String[0]));},schema);
``` |
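Setting the Spark API aside, the row-wise logic both answers implement is simply "append `c=<val>` to each row's history array". A plain-Python sketch of that transformation (the sample rows are made up to match the question's table):

```python
# Each dict stands in for one Dataset row with columns "val" and "history".
rows = [
    {"val": "500", "history": ["a=456", "b=500"]},
    {"val": "800", "history": ["a=456", "b=500"]},
]

# Append "c=<val>" to the history array of every row, without
# mutating the originals (mirroring Spark's immutable columns).
transformed = [
    {"val": r["val"], "history": r["history"] + ["c=" + r["val"]]}
    for r in rows
]
print(transformed[0]["history"])  # ['a=456', 'b=500', 'c=500']
```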
Question: I need some help with this problem:
**Delete a row from a `UITableView` after the audio of that row has finished playing.**
I have a UITableView with some cells. Every cell has a unique sound. If you select a row, the audio of the cell plays.
I want the selected row to be deleted after the audio has finished.
Here you can see when the audio has finished playing:
```
- (void)audioPlayerDidFinishPlaying:(AVAudioPlayer *)player successfully:(BOOL)flag{
UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Done"
message:@"Player finish."
delegate:nil
cancelButtonTitle:@"OK"
otherButtonTitles:nil];
[alert show];
}
```
Answer: | As I noted in a comment, OCaml's `float` are boxed, which puts OCaml to a disadvantage compared to Clang.
However, I may be noticing another typical rough edge trying OCaml after Haskell:
if I see what your program is doing, you are creating a list of stuff, to then map a function on that list and finally fold it into a result.
In Haskell, you could more or less expect such a program to be automatically “[deforested](http://homepages.inf.ed.ac.uk/wadler/papers/deforest/deforest.ps)” at compile-time, so that the resulting generated code was an efficient implementation of the task at hand.
In OCaml, the fact that functions can have side-effects, and in particular functions passed to high-order functions such as map and fold, means that it would be much harder for the compiler to deforest automatically. The programmer has to do it by hand.
In other words: stop building huge short-lived data structures such as `0 -- n` and `(efficient_map summand (0 -- n))`. When your program decides to tackle a new summand, make it do all it wants to do with that summand in a single pass. You can see this as an exercise in applying the principles in Wadler's article (again, by hand, because for various reasons the compiler will not do it for you despite your program being pure).
---
Here are some results:
```
$ ocamlopt v2.ml
$ time ./a.out 1000000
3.14159165359
real 0m0.020s
user 0m0.013s
sys 0m0.003s
$ ocamlopt v1.ml
$ time ./a.out 1000000
3.14159365359
real 0m0.238s
user 0m0.204s
sys 0m0.029s
```
v1.ml is your version. v2.ml is what you might consider an idiomatic OCaml version:
```
let rec q_pi_approx p n acc =
if n = p
then acc
else q_pi_approx (succ p) n (acc +. (summand p))
let n = int_of_string Sys.argv.(1);;
Printf.printf "%F\n" (4. *. (q_pi_approx 0 n 0.));;
```
(reusing `summand` from your code)
It might be more accurate to sum from the last terms to the first, instead of from the first to the last. This is orthogonal to your question, but you may consider it as an exercise in modifying a function that has been forcefully made tail-recursive. Besides, the `(-1.) ** m` expression in `summand` is mapped by the compiler to a call to the `pow()` function on the host, and that's [a bag of hurt](http://blog.frama-c.com/index.php?post/2013/04/06/Non-experts-need-accurate-floating-point-the-most) you may want to avoid. | I would like to add that although floats are boxed in OCaml, **float arrays** are unboxed. Here is a program that builds a float array corresponding to the Leibnitz sequence and uses it to approximate π:
```
open Array
let q_pi_approx n =
let summand n =
let m = float_of_int n
in (-1.) ** m /. (2. *. m +. 1.) in
let a = Array.init n summand in
Array.fold_left (+.) 0. a
let n = int_of_string Sys.argv.(1);;
Printf.printf "%F\n" (4. *. (q_pi_approx n));;
```
Obviously, it is still slower than a code that doesn't build any data structure at all. Execution times (the version with array is the last one):
```
time ./v1 10000000
3.14159275359
real 0m2.479s
user 0m2.380s
sys 0m0.104s
time ./v2 10000000
3.14159255359
real 0m0.402s
user 0m0.400s
sys 0m0.000s
time ./a 10000000
3.14159255359
real 0m0.453s
user 0m0.432s
sys 0m0.020s
``` |
Question: I need some help with this problem:
**Delete a row from a `UITableView` after the audio of that row has finished playing.**
I have a UITableView with some cells. Every cell has a unique sound. If you select a row, the audio of the cell plays.
I want the selected row to be deleted after the audio has finished.
Here you can see when the audio has finished playing:
```
- (void)audioPlayerDidFinishPlaying:(AVAudioPlayer *)player successfully:(BOOL)flag{
UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Done"
message:@"Player finish."
delegate:nil
cancelButtonTitle:@"OK"
otherButtonTitles:nil];
[alert show];
}
```
Answer: | As I noted in a comment, OCaml's `float` are boxed, which puts OCaml to a disadvantage compared to Clang.
However, I may be noticing another typical rough edge trying OCaml after Haskell:
if I see what your program is doing, you are creating a list of stuff, to then map a function on that list and finally fold it into a result.
In Haskell, you could more or less expect such a program to be automatically “[deforested](http://homepages.inf.ed.ac.uk/wadler/papers/deforest/deforest.ps)” at compile-time, so that the resulting generated code was an efficient implementation of the task at hand.
In OCaml, the fact that functions can have side-effects, and in particular functions passed to high-order functions such as map and fold, means that it would be much harder for the compiler to deforest automatically. The programmer has to do it by hand.
In other words: stop building huge short-lived data structures such as `0 -- n` and `(efficient_map summand (0 -- n))`. When your program decides to tackle a new summand, make it do all it wants to do with that summand in a single pass. You can see this as an exercise in applying the principles in Wadler's article (again, by hand, because for various reasons the compiler will not do it for you despite your program being pure).
---
Here are some results:
```
$ ocamlopt v2.ml
$ time ./a.out 1000000
3.14159165359
real 0m0.020s
user 0m0.013s
sys 0m0.003s
$ ocamlopt v1.ml
$ time ./a.out 1000000
3.14159365359
real 0m0.238s
user 0m0.204s
sys 0m0.029s
```
v1.ml is your version. v2.ml is what you might consider an idiomatic OCaml version:
```
let rec q_pi_approx p n acc =
if n = p
then acc
else q_pi_approx (succ p) n (acc +. (summand p))
let n = int_of_string Sys.argv.(1);;
Printf.printf "%F\n" (4. *. (q_pi_approx 0 n 0.));;
```
(reusing `summand` from your code)
It might be more accurate to sum from the last terms to the first, instead of from the first to the last. This is orthogonal to your question, but you may consider it as an exercise in modifying a function that has been forcefully made tail-recursive. Besides, the `(-1.) ** m` expression in `summand` is mapped by the compiler to a call to the `pow()` function on the host, and that's [a bag of hurt](http://blog.frama-c.com/index.php?post/2013/04/06/Non-experts-need-accurate-floating-point-the-most) you may want to avoid. | I've also tried several variants, here are my conclusions:
1. Using arrays
2. Using recursion
3. Using imperative loop
The recursive function is about 30% more efficient than the array implementation. The imperative loop is approximately as efficient as recursion (maybe even a little slower).
Here're my implementations:
Array:
------
```
open Core.Std
let pi_approx n =
let f m = (-1.) ** m /. (2. *. m +. 1.) in
let qpi = Array.init n ~f:Float.of_int |>
Array.map ~f |>
Array.reduce_exn ~f:(+.) in
qpi *. 4.0
```
Recursion:
----------
```
let pi_approx n =
let rec loop n acc m =
if m = n
then acc *. 4.0
else
let acc = acc +. (-1.) ** m /. (2. *. m +. 1.) in
loop n acc (m +. 1.0) in
let n = float_of_int n in
loop n 0.0 0.0
```
This can be further optimized by moving the local function `loop` outside, so that the compiler can inline it.
Imperative loop:
----------------
```
let pi_approx n =
let sum = ref 0. in
for m = 0 to n -1 do
let m = float_of_int m in
sum := !sum +. (-1.) ** m /. (2. *. m +. 1.)
done;
4.0 *. !sum
```
But in the code above, creating a `ref` for `sum` will incur boxing/unboxing on each step, so we can further optimize this code by using the `float_ref` [trick](https://janestreet.github.io/ocaml-perf-notes.html):
```
type float_ref = { mutable value : float}
let pi_approx n =
let sum = {value = 0.} in
for m = 0 to n - 1 do
let m = float_of_int m in
sum.value <- sum.value +. (-1.) ** m /. (2. *. m +. 1.)
done;
4.0 *. sum.value
```
Scoreboard
----------
```
for-loop (with float_ref) : 1.0
non-local recursion : 0.89
local recursion : 0.86
Pascal's version : 0.77
for-loop (with float ref) : 0.62
array : 0.47
original : 0.08
```
Update
------
I've updated the answer, as I've found a way to get a 40% speedup (or 33% in comparison with @Pascal's answer). |
Question: I need some help with this problem:
**Delete a row from a `UITableView` after the audio of that row has finished playing.**
I have a UITableView with some cells. Every cell has a unique sound. If you select a row, the audio of the cell plays.
I want the selected row to be deleted after the audio has finished.
Here you can see when the audio has finished playing:
```
- (void)audioPlayerDidFinishPlaying:(AVAudioPlayer *)player successfully:(BOOL)flag{
UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Done"
message:@"Player finish."
delegate:nil
cancelButtonTitle:@"OK"
otherButtonTitles:nil];
[alert show];
}
```
Answer: | I've also tried several variants, here are my conclusions:
1. Using arrays
2. Using recursion
3. Using imperative loop
The recursive function is about 30% more efficient than the array implementation. The imperative loop is approximately as efficient as recursion (maybe even a little slower).
Here're my implementations:
Array:
------
```
open Core.Std
let pi_approx n =
let f m = (-1.) ** m /. (2. *. m +. 1.) in
let qpi = Array.init n ~f:Float.of_int |>
Array.map ~f |>
Array.reduce_exn ~f:(+.) in
qpi *. 4.0
```
Recursion:
----------
```
let pi_approx n =
let rec loop n acc m =
if m = n
then acc *. 4.0
else
let acc = acc +. (-1.) ** m /. (2. *. m +. 1.) in
loop n acc (m +. 1.0) in
let n = float_of_int n in
loop n 0.0 0.0
```
This can be further optimized by moving the local function `loop` outside, so that the compiler can inline it.
Imperative loop:
----------------
```
let pi_approx n =
let sum = ref 0. in
for m = 0 to n -1 do
let m = float_of_int m in
sum := !sum +. (-1.) ** m /. (2. *. m +. 1.)
done;
4.0 *. !sum
```
But in the code above, creating a `ref` for `sum` will incur boxing/unboxing on each step, so we can further optimize this code by using the `float_ref` [trick](https://janestreet.github.io/ocaml-perf-notes.html):
```
type float_ref = { mutable value : float}
let pi_approx n =
let sum = {value = 0.} in
for m = 0 to n - 1 do
let m = float_of_int m in
sum.value <- sum.value +. (-1.) ** m /. (2. *. m +. 1.)
done;
4.0 *. sum.value
```
Scoreboard
----------
```
for-loop (with float_ref) : 1.0
non-local recursion : 0.89
local recursion : 0.86
Pascal's version : 0.77
for-loop (with float ref) : 0.62
array : 0.47
original : 0.08
```
Update
------
I've updated the answer, as I've found a way to get a 40% speedup (or 33% in comparison with @Pascal's answer). |
```
open Array
let q_pi_approx n =
let summand n =
let m = float_of_int n
in (-1.) ** m /. (2. *. m +. 1.) in
let a = Array.init n summand in
Array.fold_left (+.) 0. a
let n = int_of_string Sys.argv.(1);;
Printf.printf "%F\n" (4. *. (q_pi_approx n));;
```
Obviously, it is still slower than a code that doesn't build any data structure at all. Execution times (the version with array is the last one):
```
time ./v1 10000000
3.14159275359
real 0m2.479s
user 0m2.380s
sys 0m0.104s
time ./v2 10000000
3.14159255359
real 0m0.402s
user 0m0.400s
sys 0m0.000s
time ./a 10000000
3.14159255359
real 0m0.453s
user 0m0.432s
sys 0m0.020s
``` |
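As a cross-language footnote to the OCaml variants above, here is the same Leibniz sum written as a plain accumulating loop in Python (a sketch of the logic only; no timing comparison is implied):

```python
import math

def q_pi_approx(n):
    """Approximate pi with n terms of the Leibniz series,
    mirroring the tail-recursive OCaml q_pi_approx above."""
    acc = 0.0
    for m in range(n):
        acc += (-1.0) ** m / (2.0 * m + 1.0)
    return 4.0 * acc

print(q_pi_approx(1_000_000))                  # ~3.1415916..., matching the OCaml runs
print(abs(q_pi_approx(1_000_000) - math.pi))   # error on the order of 1e-6
```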
Question: I have a procedure that is valid and has in it an insert..select statement. Now there is a case where execution of this procedure produces "ORA-00904: : invalid identifier" error from this statement. How is that even theoretically possible? There are no triggers or dynamic SQL.
Also, the ORA-00904 text in sqlerrm does not point to any specific identifier that Oracle considers invalid.
Oracle version 9.2.0.8
edit2:
Turns out there was a problem with a function that was called from within that select (replaced it with constants and everything worked). Probably that was the reason that ORA-00904 did not give an identifier. Still, the question remains: how can it be that precompiled code with no dynamic SQL gives this error?
Answer: | I think this kind of error might happen when you access a package where the package is valid but the body needs compilation and throws the exception.
Another reason might be code with authid current\_user: it runs with the privileges of the current user (not, as usual, with the privileges of the owning user). Such a procedure might fail when called by one user and succeed when executed by another. | Since you already found the solution, this wasn't your problem. But I wanted to add a note that you get this error if the package function has a body, but there's no function signature in the Spec sheet.
Question: I have a Web application which will be deployed to Windows Azure and I'm looking for alternatives to generate Excel spreadsheets.
Can I use VSTO to programatically generate an Excel spreadsheet in a Web Role running on Windows Azure?... If yes, how should I deploy the application to Windows Azure? What assemblies should I include?
Answer: | I tested this and apparently it won't work: VSTO requires Office to be installed. | Joni and Joe are correct. VSTO will not run on Azure.
I believe you're looking for the [Open XML SDK](http://msdn.microsoft.com/en-us/library/bb448854.aspx). That lets you create Excel or other Office files from .NET without using Office automation.
**Edit:** Here's one option I have considered for those times when the Open XML SDK just doesn't have the functionality I can get from accessing an Office app's object model directly. Set up a machine outside of Azure that operates just as an Azure worker role would by processing messages from an Azure Queue. Since the app on that machine could be setup to execute tasks sequentially, you should be able to get away with things that wouldn't be advisable if you were trying to execute an Office app from a web role. This machine could be in your data center, or you could use an Azure VM so that you could install Office. If that VM was creating and/or reading Excel documents, then just use Azure Blob storage to store the documents.
If the machine running Office is outside of Azure, you will incur additional bandwidth costs for all the data coming in and out of Azure. |
Question: I have a Web application which will be deployed to Windows Azure and I'm looking for alternatives to generate Excel spreadsheets.
Can I use VSTO to programatically generate an Excel spreadsheet in a Web Role running on Windows Azure?... If yes, how should I deploy the application to Windows Azure? What assemblies should I include?
Answer: | I tested this and apparently it won't work: VSTO requires Office to be installed. | I've been successful with generating **Excel Spreadsheets in Azure using the [EPPlus open source](http://epplus.codeplex.com) project**. It builds on the OpenXML SDK, but is much simpler to use.
I've been deploying the code to Worker Role instead of Web Role (as per [Lokad.CQRS for Azure](http://code.google.com/p/lokad-cqrs/) architecture) in order to pregenerate reports, so that the application would be more scalable. And **the only required assembly was the Epplus.dll**. |
Question: I have trouble in understanding the proof of a theorem on compact sets and limit points in Rudin's book.
>
> Theorem: If $E$ is an infinite subset of a compact set $K$, then $E$ has a limit point in $K$.
>
>
> Proof: If no point of $K$ were a limit point of $E$, then each $q\in K$ would have a neighborhood $V\_q$ which contains at most one point of $E$. It is clear that no finite subcollection of $\{V\_q\}$ can cover $E$, and thus $K$. This contradicts the compactness of $K$.
>
>
>
My question is: How do we know the collection $\{V\_q\}$ is an open cover of $K$, given $V\_q\cap E=\emptyset$ or $q\in E$? I mean, the radius of each open neighborhood around $q$ is fixed, because each punctured neighborhood has to satisfy the condition $V\_q^\* \cap E=\emptyset$, where $\*$ denotes "punctured." In this case, how can we ensure the collection of such neighborhoods still covers $K$?
Answer: | Since $V\_q$ is a neighborhood of $q$, $q\in V\_q$. And so$$K=\bigcup\_{q\in K}\{q\}\subset\bigcup\_{q\in K}V\_q.$$ | Correct me if wrong.
A bit of context:
Let $X$ be a metric space.
$E \subset K \subset X$, $K$ compact, $E$ infinite.
Definition:
$p \in K$ is a limit point of $E$ if every neighbourhood of $p$ contains a point $q \not = p$ where $q \in E$.
Negation: $p$ is not a limit point of $E$:
There is a neighbourhood of $p$ that does not contain a $q \not = p$, $q \in E$.
Either:
1) $p \not \in E$ , then there is a $V\_p$ such that
$V\_p \cap E =\emptyset.$
Or
2) $p \in E$, then $V\_p \cap E = \{p\}$.
Recall $E \subset K \subset \bigcup\_{p} V\_p$, $p \in K$.
Since $E$ is infinite, no finite subcollection can cover $E$, contradiction to $K$ compact.
P.S. Short proof of the above statement.
Since $K$ compact there is a finite subcollection
$E \subset K \subset \bigcup\_{i} V\_{p\_i}$ , $i=1,2,...n.$
Recall; For every $i=1,2,...,n$, $V\_{p\_i}$ has at most $1$ element of $E$.
Hence a contradiction to $E$ is infinite. |
Question: VB 6.0 does not have any global handler. To catch runtime errors, we need to add a handler in each method where we feel an error can occur. But still, some places might be left out, so we end up getting runtime errors. Is adding an error handler in all the methods of an application the only way?
Answer: | No, there is no way to add a global error handler in VB6. However, you do not need to add an error handler in every method. You only really need to add an error handler in every event handler, e.g. every click event, load event, etc. | `on error resume next` is kinda close, but it's been a while.
You might want to look up any caveats. |
Question: VB 6.0 does not have any global handler. To catch runtime errors, we need to add a handler in each method where we feel an error can occur. But still, some places might be left out, so we end up getting runtime errors. Is adding an error handler in all the methods of an application the only way?
Answer: | No, there is no way to add a global error handler in VB6. However, you do not need to add an error handler in every method. You only really need to add an error handler in every event handler, e.g. every click event, load event, etc. | Also: errors do propagate upwards: if method X calls methods Y and Z, a single error handler in method X will cover all three methods.
Question: VB 6.0 does not have any global handler. To catch runtime errors, we need to add a handler in each method where we feel an error can occur. But still, some places might be left out, so we end up getting runtime errors. Is adding an error handler in all the methods of an application the only way?
Answer: | No, there is no way to add a global error handler in VB6. However, you do not need to add an error handler in every method. You only really need to add an error handler in every event handler, e.g. every click event, load event, etc. | While errors do propagate upwards, VB6 has no way to do a stack trace, so you never know which method raised the error. Unfortunately, if you need this information, you have to add a handler to each method just to log where you were.
Question: VB 6.0 does not have any global handler. To catch runtime errors, we need to add a handler in each method where we feel an error can occur. But still, some places might be left out, so we end up getting runtime errors. Is adding an error handler in all the methods of an application the only way?
Answer: | No, there is no way to add a global error handler in VB6. However, you do not need to add an error handler in every method. You only really need to add an error handler in every event handler, e.g. every click event, load event, etc. | I discovered this tool yesterday:
<http://www.everythingaccess.com/simplyvba-global-error-handler.htm>
It is a commercial product that enables global error handling in VB6 and VBA applications.
It has its cost but does its job perfectly. I have seen other tools (free, though) that help with this VB6 mangle, but none covers true global error handling the way "SimplyVB6 Global Error Handler for VB6" does.
With "SimplyVB6 Global Error Handler for VB6", there is no need to change any line of existing code, and no need to number the lines of code (via a plug-in or something).
Just enable Global error handling (one line of code in the main module) and you are all set.
"SimplyVB6 Global Error Handler for VB6":
* can show the call stack with real module and function names, as well as display the source code line.
* Works only with P-Code compiled VB6 programs.
* can work via early or late binding (no DLL Hell).
I am not in any way affiliated to www.everythingaccess.com, just happy to have found it yesterday afternoon, was kind of looking at this problem again as one of my customers was having bugs in our VB6 application. I was able to test the tool yesterday afternoon, exchanging emails with the www.everythingaccess.com support and getting the evaluation product per mail.
Their web site does not yet allow downloading the evaluation version of the VB6 product; you have to email them, but they answer in less than an hour.
Question: VB 6.0 does not have any global handler. To catch runtime errors, we need to add a handler in each method where we feel an error can occur. But still, some places might be left out, so we end up getting runtime errors. Is adding an error handler in all the methods of an application the only way?
Answer: | Also: errors do propagate upwards: if method X calls methods Y and Z, a single error handler in method X will cover all three methods. | `on error resume next` is kinda close, but it's been a while.
You might want to look up any caveats. |