Q: What is the best method for checking if a file exists from a SQL Server 2005 stored procedure? We used the "undocumented" xp_fileexist stored procedure for years in SQL Server 2000 and had no trouble with it. In 2005, it seems that they modified the behavior slightly to always return a 0 if the executing user account is not a sysadmin. It also seems to return a zero if the SQL Server service is running under the LocalSystem account and you are trying to check a file on the network.
I'd like to get away from xp_fileexist. Does anyone have a better way to check for the existence of a file at a network location from inside of a stored procedure?
A: You will have to mark the CLR assembly as EXTERNAL_ACCESS in order to get access to the System.IO namespace, but as things go that is not a bad way to go about it.
SAFE is the default permission set, but it’s highly restrictive. With the SAFE setting, you can access only data from a local database to perform computational logic on that data.
EXTERNAL_ACCESS is the next step in the permissions hierarchy. This setting lets you access external resources such as the file system, Windows Event Viewer, and Web services. This type of resource access isn’t possible in SQL Server 2000 and earlier. This permission set also restricts operations such as pointer access that affect the robustness of your assembly.
The UNSAFE permission set assumes full trust of the assembly and thus imposes no "Code Access Security" limitations. This setting is comparable to the way extended stored procedures function—you assume all the code is safe. However, this setting does restrict the creation of unsafe assemblies to users who have sysadmin permissions. Microsoft recommends that you avoid creating unsafe assemblies as much as possible.
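For reference, the permission set is declared when the assembly is registered in SQL Server. A minimal T-SQL sketch (the assembly name and path are placeholders, not taken from the question):
CREATE ASSEMBLY FileUtilities
FROM 'C:\assemblies\FileUtilities.dll'
WITH PERMISSION_SET = EXTERNAL_ACCESS;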
A: Maybe a CLR stored procedure is what you are looking for. These are generally used when you need to interact with the system in some way.
A: I still believe that a CLR procedure might be the best bet. So, I'm accepting that answer. However, either I'm not that bright or it's extremely difficult to implement. Our SQL Server service is running under a local account because, according to Microsoft, that's the only way to get an iSeries linked server working from a 64-bit SQL Server 2005 instance. When we change the SQL Server service to run with a domain account, the xp_fileexist command works fine for files located on the network.
I created this CLR stored procedure and built it with the permission level set to External and signed it:
using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;
using System.Security.Principal;
public partial class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlProcedure]
    public static void FileExists(SqlString fileName, out SqlInt32 returnValue)
    {
        WindowsImpersonationContext originalContext = null;
        try
        {
            // Impersonate the calling Windows user so the file check runs under their credentials.
            WindowsIdentity callerIdentity = SqlContext.WindowsIdentity;
            originalContext = callerIdentity.Impersonate();
            if (System.IO.File.Exists(Convert.ToString(fileName)))
            {
                returnValue = 1;
            }
            else
            {
                returnValue = 0;
            }
        }
        catch (Exception)
        {
            returnValue = -1;
        }
        finally
        {
            if (originalContext != null)
            {
                originalContext.Undo();
            }
        }
    }
}
Then I ran these TSQL commands:
USE master
GO
CREATE ASYMMETRIC KEY FileUtilitiesKey FROM EXECUTABLE FILE = 'J:\FileUtilities.dll'
CREATE LOGIN CLRLogin FROM ASYMMETRIC KEY FileUtilitiesKey
GRANT EXTERNAL ACCESS ASSEMBLY TO CLRLogin
ALTER DATABASE database SET TRUSTWORTHY ON;
Then I deployed CLR stored proc to my target database from Visual Studio and used this TSQL to execute from SSMS logged in with windows authentication:
DECLARE @i INT
--EXEC FileExists '\\server\share\folder\file.dat', @i OUT
EXEC FileExists 'j:\file.dat', @i OUT
SELECT @i
Whether I try a local file or a network file, I always get a 0. I may try again later, but for now, I'm going to try to go down a different road. If anyone has some light to shed, it would be much appreciated.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16634",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Google Maps API - Problems with class GLatLngBounds I am having some trouble with the Google Maps API. I have an array which holds an object I created to store points.
My array and class:
var tPoints = [];
function tPoint(name) {
    var id = name;
    var points = [];
    var pointsCount = 0;
    ...
    this.getHeadPoint = function() { return points[pointsCount-1]; }
}
tPoint holds an array of GLatLng points. I want to write a function to return a GLatLngBounds object which is extended from the current map bounds to show all the HeadPoints.
Here's what I have so far...
function getBounds() {
    var mBound = map.getBounds();
    for (var i = 0; i < tPoints.length; i++) {
        alert(mBound.getSouthWest().lat() + "," + mBound.getSouthWest().lng());
        alert(mBound.getNorthEast().lat() + "," + mBound.getNorthEast().lng());
        currPoint = trackMarkers[i].getHeadPoint();
        if (!mBound.containsLatLng(currPoint)) {
            mBound.extend(currPoint);
        }
    }
    return mBound;
}
Which returns these values for the alert. (Generally over the US)
"19.64258,NaN" "52.69636,NaN" "i=0"
"19.64258,NaN" "52.69636,-117.20701" "i=1"
I don't know why I am getting NaN back.
When I use the bounds to get a zoom level I think the NaN value is causing the map.getBoundsZoomLevel(bounds) to return 0 which is incorrect. Am I using GLatLngBounds incorrectly?
A: The google maps sample is using this code...
var bounds = map.getBounds();
var southWest = bounds.getSouthWest();
var northEast = bounds.getNorthEast();
var lngSpan = northEast.lng() - southWest.lng();
var latSpan = northEast.lat() - southWest.lat();
...which is putting the SouthWest/NorthEast bounds into a variable before attempting to get the individual lng/lat coordinates. Maybe there is something with the "nested" evaluations causing problems. Have you tried the granular approach to see if you get the data you need?
A: I found that example through my Google searches too and did play with it. That wasn't the problem.
I found my bug. No one would have been able to solve the problem. It turns out that right before I test my bounds I had centered my map with bad data. I did something like the lngSpan = northEast.lng() - southWest.lng(); however JavaScript interpreted my var as a string. So (maxLng-minLng)/2 + minLng returns something like "20.456-116.1178" as the lng. I centered my map on var centerPoint = new GLatLng(setLat, setLng); and after that the maps API gets a little strange ;)
Thanks for the help though.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16638",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Creating temporary folders I am working on a program that needs to create multiple temporary folders for the application. These will not be seen by the user. The app is written in VB.net. I can think of a few ways to do it, such as incremental folder names or random numbered folder names, but I was wondering how other people solve this problem.
A: Just to clarify:
System.IO.Path.GetTempPath()
returns just the folder path to the temp folder.
System.IO.Path.GetTempFileName()
returns the fully qualified file name (including the path) so this:
System.IO.Path.Combine(System.IO.Path.GetTempPath(), System.IO.Path.GetTempFileName())
is redundant.
A: There's a possible race condition when:
*
*creating a temp file with GetTempFileName(), deleting it, and making a folder with the same name, or
*using GetRandomFileName() or Guid.NewGuid.ToString to name a folder and creating the folder later
With GetTempFileName() after the delete occurs, another application could successfully create a temp file with the same name. The CreateDirectory() would then fail.
Similarly, between calling GetRandomFileName() and creating the directory another process could create a file or directory with the same name, again resulting in CreateDirectory() failing.
For most applications it's acceptable if creating a temp directory occasionally fails due to a race condition; it's extremely rare, after all, so these races can often be ignored.
In the Unix shell scripting world, creating temp files and directories in a safe race-free way is a big deal. Many machines have multiple (hostile) users -- think shared web host -- and many scripts and applications need to safely create temp files and directories in the shared /tmp directory. See Safely Creating Temporary Files in Shell Scripts for a discussion on how to safely create temp directories from shell scripts.
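For what it's worth, the standard tool for that on the shell side is mktemp, which creates the directory atomically. A minimal sketch (GNU mktemp shown; some systems require an explicit template such as mktemp -d /tmp/myapp.XXXXXX):
# Creates a uniquely named directory and prints its path; fails rather than reuse an existing name.
tmpdir=$(mktemp -d) || exit 1
echo "working in $tmpdir"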
A: As @JonathanWright pointed out, race conditions exist for the solutions:
*
*Create a temporary file with GetTempFileName(), delete it, and create a folder with the same name
*Use GetRandomFileName() or Guid.NewGuid.ToString to create a random folder name, check whether it exists, and create it if not.
It is possible, however, to create a unique temporary directory atomically by utilizing the Transactional NTFS (TxF) API.
TxF has a CreateDirectoryTransacted() function that can be invoked via Platform Invoke. To do this, I adapted Mohammad Elsheimy's code for calling CreateFileTransacted():
// using System.ComponentModel;
// using System.IO;
// using System.Runtime.InteropServices;
// using System.Transactions;
[ComImport]
[Guid("79427a2b-f895-40e0-be79-b57dc82ed231")]
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
public interface IKernelTransaction
{
    void GetHandle(out IntPtr pHandle);
}
// 2.2 Win32 Error Codes <http://msdn.microsoft.com/en-us/library/cc231199.aspx>
public const int ERROR_PATH_NOT_FOUND = 0x3;
public const int ERROR_ALREADY_EXISTS = 0xb7;
public const int ERROR_EFS_NOT_ALLOWED_IN_TRANSACTION = 0x1aaf;
[DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Auto)]
public static extern bool CreateDirectoryTransacted(string lpTemplateDirectory, string lpNewDirectory, IntPtr lpSecurityAttributes, IntPtr hTransaction);
/// <summary>
/// Creates a uniquely-named directory in the directory named by <paramref name="tempPath"/> and returns the path to it.
/// </summary>
/// <param name="tempPath">Path of a directory in which the temporary directory will be created.</param>
/// <returns>The path of the newly-created temporary directory within <paramref name="tempPath"/>.</returns>
public static string GetTempDirectoryName(string tempPath)
{
    string retPath;
    using (TransactionScope transactionScope = new TransactionScope())
    {
        IKernelTransaction kernelTransaction = (IKernelTransaction)TransactionInterop.GetDtcTransaction(Transaction.Current);
        IntPtr hTransaction;
        kernelTransaction.GetHandle(out hTransaction);
        // Keep generating random names until the transacted create succeeds.
        while (!CreateDirectoryTransacted(null, retPath = Path.Combine(tempPath, Path.GetRandomFileName()), IntPtr.Zero, hTransaction))
        {
            int lastWin32Error = Marshal.GetLastWin32Error();
            switch (lastWin32Error)
            {
                case ERROR_ALREADY_EXISTS:
                    break;
                default:
                    throw new Win32Exception(lastWin32Error);
            }
        }
        transactionScope.Complete();
    }
    return retPath;
}
/// <summary>
/// Equivalent to <c>GetTempDirectoryName(Path.GetTempPath())</c>.
/// </summary>
/// <seealso cref="GetTempDirectoryName(string)"/>
public static string GetTempDirectoryName()
{
    return GetTempDirectoryName(Path.GetTempPath());
}
A: Something like...
using System.IO;
string path = Path.GetTempPath() + Path.GetRandomFileName();
while (Directory.Exists(path))
    path = Path.GetTempPath() + Path.GetRandomFileName();
Directory.CreateDirectory(path);
A: Update: Added File.Exists check per comment (2012-Jun-19)
Here's what I've used in VB.NET. Essentially the same as presented, except I usually didn't want to create the folder immediately.
The advantage of using GetRandomFileName is that it doesn't create a file, so you don't have to clean up if you're using the name for something other than a file, like using it for a folder name.
Private Function GetTempFolder() As String
    Dim folder As String = Path.Combine(Path.GetTempPath, Path.GetRandomFileName)
    Do While Directory.Exists(folder) Or File.Exists(folder)
        folder = Path.Combine(Path.GetTempPath, Path.GetRandomFileName)
    Loop
    Return folder
End Function
Random Filename Example:
C:\Documents and Settings\username\Local Settings\Temp\u3z5e0co.tvq
Here's a variation using a Guid to get the temp folder name.
Private Function GetTempFolderGuid() As String
    Dim folder As String = Path.Combine(Path.GetTempPath, Guid.NewGuid.ToString)
    Do While Directory.Exists(folder) Or File.Exists(folder)
        folder = Path.Combine(Path.GetTempPath, Guid.NewGuid.ToString)
    Loop
    Return folder
End Function
guid Example:
C:\Documents and Settings\username\Local Settings\Temp\2dbc6db7-2d45-4b75-b27f-0bd492c60496
A: You could generate a GUID for your temporary folder names.
A: You have to use System.IO.Path.GetTempFileName()
Creates a uniquely named, zero-byte temporary file on disk and returns the full path of that file.
You can use System.IO.Path.GetDirectoryName(System.IO.Path.GetTempFileName()) to get only the temp folder information, and create your folders in there
They are created in the Windows temp folder, and that's considered a best practice.
A: As long as the name of the folder doesn't need to be meaningful, how about using a GUID for them?
A: You can use GetTempFileName to create a temporary file, then delete and re-create this file as a directory instead.
Note: link didn't work, copy/paste from: http://msdn.microsoft.com/en-us/library/aa364991(VS.85).aspx
A: Combined answers from @adam-wright and pix0r will work the best IMHO:
using System.IO;
string path = Path.GetTempPath() + Path.GetRandomFileName();
while (Directory.Exists(path))
    path = Path.GetTempPath() + Path.GetRandomFileName();
File.Delete(path);
Directory.CreateDirectory(path);
A: The advantage to using System.IO.Path.GetTempFileName is that it will be a file in the user's local (i.e., non-roaming) path. This is exactly where you would want it for permissions and security reasons.
A: Dim NewFolder = System.IO.Directory.CreateDirectory(IO.Path.Combine(IO.Path.GetTempPath, Guid.NewGuid.ToString))
A: @JonathanWright suggests CreateDirectory will fail when there is already a folder. However, if I read the Directory.CreateDirectory documentation, it says 'This object is returned regardless of whether a directory at the specified path already exists.' Meaning you cannot detect a folder that was created between checking for existence and actually creating it.
I like the CreateDirectoryTransacted() suggested by @DanielTrebbien but this function is deprecated.
If you really need to be sure to cover the whole race condition, the only solution I see left is to use the C API and call CreateDirectory there, since it does return an error if the folder already exists.
That would result in something like this:
Private Function GetTempFolder() As String
    Dim folder As String
    Dim success As Boolean = False
    Do While Not success
        folder = Path.Combine(Path.GetTempPath, Path.GetRandomFileName)
        success = c_api_create_directory(folder)
    Loop
    Return folder
End Function
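For completeness, a rough sketch of the declaration that c_api_create_directory would map to (this P/Invoke signature is my assumption, not part of the original answer, and belongs in the same class):
' The Win32 function returns False and sets the last error to ERROR_ALREADY_EXISTS (183)
' when the folder already exists, which is exactly the signal the loop above needs.
Private Declare Auto Function CreateDirectory Lib "kernel32.dll" (ByVal lpPathName As String, ByVal lpSecurityAttributes As IntPtr) As Boolean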
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16656",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30"
} |
Q: How to affordably release a Web App I am a broke college student. I have built a small web app in PHP5 and MySQL, and I already have a domain. What is an affordable way to get it online? A few people have suggested amazon's cloud services, but that seems equivalent to slitting my wrists and watching money slowly trickle out. So suggestions? Hosting companies, CIA drop sites, anything?
Update: A lot of suggestions have been for Dreamhost. Their plan allows for 5TB of bandwidth. Could anyone put this in perspective? For instance, how much bandwidth does a site with the kind of traffic StackOverflow get?
A: I say pay the 50-80 bucks for a real host. The classic "you get what you pay for" is very true for hosting. This will save you time, time you can spend getting those $80.
A: I use and recommend DreamHost for both their prices and customer service. I've hosted several sites here and performance has always been good. $5.95 a month for their basic package.
A: I highly recommend HostRocket. I have been with them for about 6 or 7 years now with multiple domains and have found uptime and database availability flawless. The only reason I'm leaving them is because I'm doing some .NET web apps now and HostRocket is purely LAMP based.
But without making things an ongoing ad. I will put in two "gotchas" that you'll want to be wary of when searching:
*
*"Free" hosting services. Most of these will make you subdomain on them and worse, they'll put a header and a footer on your page (sometimes in gaudy frame format) that they advertise heavily on. I don't care how poor you are, this will not help attract traffic to your app.
*A lot of the cheaper rates depend on pre-payment. HostRocket will give you $4.99 a month in hosting, but you have to pre-pay for 3 years. If you go month to month, it is $8.99. There are definitely advantages to the pre-payment, but you don't want to get caught with close to twice the monthly payment if you weren't expecting it.
I recently found a site called WebHostingStuff that seems to have a decent list of hosts and folks that put in their reviews. While I wouldn't consider it "the final authority" I have been using it as of late for some ideas when looking for a new host.
I hope this helps and happy hunting!
A: I have no specific sites to suggest, but a typical hosting company will charge you less than $10 per month for service. A simple Google search will turn up lots of results for "comparison of web hosts": http://www.google.com/search?hl=en&q=comparison+of+web+hosts&btnG=Google+Search
A: Well, Amazon EC2 is only as bad as the amount of traffic you get. So the ideal situation is to monetize your site (ads, affiliate programs, etc) so that the more traffic you get, the more you pay Amazon, but the more you make...in theory of course.
As for a budget of nothing...there's not really much you can do...hosting typically always costs something, but since you are using the LAMP stack, it's pretty cheap.
For example, hosting on GoDaddy.com for 1year can be about $50-60 which is not too bad.
I use dreamhost which costs about $80 per year, but I get MUCH more storage and bandwidth.
A: I agree with pix0r. With your requirements of php5 and mysql it seems that for starting out Dreamhost would be a good recommendation. You can always move it over pretty easily to ec2 if it takes off.
Dreamhost is great and cheap for a php5 mysql setup that gives you command line access. The problems come if you want to use some other web language/framework like RoR or Python/Django/Pylons. I know there are hacks to get things working, but last time I tried they were spotty at best and not supported by Dreamhost.
A: It may be helpful to know what kind of app we are talking about. Also what sort of traffic do you expect and to echo Adam's note what sort of business model (if any) do you have?
A: I've been at HostingMatters for years. They're relatively cheap, and their service is awesome. <12 hours for any support ticket I've ever had.
Additionally, since I've been with them for about ten years, they bumped me to an unmetered plan for no cost (at the same $10/month I was paying.) ....
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16657",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Internationalization in SSRS What's the best way to handle translations for stock text in a SSRS. For instance - if I have a report that shows a grid of contents what's the best way to have the correct translation for the header of that grid show up, assuming the culture of the report is set correctly.
Put another way - is it possible to do resources in a SSRS report, or am I stuck with storing all that text in the database and querying for it?
A: As far as I know, there is no way to localize a report (meaning automating the translation of string literals)...
Like you said, you basically have to use the User!Language global variable to catch the user's settings and then use that to retrieve the appropriate strings from the DB...
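As a rough sketch of that approach (the table and column names are invented for illustration): map a report parameter, say @Locale, to the expression =User!Language and let the captions dataset filter on it:
SELECT ResourceKey, Caption
FROM dbo.ReportCaptions
WHERE Locale = @Locale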
However, you can adapt the display of currency/numeric/date fields according to the user locale. Also possible is changing the interface of the Report Viewer to match your user's language.
Here are a few links giving tips on how to adapt the locale:
http://www.ssw.com.au/Ssw/Standards/Rules/RulesToBetterSQLReportingServices.aspx#LanguageSetting
Language pack for Report Viewer:
http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=e3d3071b-d919-4ff9-9696-c11d312a36a0
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16660",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Java and C# interoperability I have two programs. One is in C# and another one in Java.
Those programs will, most probably, always run on the same machine.
What would be the best way to let them talk to each other?
So, to clarify the problem:
This is a personal project (so professional/costly libraries are a no go).
The message volume is low, there will be about 1 to 2 messages per second.
The messages are small, a few primitive types should do the trick.
I would like to keep the complexity low.
The java application is deployed as a single jar as a plugin for another application. So the less external libraries I have to merge, the better.
I have total control over the C# application.
As said earlier, both application have to run on the same computer.
Right now, my solution would be to use sockets with some sort of csv-like format.
A: Kyle has the right approach in asking about the interaction. There is no "correct" answer without knowing what the usage patterns are likely to be.
Any architectural decision -- especially at this level -- is a trade-off.
You must ask yourself:
*
*What kind of messages need to be passed between the systems?
*What types of data need to be shared?
*Is there an important requirement to support complex model objects or will primitives + arrays do?
*What is the volume of the data?
*How frequently will the interactions occur?
*What is the acceptable communication latency?
Until you have an understanding of the answers, or potential answers, to those questions, it will be difficult to choose an implementation architecture. Once we know which factors are important, it will be far easier to choose the more suitable implementation candidates that reflect the requirements of the running system.
A: I've heard good things about IKVM, the JVM that's made with .NET.
A: Ice from ZeroC is a really high performance "enterprisey" interop layer that supports Java and .net amongst others. I think of it as an updated Corba - it even has its own object oriented interface definition language called Slice (like Corba's IDL, but actually quite readable).
The feature set is extensive, with far more on offer than web services, but clearly it isn't an open standard, so not a decision to make lightly. The generated code it spits out is somewhat ugly too...
A: I realize you're talking about programs on the same machine, but I've always liked the idea of passing messages in XML over HTTP.
Your server could be a web server that's ready to accept an XML payload. Your client can send HTTP messages with XML in the body, and receive an HTTP response with XML in it.
One reason I like this is that HTTP is such a widely used protocol that it's easy to accept or create HTTP POST or GET requests in any language (in the event that you decide to change either the client or server language in the future). HTTP and XML have been around for a while, so I think they're here to stay.
Another reason I like it is that your server could be used by other clients, too, as long as they know HTTP and XML.
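As a minimal sketch of the client side of that idea in C# (the URL and payload are placeholders; the Java side would just be a servlet or lightweight HTTP handler reading the request body):
using System;
using System.Net;

class XmlHttpClient
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            client.Headers[HttpRequestHeader.ContentType] = "text/xml";
            // POST a small XML message and read the XML reply back as a string.
            string reply = client.UploadString("http://localhost:8080/messages",
                                               "<msg><type>ping</type><value>42</value></msg>");
            Console.WriteLine(reply);
        }
    }
}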
A: I used JNBridge (http://www.jnbridge.com/jnbpro.htm) on a relatively simple project where we had a .NET client app using a relatively significant jar file full of business object logic that we didn't want to port. It worked quite nicely, but I wouldn't say we fully exercised the capabilities of JNBridge.
A: I am the author of jni4net, an open source interprocess bridge between the JVM and CLR. It's built on top of JNI and PInvoke. No C/C++ code needed. I hope it will help you.
A: I am a big fan of Thrift, an interoperability stack from Facebook. You said the code will probably run on the same machine, so it could be overkill, but you can still use it.
A: If they are separate programs running as independent applications, you may use sockets. I know it's a bit complex to define a communication protocol, but it'll be quite straightforward.
However if you have just two separate programs but want to run them as single application, then I guess IKVM is a better approach as suggested by marxidad.
A: It appears a very similar question has been asked before here on stack overflow (I was searching Google for java windows shared memory):
Efficient data transfer from Java to C++ on windows
From the answer I would suggest you to investigate:
"Your fastest solution will be memory
mapping a shared segment of memory,
and them implementing a ring-buffer or
other message passing mechanism. In
C++ this is straight forward, and in
Java you have the FileChannel.map
method which makes it possible."
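A minimal Java-side sketch of that idea (the file path, size, and offsets are assumptions; the C# process would open and map the same file and agree on the layout):
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class SharedMemoryWriter {
    public static void main(String[] args) throws Exception {
        try (RandomAccessFile file = new RandomAccessFile("C:/temp/ipc.dat", "rw");
             FileChannel channel = file.getChannel()) {
            // Map 4 KB of the file into memory and write an int at a known offset.
            MappedByteBuffer buffer = channel.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            buffer.putInt(0, 42);
            buffer.force(); // flush so the other process can see the update
        }
    }
}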
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16689",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33"
} |
Q: Map VS2008 keyboard shortcuts to Eclipse
Possible Duplicate:
Configure Eclipse to use VS.Net shortcuts?
I mostly work in VS2008 but I need to do some java work in Eclipse. Is there an easy and fast way to map the VS2008 keyboard shortcuts to Eclipse?
For example, I want to map F11 in Eclipse to "step into" instead of its default of F5, but I don't want to have to map each and every shortcut manually...
A: How are the Eclipse settings saved? Perhaps you could simply adapt this macro and load the resulting file into Eclipse?
A: Doesn't Eclipse have a predefined keyboard setup for Visual Studio?
A: The easiest way to do this is to install the CDT for eclipse (the standard C/C++ plugin).
Then when you go to Preferences->General->Keys you will have a "Microsoft Visual Studio" option in the dropdown.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16704",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: OpenID authentication in ASP.NET? I am starting to build a new web application that will require user accounts. Now that I have an OpenID that I am using for this site I thought it would be cool if I could use OpenID for authentication in my application. Are there any good tutorials on how to integrate OpenID with an ASP.NET site?
A: DotNetOpenId available at http://code.google.com/p/dotnetopenid
A:
Are there any good tutorials on how to integrate OpenId with an ASP.NET site?
Andrew Arnott's post titled "How to add OpenID to your ASP.NET web site (in C# or VB.NET)"
A: I'm considering the same thing. On the Open ID site, there's a link 'For Developers' @ http://openid.net/developers/ and from there is a link to 'Open Libraries' @ http://wiki.openid.net/Libraries and finally from there is one called 'DotNetOpenID' @ http://dotnetopenid.googlecode.com/ which is probably what you're looking for.
Good luck.
A: See Scott Hanselman's post on using DotNetOpenID in ASP.NET. Andrew Arnott's blog is full of samples on using DotNetOpenID with ASP.NET, including ASP.NET MVC.
I recently hooked up DotNetOpenID for the Subtext 2.0 release. It went really smoothly - the code samples included with the DotNetOpenID download are pretty helpful. The one thing I'd recommend is that you just use the library and avoid the ASP.NET control. It uses table based layout (hardcoded) and is pretty difficult to restyle.
A: DotNetNuke may not be a good current example. When we did the integration, DotNetOpenID was not currently supporting OpenID 2.0 spec. I hacked together a fork to get the 2.0 support and have not had a chance to rip it back out for the official DotNetOpenID 2.0 release.
A: You should check out the DotNetNuke codebase as well, they have been using OpenID for the last several revisions, and you'll find working code for implementing it there.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16716",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29"
} |
Q: how to get locale information on a GWT application In GWT I have to specify what locales are supported in my application. The code get compiled in various files, one for each locale (beside other versions), but I have to give my clients one only URL. This URL is supposed to be a page that should be displayed according to the locale preferred by the browser.
I don't want to have an HTTP parameter for the locale since I want to force the locale preferred by the browser.
How can this be coded in GWT?
Should I try to do this using Apache rewrite rules? I tried it, but I think I cannot access such a parameter easily in a rewrite rule.
Thanks a lot,
Giuseppe
A: GWT has good support for internationalization. See this link. The i18nCreator command can help you set up the internationalization infrastructure, similar to the way projectCreator and applicationCreator set up the GWT application.
If you have static strings (i.e. Invalid Entry!) that need to be internationalized, you don't need any additional flag to the i18nCreator command to create the properties files and infrastructure.
If you have strings that need to accept parameters (i.e. Hello {0}), you need to pass the -createMessages flag to the i18nCreator command to create the properties files and infrastructure.
Now your module needs to include the i18n module in your MyApplication.gwt.xml:
<inherits name="com.google.gwt.i18n.I18N"/>
Define a Java interface in the same package as your property files that extends Constants or Messages and defines methods (name matches the property entries) that all return string.
MyConstants.properties contains:
errorMessage=Invalid Entry!
MyConstants.java contains:
import com.google.gwt.i18n.client.Constants;
public interface MyConstants extends Constants {
    String errorMessage();
}
Now to access these internationalized Strings from you application:
public class MyApplication implements EntryPoint {
    private static final MyConstants constants = (MyConstants) GWT.create(MyConstants.class);
    public void onModuleLoad() {
        final Label errorMessage = new Label(constants.errorMessage());
    }
}
GWT implements the interface for you automagically.
You can get messages in a similar way.
Hopefully this can help you get started.
A: Unless I am reading the documentation incorrectly I don't think you have to do anything.
GWT and Locale
By making locale a client property, the standard startup process in gwt.js chooses the appropriate localized version of an application, providing ease of use (it's easier than it might sound!), optimized performance, and minimum script size.
The way I read it, as long as your module has added all the locale choices to it, it should be automatic?
A: Check this com.google.gwt.i18n.client.LocaleInfo.getCurrentLocale()
A: <inherits name="com.google.gwt.i18n.I18N"/>
<!-- Use browser-specified locale for i18n -->
<set-configuration-property name="locale.useragent" value="Y"/>
<!-- Specify locales your application support -->
<extend-property name="locale" values="en"/>
<extend-property name="locale" values="de_DE"/>
<extend-property name="locale" values="ru_RU"/>
A: I had the same problem as you, but as I really need to know the current locale (I'm requesting a second server for data that I want to be localizable) I found this class:
com.google.gwt.i18n.client.LocaleInfo#getCurrentLocale(). That should give you what GWT uses currently.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16727",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: What's the best way to implement field validation using ASP.NET MVC? I am building a public website using ASP.NET, as part of the deliverable I need to do an Admin Site for data entry of the stuff shown in the public site, I was wondering what techniques or procedures are people using to validate entries using ASP.NET MVC.
A: IMO using xVal with jQuery and DataAnnotationsModelBinder is the best combination.
Sometimes however, there are validation rules which cannot be checked entirely on the client side, so you need to use remote client-side validation.
I figured out way to generically implement remote client-side validation with xVal / jQuery.validate so that
*
*Validation rules remain solely in your ASP.NET MVC model
*You write each validation rule just once, and only in easily testable C# code. There is no JavaScript or other client-side counterpart.
*There is no need to branch or otherwise modify xVal or jquery.validate
*All you have to do for each new remote form validation rule is to derive from the base class shown in this article.
I wrote a blog article on this describing all the details.
A: Take a look at the jQuery Validation plugin. This plugin is amazing; it's clean to implement and has all the features you could ever need, including remote validation via AJAX.
Also a sample MVC controller method can be found here which basically uses the JsonResult action type like:
public JsonResult CheckUserName(string username)
{
    return Json(CheckValidUsername(username));
}
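For illustration, wiring that action to the plugin's remote rule might look roughly like this (the form and field names are made up; the plugin sends the field value as a query-string parameter and expects a JSON true/false response):
$("#signupForm").validate({
    rules: {
        username: {
            required: true,
            remote: "/Account/CheckUserName" // calls the JsonResult action above
        }
    }
});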
A: My favorite way is to perform both client and server validation using model-based attributes. I wrote a short post about this and released the source code as well; it basically allows you to create a class like this:
class User {
    [Required]
    public string Name { get; set; }
    [Email][Required]
    public string Email { get; set; }
}
And the appropriate JavaScript code will be generated to perform client validation, while a server-side validation runner will validate your submitted form.
Read the post over here
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16747",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "45"
} |
Q: How do you use back-references to PCREs in PHP? I read this PHP RegEx page, but either I'm missing something, misreading something, or it doesn't work the way they say. I'm guessing it's one of the first two.
$str = preg_replace("([|]\d*)", "\1;", $str);
A: Your regular expression should follow Perl syntax, meaning it has to start and end with the same delimiter character (with some exceptions). Also, the back reference should start with a double backslash to get around PHP's double escaping. This should work (with a quick test):
$str = "asdfasdf |123123 asdf iakds |302 asdf |11";
$str = preg_replace("/([|]\d*)/", "\\1;", $str);
echo $str; // prints "asdfasdf |123123; asdf iakds |302; asdf |11;"
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16759",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Are there any tools for merging CSS? I have a couple CSS files with overlapping CSS selectors that I'd like to programmatically merge (as in not just appending one file to the end of the other). Is there any tool to do this online? or a Firefox extension perhaps?
A: I found Factor CSS - complete with source code, but I think it does way more than I'd need. I really just want to combine CSS blocks that have the same selectors. I'll check out the source code and see if it can be converted to something usable as a TextMate bundle. That is, unless someone else manages to get to it before me.
EDIT: Even better - here's a list of web-based tools for checking/formatting/optimizing css.
A: No. I wish there was, but the programming effort seems too much since there are multiple ways to reference a single element. The best that you can do is use a runtime tool like Firebug to find duplicates.
A: I wrote a Perl utility to do this several years ago.
As well as merging one or more stylesheets into a single coherent sorted output (complete with comments to show which file(s) each property appeared in, and warnings when a property has conflicting values), you can also selectively search or merge based on the selector, the property or both.
These are handled intelligently so that, for example, if you search for the property font you also get font-size, font-weight etc (still presented inside CSS blocks with the relevant selectors that they were taken from). Likewise, selector searching tries to Do The Right (ie generally most useful) Thing. If you search for, say, the element a, it will match any block whose selector is a, a:hover, a.extlink, a#mylink, .foo a, #bar a, p a, pre > a, a + p, a img... (the last two don't directly affect the styling of the a itself but of an adjacent or descendent element, which it is often useful to know about in such a search), without matching #a, .a, etc. Of course this behaviour is optional, you can also search for an exact selector. Or a regex.
Apart from perl itself the only dependency is CSS::Tiny
It's free software, and you can get it here: cssmerge
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16762",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "42"
} |
Q: Haskell's algebraic data types I'm trying to fully understand all of Haskell's concepts.
In what ways are algebraic data types similar to generic types, e.g., in C# and Java? And how are they different? What's so algebraic about them anyway?
I'm familiar with universal algebra and its rings and fields, but I only have a vague idea of how Haskell's types work.
A: Haskell's datatypes are called "algebraic" because of their connection to categorical initial algebras. But that way lies madness.
@olliej: ADTs are actually "sum" types. Tuples are products.
A: @Timbo:
You are basically right about it being sort of like an abstract Tree class with three derived classes (Empty, Leaf, and Node), but you would also need to enforce the guarantee that someone using your Tree class can never add any new derived classes, since the strategy for using the Tree data type is to write code that switches at runtime based on the type of each element in the tree (and adding new derived types would break existing code). You can sort of imagine this getting nasty in C# or C++, but in Haskell, ML, and OCaml, this is central to the language design and syntax so coding style supports it in a much more convenient manner, via pattern matching.
ADT (sum types) are also sort of like tagged unions or variant types in C or C++.
A: "Algebraic Data Types" in Haskell support full parametric polymorphism, which is the more technically correct name for generics, as a simple example the list data type:
data List a = Cons a (List a) | Nil
Is equivalent (as much as is possible, and ignoring non-strict evaluation, etc) to
class List<a> {
    class Cons : List<a> {
        a head;
        List<a> tail;
    }
    class Nil : List<a> {}
}
Of course Haskell's type system allows more ... interesting use of type parameters, but this is just a simple example. With regards to the "Algebraic Type" name, I've honestly never been entirely sure of the exact reason for them being named that, but have assumed that it's due to the mathematical underpinnings of the type system. I believe that the reason boils down to the theoretical definition of an ADT being the "product of a set of constructors", however it's been a couple of years since I escaped university so I can no longer remember the specifics.
[Edit: Thanks to Chris Conway for pointing out my foolish error, ADT are of course sum types, the constructors providing the product/tuple of fields]
A: In universal algebra
an algebra consists of some sets of elements
(think of each set as the set of values of a type)
and some operations, which map elements to elements.
For example, suppose you have a type of "list elements" and a
type of "lists". As operations you have the "empty list", which is a 0-argument
function returning a "list", and a "cons" function which takes two arguments,
a "list element" and a "list", and produce a "list".
At this point there are many algebras that fit the description,
as two undesirable things may happen:
*
*There could be elements in the "list" set which cannot be built
from the "empty list" and the "cons operation", so-called "junk".
This could be lists starting from some element that fell from the sky,
or loops without a beginning, or infinite lists.
*The results of "cons" applied to different arguments could be equal,
e.g. consing an element to a non-empty list
could be equal to the empty list. This is sometimes called "confusion".
An algebra which has neither of these undesirable properties is called
initial, and this is the intended meaning of the abstract data type.
The name initial derives from the property that there is exactly
one homomorphism from the initial algebra to any given algebra.
Essentially you can evaluate the value of a list by applying the operations
in the other algebra, and the result is well-defined.
It gets more complicated for polymorphic types ...
A: Old question, but no one's mentioned nullability, which is an important aspect of Algebraic Data Types, perhaps the most important aspect. Since each value must be one of the alternatives, exhaustive case-based pattern matching is possible.
A: A simple reason why they are called algebraic; there are both sum (logical disjunction) and product (logical conjunction) types. A sum type is a discriminated union, e.g:
data Bool = False | True
A product type is a type with multiple parameters:
data Pair a b = Pair a b
In O'Caml "product" is made more explicit:
type 'a 'b pair = Pair of 'a * 'b
A: Haskell's algebraic data types are named such since they correspond to an initial algebra in category theory, giving us some laws, some operations and some symbols to manipulate. We may even use algebraic notation for describing regular data structures, where:
*
*+ represents sum types (disjoint unions, e.g. Either).
*• represents product types (e.g. structs or tuples)
*X for the singleton type (e.g. data X a = X a)
*1 for the unit type ()
*and μ for the least fixed point (e.g. recursive types), usually implicit.
with some additional notation:
*
*X² for X•X
In fact, you might say (following Brent Yorgey) that a Haskell data type is regular if it can be expressed in terms of 1, X, +, •, and a least fixed point.
With this notation, we can concisely describe many regular data structures:
*
*Units: data () = ()
1
*Options: data Maybe a = Nothing | Just a
1 + X
*Lists: data [a] = [] | a : [a]
L = 1+X•L
*Binary trees: data BTree a = Empty | Node a (BTree a) (BTree a)
B = 1 + X•B²
Other operations hold (taken from Brent Yorgey's paper, listed in the references):
*
*Expansion: unfolding the fix point can be helpful for thinking about lists. L = 1 + X + X² + X³ + ... (that is, lists are either empty, or they have one element, or two elements, or three, or ...)
*Composition, ◦, given types F and G, the composition F ◦ G is a type which builds “F-structures made out of G-structures” (e.g. R = X • (L ◦ R), where L is lists, is a rose tree).
*Differentiation, the derivative of a data type D (given as D′) is the type of D-structures with a single “hole”, that is, a distinguished location not containing any data. That amazingly satisfies the same rules as differentiation in calculus:
1′ = 0
X′ = 1
(F + G)′ = F′ + G′
(F • G)′ = F • G′ + F′ • G
(F ◦ G)′ = (F′ ◦ G) • G′
References:
*
*Species and Functors and Types, Oh My!, Brent A. Yorgey, Haskell’10, September 30, 2010, Baltimore, Maryland, USA
*Clowns to the left of me, jokers to the right (Dissecting Data Structures), Conor McBride POPL 2008
A: For me, the concept of Haskell's algebraic data types always looked like polymorphism in OO-languages like C#.
Look at the example from http://en.wikipedia.org/wiki/Algebraic_data_types:
data Tree = Empty
          | Leaf Int
          | Node Tree Tree
This could be implemented in C# as a TreeNode base class, with a derived Leaf class and a derived TreeNodeWithChildren class, and if you want even a derived EmptyNode class.
(OK I know, nobody would ever do that, but at least you could do it.)
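A minimal sketch of that analogy, just to make the correspondence concrete (purely illustrative):
abstract class TreeNode { }
class EmptyNode : TreeNode { }
class Leaf : TreeNode { public int Value; }
class TreeNodeWithChildren : TreeNode { public TreeNode Left; public TreeNode Right; }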
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16770",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "63"
} |
Q: SQL Server Maximum row size Came across this error today. Wondering if anyone can tell me what it means:
Cannot sort a row of size 9522, which is greater than the allowable maximum of 8094.
Is that 8094 bytes? Characters? Fields? Is this a problem joining multiple tables that are exceeding some limit?
A: The problem that seems to catch a lot of people, is that you can create a table that by definition would hold more than 8K of data, and it will accept it just fine. And the table will work fine, up until the point you actually try to insert more than 8K of data into the table.
So, let's say you create a table with an integer field for the primary key, and 10 varchar(1000) fields. The table would work fine most of the time, as the number of times you would fill up all 10 of your varchar(1000) fields would be very few. However, in the event that you tried to put 1000 characters in each of your fields, it would give you the error mentioned in this question.
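A quick sketch of such a table (hypothetical names); SQL Server happily accepts the definition even though a fully populated row would blow past the ~8K row limit:
CREATE TABLE dbo.WideTable (
    Id INT PRIMARY KEY,
    Col1 VARCHAR(1000), Col2 VARCHAR(1000), Col3 VARCHAR(1000), Col4 VARCHAR(1000), Col5 VARCHAR(1000),
    Col6 VARCHAR(1000), Col7 VARCHAR(1000), Col8 VARCHAR(1000), Col9 VARCHAR(1000), Col10 VARCHAR(1000)
);
-- The CREATE succeeds; it's an INSERT that fills every column with 1000 characters that fails.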
A: FYI, running this SQL command on your DB can fix the problem if it is caused by space that needs to be reclaimed after dropping variable length columns:
DBCC CLEANTABLE (0,[dbo.TableName])
See: http://msdn.microsoft.com/en-us/library/ms174418.aspx
A: In SQL 2000, the row limit is 8K bytes, which is the same size as a page in memory.
[Edit]
In 2005, the page size is the same (8K), but the database uses pointers on the row in the page to point to other pages that contain larger fields. This allows 2005 to overcome the 8K row size limitation.
A: That used to be a problem in SQL 2000, but I thought that was fixed in 2005.
A: 8094 bytes.
If you list some more information about what you are doing it might help us to figure out the actual cause.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16792",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: PHP's htmlspecialchars equivalent in .NET? PHP has a great function called htmlspecialchars() where you pass it a string and it replaces all of HTML's special characters with their safe equivalents; it's almost a one-stop shop for sanitizing input. Very nice, right?
Well is there an equivalent in any of the .NET libraries?
If not, can anyone link to any code samples or libraries that do this well?
A: Don't know if there's an exact replacement, but there is a method HtmlUtility.HtmlEncode that replaces special characters with their HTML equivalents. A close cousin is HtmlUtility.UrlEncode for rendering URL's. You could also use validator controls like RegularExpressionValidator, RangeValidator, and System.Text.RegularExpression.Regex to make sure you're getting what you want.
A: Actually, you might want to try this method:
HttpUtility.HtmlAttributeEncode()
Why? Citing the HtmlAttributeEncode page at MSDN docs:
The HtmlAttributeEncode method converts only quotation marks ("), ampersands (&), and left angle brackets (<) to equivalent character entities. It is considerably faster than the HtmlEncode method.
A: Try this.
var encodedHtml = HttpContext.Current.Server.HtmlEncode(...);
A: System.Web.HttpUtility.HtmlEncode(string)
A: In an addition to the given answers:
When using Razor view engine (which is the default view engine in ASP.NET), using the '@' character to display values will automatically encode the displayed value. This means that you don't have to use encoding.
On the other hand, when you don't want the text being encoded, you have to specify that explicitly (by using @Html.Raw). Which is, in my opinion, a good thing from a security point of view.
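A tiny illustration of that behavior (the model properties are hypothetical):
@* Values written with @ are HTML-encoded by default *@
<p>@Model.UserComment</p>
@* Explicit opt-out for markup you trust *@
<p>@Html.Raw(Model.TrustedHtml)</p>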
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16795",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: Enterprise Reporting Solutions What options are there in the industry for enterprise reporting? I'm currently using SSRS 2005, and know that there is another version coming out with the new release of MSSQL.
But, it seems like it might also be a good time to investigate the market to see what else is out there.
What have you encountered? Do you like it/dislike it? Why?
Thank you.
A: Having experiences with both (CR and SSRS) here is the lowdown of what I think:
CR lets you develop a report very fast, as long as it's simple. If it gets slightly complicated, it gets fishy trying to make it do what you want. For example, you are limited to a maximum hierarchy of 2 subreports. It gets weird when you have subreports that need parameters that must be altered in a main report, etc. Plenty of workarounds, but sometimes they simply suck.
Also the report layout is basically fixed; you have to put your data and info into the specific sections (Page Header/Footer,Details/Report Footer/Header). This is rather helpful as it helps you correctly display data that spans on multiple pages.
Also it has a fairly complete set of functions that can be used to manipulate financial data and etc.
SSRS is more flexible around report editing. Its report wizard allows you to basically create a report in a WYSIWYG environment, and it allows you multiple subreports so you can easily display multiple datasets in one page. It allows you to connect .NET assemblies to do complicated data manipulation/calculation. However, it can get hard to properly display your reports in a fixed way; you often have to struggle to get everything displayed as you want it.
Crystal Reports is $$$.
SSRS, if I remember correctly is now bundled "free" in the SQL Server Enterprise edition. Of course you probably pay for it in the price of the whole package, I guess it's MS way to try and push it in corporate land.
A: I've been using SSRS for a while now... and coworkers who look over my shoulder say it looks to be MUCH easier to do the SSRS thing than the Crystal. I've never used Crystal, so I can't tell you which is better, but I get the distinct impression that MS tried to rush SSRS out the door.
Largest weaknesses:
*
*Sharing Datasets. I work in a DoD environment. 90% of my reports use a Service parameter. I get sick of typing the same query over and over again.
*Skinning. If you do the report wizard you can skin your report, but not if you do it manually? Huh? I can "skin" things by selecting all the affected fields and then setting back colors, fore colors, etc. But nowhere (at least nowhere I can find) can you skin something with 1 click.
*No custom skinning. Report wizard or manual, there's nowhere I can find to implement a custom skin. It would be nice to just set up something (like CSS for HTML) and then just link to it. Tools should help you by reducing your effort rather than adding to it.
*Matrixes need better documentation. I can do VERY simple things, but once I try to get into fun/difficult things, books/the internet seem to let me down. Tables don't have this issue.
Strengths:
*
*Very simple for an old SQL developer to get good reports that at least look better than the dreck that dumping a result set to Excel provides.
*Custom sorting (use on most reports)
*Handles SP and Straight SQL. Love that I'm not locked into 1 path or the other (I've used both depending on circumstances).
*Price... once you've paid for Visual Studio/SQL Server... it's a freebie.
My 2 cents, hope this helps you.
A: A "pure Java" solution is i-net Clear Reports (aka i-net Crystal-Clear).
*
*Supports Crystal Reports templates as well as any JDBC data source.
*Comes with a FREE visual report designer.
*Good price for what it does, especially in comparison to some of the "pricier" alternatives.
*The latest version includes a web-based configuration tool as well as an ad-hoc report creation tool.
*Has a .NET port (with extensive API)
A: There are a number of really great solutions out there for Enterprise Reporting. Within the big four (BO/Crystal, MS SRSS, Cognos, Oracle) the basic reporting functions are all covered. You really need to evaluate what core functionality is most important to you and what the pre-dominant architecture in your environment is.
The consolidation within the BI market has made the environment issue all the more relevant. If you have an Oracle enterprise, you may as well use Oracle BI. The same applies for SAP/BO, IBM/Cognos, and Microsoft. Particularly if you are making a new BI decision.
Finally, there are a number of Open Source solutions (BIRT, Jasper, Pentaho) that make sense if you are an OSS shop or if you are looking to avoid some of the licensing fees associated with the major BI players.
A: You should try BIRT. BIRT is open source so you can start for free. It has a nice graphical designer. You can see some videos of how easy to design BIRT reports at http://www.birt-exchange.com. The BIRT project was sponsored by Actuate Corp who offers commercial servers for deploying BIRT to the Enterprise when you need scheduling, security integration, email notifications, etc. The commercial version also mixes AJAX with the BIRT viewer for more end-user interactivity and offers ad-hoc BIRT reporting through a browser.
A: I've used Cognos Series 7, Cognos Series 8, Crystal Reports, Business Objects XI R2 WebIntelligence, Reporting Services 2000, Reporting Services 2005, and Reporting Services 2008. Here's my feedback on what I've learned:
Reporting Services 2008/2005/2000
PROS
*
*Cost: Cheapest enterprise business intelligence solution if you are using MS SQL Server as a back-end. You also have a best-in-class ETL solution at no additional cost if you throw in SSIS.
*Most Flexible: Most flexible reporting solution I've ever used. It has always met all my business needs, particularly in its latest incarnation.
*Easily Scalable: We initially used this as a departmental solution supporting about 20 users. We eventually expanded it to cover a few thousand users. Despite having a really bad quality virtual server located in a remote data center, we were able to scale to about 50-100 concurrent user requests. On good hardware at a consulting gig, I was able to scale it to a larger set of concurrent users without any issues. I've also seen implementations where multiple SSRS servers were deployed in different countries and SSIS was used to synch the data in the back-ends. This allowed for solid performance in a distributed manner at almost no additional cost.
*Source Control Integration: This is CRITICAL to me when developing reports with my business intelligence teams. No other BI suite offers an out-of-box solution for this that I've ever used. Every other platform I used either required purchasing a 3rd party add-in or required you to promote reports between separate development, test, and production environments.
*Analysis Services: I like the tight integration with Analysis Services between SSRS and SSIS. I've read about instances where Oracle and DB2 quotes include installing a SQL Server 2005 Analysis Services server for OLAP cubes.
*Discoverability: No system has better discoverability than SSRS. There are more books, forums, articles, and code sites on SSRS than any other BI suite that I've ever used. If I needed to figure out how to do something in SSRS, I could almost always find it with a few minutes or hours of work.
CONS
*
*IIS Required for SSRS 2005/2000: Older versions of SSRS required installing IIS on the database server. This was not permissible from an internal controls perspective when I worked at a large bank. We eventually implemented SSRS without authorized approval from IT operations and basically asked for forgiveness later. This is not an issue in SSRS 2008 since IIS is no longer required.
*Report Builder: The web-based report builder was non-existant in SSRS 2000. The web-based report builder in SSRS 2005 was difficult to use and did not have enough functionality. The web-based report builder in SSRS 2008 is definitely better, but it is still too difficult to use for most business users.
*Database Bias: It works best with Microsoft SQL Server. It isn't great with Oracle, DB2, and other back-ends.
Business Objects XI WebIntelligence
PROS
*
*Ease of Use: Easiest to use for your average non-BI end-user for developing ad hoc reports.
*Database Agnostic: Definitely a good solution if you expect to use Oracle, DB2, or another database back-end.
*Performant: Very fast performance since most of the page navigations are basically file-system operations instead of database-calls.
CONS
*
*Cost: Number one problem. If I want to scale up my implementation of Business Objects from 30 users to 1000 users, then SAP will make certain to charge you a few hundred thousands of dollars. And that's just for the Business Objects licenses. Add in the fact that you will also need database server licenses, you are now talking about a very expensive system. Of course, that could be the personal justification for getting Business Objects: if you can convince management to purchase a very expensive BI system, then you can probably convince management to pay for a large BI department.
*No Source Control: Lack of out-of-the-box source control integration leads to errors from accidentally modifying and deploying old report definitions. The "work-around" for this is to promote reports between environments -- a process that I do NOT like since it slows down report development and introduces variables from environmental differences.
*No HTML Email Support: You cannot send an HTML email via a schedule. I regularly do this in SSRS. You can buy an expensive 3rd party add-in to do this, but you shouldn't have to spend more money for this functionality.
*Model Bias: Report development requires universes -- basically a data model. That's fine for ad hoc report development, but I prefer to use stored procedures to have full control over performance. I also like to build flat tables that are then queried to avoid costly complex joins at report run-time. It is silly to have to build universes that just contain flat tables that are only used by one report. You shouldn't have to build a model just to query a table. Stored procedures are also not supported out of the box without hacking the SQL overrides.
*Poor Parameter Support: Parameter support is terrible in BOXI WebIntelligence reports. Although I like the meta-data refresh options for general business users, it just isn't robust enough when trying to set up schedules. I almost always have to clone reports and alter the filters slightly, which leads to unnecessary report definition duplication. SSRS beats this hands down, particularly since you can make the value and the label have different values -- unlike BOXI.
*Inadequate Report Linking Support: I wanted to store one report definition in a central folder and then create linked reports for other users. However, I quickly found out end-users needed to have full rights on the parent object to use the object in their own folder. This defeated the entire purpose of using a linked report object. Give me SSRS!
*Separate CMC: Why do you have to launch another application just to manage your object security? Worse, why isn't the functionality identical between CMC and InfoSys? For example, if you want to set up a scheduled report to retry on failed attempts, then you can specify the number of retries and the retry interval in CMC. However, you can't do this in InfoSys and you can't see the information either. InfoSys allows you to set up event-driven schedules and CMC does not support this feature.
*Java Version Dependency: BOXI works great on end-user machines so long as they are running the same version of Java as the server. However, once a newer version of Java is installed on your machine, things start to break. We're running Java 1.5 on our BOXI R2 server (the default Java client) and almost everyone in the company is on Java 1.6. If you use Java 1.6, then prompts can freeze your IE and Firefox sessions or crash your report builder unexpectedly.
*Weak Discoverability: Aside from BOB (Business Objects Board), there isn't much out there on the Internet regarding troubleshooting Business Objects problems.
Cognos Series 8
PROS
*
*Ease of Use: Although BOXI is easier to use for writing simple reports for general business users, Cognos is a close 2nd in this area.
*Database Agnostic: Like BOXI this is definitely a good solution if you expect to use Oracle, DB2, or another database back-end.
*FrameWork Manager: This is definitely a best-in-class meta-data repository. BOXI's universe builder wishes it was half as good. This tool is well suited to promoting packages across Development, Test, and Production environments.
CONS
*
*Cost: Same issue as Business Objects. Similar cost structure. Similar database licensing requirements as well.
*No Source Control: Same issue as Business Objects. I'm not aware of any 3rd party tools that resolve this issue, but they might exist.
*Model Bias: Same issue as Business Objects. Has better support for stored procedures in FrameWork Manager, though.
*Poor Parameter Support: Same issue as Business Objects. Has better support for creating prompt-pages if you can code in Java. Buggy behavior, though, when users click the back-button to return to the prompt-page. SSRS beats this out hands-down.
*Inadequate Error Handling: Error messages in Cognos are nearly impossible to decipher. They generally give you a long negative number and a stack dump as part of the error message. I don't know how many times we "resolved" these error messages by rebuilding reports from scratch. For some reason, it is pretty easy to corrupt a report definition.
*No Discoverability: It is very hard to track down any answers on how to troubleshoot problems or to implement functionality in Cognos. There just isn't adequate community support in Internet facing websites for the products.
As you can guess from my answer, I believe Microsoft's BI suite is the best platform on the market. However, I must state that most articles I've read comparing BI suites usually do not rate Microsoft's offering as highly as SAP's Business Objects and Cognos's Series 8 products. I've also seen Microsoft come out on the bottom in internal reviews of BI suites in two separate companies after they were reviewed by the reigning CIOs. In both instances, though, it seemed like it all boiled down to wanting to be perceived as a major department that justified a large operating budget.
A: We are in the middle of implementing Cognos right now, and I really think it's a fairly robust tool. The ETL tool seems pretty straightforward and easy to use, and the front end is fairly easy to administer and set up. I don't have much experience with the framework models and the data modeling stuff, but our report designer guy really seems to like it.
A: I'd like to make two contributions. One is very negative (CR is rubbish) and the other is very positive (SSRS is backing store independent and available at no cost).
On a side note, if you mod an answer down then add a comment explaining why you think the answer is wrong or counterproductive, unless someone else already said the same thing. Even then, a simple "as above" would be helpful.
Crystal Reports is rubbish
Crystal Reports is an insult to the development community. Simple dialog resize bugs that would be the work of moments to fix have remained uncorrected over ten years and six major releases, so I really doubt that any attempt is ever made to address the tough stuff. Crystal Reports is profoundly untrustworthy, as this SQL demonstrates.
SELECT COUNT(*) FROM sometable WHERE 1=0
This statement produces a result of one when it should produce zero. This is a repeatable off-by-one error in the heart of the Crystal Reports SQL engine.
The support for CR is equally dismal, having been moved offshore many years ago. If you cough up $200 for a support call, an unintelligible foreigner will misunderstand your question and insult your intelligence until you give up, at which point he will - because you have chosen to give up - declare the call resolved.
If it's really this bad why is it so popular? It isn't popular. It's very unpopular. It gets a toe-hold via great marketing. Management types see glossy adverts promising much, and because CR has been around so long they assume it's all true. Much like bindis (Australian prickle weed) in your lawn, once installed it's nearly impossible to get rid of. Admitting to incompetence is a bad career move for a manager. When managers lack the technical expertise to make a decision, rather than allow a technical person to make the decision, they fall back on precedent and repeat the mistakes of their peers. They also fail to realise that if they want to actually use the web delivery stuff they are up for a server licence. Also, longevity means it's easy to find people with CR experience.
For the details and a good laugh I recommend these links.
*
*Clubbing the Crystal Dodo
*Crystal Reports "Sucks"
*Crystal Reports Sucks Donkey Dork (dead link, still trying to find content)
Or just type "crystal reports sucks" into Google. For a balanced perspective, also try "crystal reports rocks". Don't worry, this won't take much of your time. There are no positive reviews outside their own marketing hype.
Now for something more positive.
SQL Reports is effectively free
You can install it at no charge as part of SQL Express with Advanced Services. You can also install .NET 2.x which brings with it ADO.NET drivers for major database providers as well as generic OLEDB and ODBC support.
Since SSRS uses ADO.NET, this means you can connect SSRS to anything to which you can connect ADO.NET, ie just about anything.
The terms of the licence applying to SSRS as supplied with SQL Express require it to be deployed and installed as part of SQL Express. They don't have anything to say about where reports get their data.
SQL Express is limited, but the accompanying SSRS has no such limitations. If your data is provided by another database engine you can support as many users as that engine is licensed to support. Don't get me wrong, at work we have dozens of licensed copies of MS SQL Server. I'm just saying that you can use SSRS against the backing store of your choice, without having to find or justify budget for it. What you will be missing is scheduling and subscription support. I speak from experience when I say that it is not profoundly difficult to write a service that fills the gap.
SSRS fulfils every promise that CR makes. Easy to use, good support for user DIY, has a schema abstraction tool conceptually similar to CR BO but which works properly, high performance, schedulable, stable, flexible, easy to extend, can be controlled interactively or programmatically. In the 2008 edition they even support rich-formatted flow-based templates (mail merge for form letters).
It is the best reporting solution I have ever seen in twenty years of software development on platforms ranging from mainframes through minis to micros. It ticks every box I can think of and has only one profound weakness I can recall - the layout model doesn't support positioning relative to page bottom and the only workaround is positioning relative to page top on a known height page.
It does not address problems like heterogeneous data provision, but IMHO these can and should be addressed outside of the report proper. Plenty of data warehousing solutions (such as SSIS) provide tools for solving such problems, and it would be absurd to put a half-assed duplicate capability in the report engine.
Getting a sane decision out of your pointy-haired boss
Tell him you think that given its problematic history and unpopularity with developers, choosing Crystal Reports is a courageous move that marks him as a risk-taker.
Some bosses are so stupid they will think this is a good thing but with them you are doomed anyway.
A: One of the most comprehensive solutions is Cognos.
Dislike: You wouldn't believe how many CDs it ships on... it's huge.
A: I'm surprised no-one has mentioned Microstrategy. We do quite a bit of data warehouse (11TB) work and Microstrategy does a great job of generating SQL so the business users can get the data without bothering us. However, it is a very expensive solution. If you don't need ad-hoc abilities and decide on Crystal, I recommend looking into their VS2005 or Eclipse plugins, which are "free for production use".
A: In his blog at the SAP Community Website, Henry Nordstrom has given a very good evaluation of the various reporting tools available. Though he has done it from an SAP usage point of view, the facts are applicable to anything else as well.
Henry's Blog on SAP Developer Network
A: I'm surprised nobody mentioned OpenReports with Jasper report templates. I know it's not quite enterprise level, but it's quite powerful and I think on par with Crystal Reports. I use iReport to create CR-like reports. OpenReports also supports JXLS which is very easy to use to create Excel-based reports.
http://oreports.com/
http://jasperforge.org/projects/ireport
A: Crystal Reports by Business Objects seems to be a popular choice.
I never wrote any reports in it myself, but others in my team who did sometimes struggled getting the more complex reports to work.
It also might be a bit pricey, depending on your budget.
A: If you want an enterprise-class report server that works with ANY report designer you want to use, check out Universal Report Server from VersaReports.com. Out-of-the-box it supports Crystal, DevExpress, Telerik, and ActiveReports, and provides an API if you want to support another report designer.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16804",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "38"
} |
Q: Error viewing csproj property pages in VisualStudio2005 When I go to view the property page for my CSharp test application I get the following error.
"An error occurred trying to load the page. COM object that has been seperated from its underlying RCW cannot be used."
The only thing that seems to fix it is rebooting my PC!
A: This is usually caused by a 'rogue' add-in.
Try disabling them all, and then re-enabling them checking for the error - so that you can narrow down the culprit.
A: It seems Microsoft Style Cop was causing the issue.
It was not registered as an Add-in, but was integrated into VS2005 on some deeper level.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16808",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Accessing html parameter in PHP I'm trying to do a simple test php script for sessions. Basically it increments a counter (stored in $_SESSION) every time you refresh that page. That works, but I'm trying to have a link to destroy the session which reloads the page with the ?destroy=1 parameter. I've tried a couple of if statements to see if that parameter is set and if so to destroy the session but it doesn't seem to work.
I've even put an if statement in the main body to pop-up a message if the parameter is set - but it doesn't seem to be picked up.
I know I'm doing something silly (I'm a PHP newbie) but I can't seem to find what it is...
See code here:
<?php
if ($_POST['destroy']) {
session_destroy();
} else {
session_start();
}
?>
<html>
<head>
<title>Session test</title>
</head>
<body>
<?php
if (isset($_POST['destroy'])) {
echo "Destroy set";
}
$_SESSION['counter']++;
echo "You have visited this page " . $_SESSION['counter'] . " times" . "<BR>";
echo "I am tracking you using the session id " . session_id() . "<BR>";
echo "Click <a href=\"" . $_SERVER['PHP_SELF'] . "?destroy=1\">here</a> to destroy the session.";
?>
A: I think you put
$_POST['destroy']
Instead of
$_GET['destroy']
You need to use a form if you'd like to use a $_POST variable. $_GET variables are stored in the URL.
A: By the way you can use
$_REQUEST['destroy']
which would work regardless of whether the data is passed in a POST or a GET request.
A: The PHP manual has a code snippet for destroying a session.
session_start();
$_SESSION = array();
if (isset($_COOKIE[session_name()])) {
setcookie(session_name(), '', time()-42000, '/');
}
session_destroy();
A: Yeah, you're going to want to do
if( $_GET['destroy'] == 1 )
or
if( isset($_GET['destroy']) )
A:
I know I'm doing something silly (I'm a php newbie) but I can't seem to find what it is...
that is how you are going to learn a lot ;) enjoy it ...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16815",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How do you do a case insensitive search using a pattern modifier using less? It seems like the only way to do this is to pass the -i parameter in when you initially run less. Does anyone know of some secret hack to make something like this work
/something to search for/i
A: You can also type the -I command while less is running. It toggles case sensitivity for searches.
A: Add-on to what @Juha said: Actually -i turns on Case-insensitive with SmartCasing, i.e if your search contains an uppercase letter, then the search will be case-sensitive, otherwise, it will be case-insensitive. Think of it as :set smartcase in Vim.
E.g.: with -i, a search for 'log' in 'Log,..' will match, whereas 'Log' in 'log,..' will not match.
A: It appears that you can summon this feature on a per search basis like so:
less prompt> /search string/-i
This option is in less's interactive help which you access via h:
less prompt> h
...
-i ........ --ignore-case
Ignore case in searches that do not contain uppercase.
-I ........ --IGNORE-CASE
Ignore case in all searches.
...
I've not extensively checked, but the help in less version 487 on MacOS, as well as on various Linux distros, lists this option as being available.
On MacOS you can also install a newer version of less via brew:
$ brew install less
$ less --version
less 530 (POSIX regular expressions)
Copyright (C) 1984-2017 Mark Nudelman
References
*
*less is always case-insensitive
A: When using the -i flag, be sure to enter the search string completely in lower case, because if any letter is upper case, then it's an exact match.
See also: the -I (capital i) flag of less(1) to change this behavior.
A: You can also set the environment variable LESS
I use LESS=-Ri, so that I can pump colorized output from grep into it, and maintain the ANSI colour sequences.
Another little-used feature of less that I found is starting it with +F as an argument (or hitting SHIFT+F while in less). This causes it to follow the file you've opened, in the same way that tail -f <file> will. Very handy if you're watching log files from an application, and are likely to want to page back up (if it's generating hundreds of lines of logging every second, for instance).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16828",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "346"
} |
Q: Structure of Projects in Version Control I know there are at least 10 different ways to structure a project in version control. I'm curious what some methods being used are and which ones work for you. I've worked with SVN, TFS, and currently/unfortunately VSS. I've seen version control implemented very poorly and just OK, but never great.
Just to get the ball rolling, here is a review of things I've seen.
This example is SVN-based, but applies to most VCS's (not so much to distributed version control).
*
*branch the individual projects that are part of site
/division/web/projectName/vb/src/[trunk|branches|tags]
*branch the whole site; in the case I've seen, the whole site except for core components was branched.
/division/[trunk|branches|tags]/web/projectName/vb/src/
*Use main-line as the default; only branch when necessary for huge changes.
A: Example for SVN:
trunk/
branch/
tags/
The trunk should be kept at a point where you can always push a release from it. There should be no huge gaping bugs that you know about (of course there will be eventually, but that is what you should strive for).
Every time you need to make a new feature, do a design change, whatever, branch. Tag that branch at the start. Then when you are finished with the branch tag it at the end. This helps out with merging back into trunk.
Every time you need to push a release, tag. This way if something goes horribly wrong you can rollback to the previous release.
This setup keeps trunk as clean as possible and allows you to make quick bug fixes and push them out while keeping the majority of your development in branches.
Edit: For 3rd party stuff it depends. If I can avoid it I do not have it under source control. I keep it in a directory outside source control and include it from there. For things like jquery, I do leave it under source control. The reason is it simplifies my script for pushing. I can simply have it do an svn export and rsync.
A: For my projects, I always use this structure.
*
*trunk
*
*config
*docs
*sql
*
*initial
*updates
*src
*
*app
*test
*thirdparty
*
*lib
*tools
*tags
*branches
*
*config - Used to store my application config templates. During the build process, I take these templates and replace token placeholders with actual values depending on what configuration I am making the build.
*docs - Any application documentation gets placed in here.
*sql - I break my sql scripts into two directories. One for the initial database setup for when you are starting fresh and another place for my update scripts which get ran based on the version number of the database.
*src - The application source files. In here I break source files based on application and tests.
*thirdparty - This is where I put my third party libraries that I reference inside of my application and not available in the GAC. I split these up based on lib and tools. The lib directory holds the libraries that need to be included with the actual application. The tools directory holds the libraries that my application references, but are only used for running unit tests and compiling the application.
My solution file gets placed right under the trunk directory along with my build files.
A: I can appreciate the logic of not putting binaries in the repository but I think there is a huge advantage too. If you want to be able to pull a specific revision out from the past (usually an older tag) I like being able to have everything I need come from the svn checkout. Of course this doesn't include Visual Studio or the .NET framework but having the right version of nant, nunit, log4net, etc. makes it really easy to go from checkout to build. This way getting started is as easy as "svn co project" followed by "nant build".
One thing we do is put ThirdParty binaries in a separate tree and use svn:externals to bring in the version we need. To make life easy, we'll have a folder for each version that has been used. For example, we might bring in the ThirdParty/Castle/v1.0.3 folder to the current project. This way everything needed to build/test the product is inside or below the project root. The tradeoff in disk space is well worth it in our experience.
A: As we have all the artifacts and construction in the same tree we have something like:
*
*Trunk
*
*Planning&Tracking
*Req
*Design
*Construction
*
*Bin
*Database
*Lib
*Source
*Deploy
*QA
*MA
A: I prefer fine-grained, very organized, self contained, structured repositories. There is a diagram illustrating general (ideal) approach of repository maintenance process. For example, my initial structure of repository (every project repository should have) is:
/project
/trunk
/tags
/builds
/PA
/A
/B
/releases
/AR
/BR
/RC
/ST
/branches
/experimental
/maintenance
/versions
/platforms
/releases
PA means pre-alpha
A means alpha
B means beta
AR means alpha-release
BR means beta-release
RC means release candidate
ST means stable
There are differences between builds and releases.
*
*Tags under builds folder have version number corresponding to a pattern N.x.K, where N and K are integers. Examples: 1.x.0, 5.x.1, 10.x.33
*Tags under releases folder have version number corresponding to a pattern N.M.K, where N, M and K are integers. Examples: 1.0.0, 5.3.1, 10.22.33.
Recently I have developed training dedicated to Software Configuration Management where I describe version numbering approach and why exactly this repository structure is the best. Here are presentation slides.
There is also my answer on the question about 'Multiple SVN Repositories vs single company repository'. It might be helpful as long as you address this aspect of repository structuring in your question.
A: We practice highly componentised development using Java, we have about 250 modules in trunk that have independent life cycles. Dependencies are managed through Maven (that's a best practice right there), every iteration (bi-weekly) actively developed modules get tagged with a new version. 3 digit version numbers with strict semantics (major.minor.build - major changes means backwards incompatible, minor changes mean backwards compatible and build number changes mean backwards and forwards compatible). Our ultimate software product is an assembly that pulls in dozens of individual modules, again as Maven dependencies.
We branch modules/assemblies when we need to make a bug fix or enhancement for a released version and we can not deliver the HEAD version. Having tagged all versions makes this easy to do but branches still incur a significant administrative overhead (specifically keeping branches in sync with certain HEAD changesets) that are partly caused by our tools, Subversion is sub-optimal for managing branches.
We find that a fairly flat and above all predictable tree structure in the repository is crucial. It has allowed us to build release tools that take away a lot of the pain and danger from a manual release process (updated release notes, project compiles, unit tests run through, tag is made, no SNAPSHOT dependencies, etc). Avoid putting too much categorization or other logic in your tree structure.
We roughly do something like the following:
svnrepo/
trunk/
modules/
m1/ --> will result in jar file
m2/
...
assemblies/
a1/
...
tags/
modules/
m1/
1.0.0/
1.0.1/
1.1.0/
m2/
...
assemblies/
a1/
iteration-55/
...
branches/
m1/
1.0/
...
For external dependencies, I can not overemphasize something like Maven: manage your dependencies as references to versioned, uniquely identified binary artifacts in a repository.
For intenal module/project structure: stick to a standard. Uniformity is key. Again, Maven can help here since it dictates a structure. Many structures are fine, as long as you stick to them.
A: I think the SCM policies and procedures a team adopts are going to be very dependent on the development process they are using. If you've got a team of 50 with several people working on major changes simultaneously and releases only occurring every 6 months, it makes a lot of sense for everyone to have his own branch where he can work in isolation and only merge in changes from other people when he wants them. On the other hand, if you're a team of 5 all sitting in the same room it makes sense to branch much less frequently.
Assuming you're working on a small team where communication and collaboration is good and releases are frequent, it makes very little sense to ever branch IMO. On one project we simply rolled the SVN revision number into the product version number for all our releases and we never even tagged. In the rare event that there was a critical bug found in prod we would simply branch straight from the revision that was released. But most of the time we simply fixed the bug in the branch and released from trunk at the end of the week as scheduled. If your releases are frequent enough you'll almost never run into a bug that can't wait until the next official release.
I've worked on other projects where we never could have gotten away with that, but due to the lightweight development process and low ceremony we were able to use a lightweight version control policy very effectively.
I'll also mention that everything I've written is coming from an enterprise IT context where there's only a single production instance of a given code base. If I was working on a product that was deployed at 100 different customer sites the branching and tagging practices would have to be a little more strenuous in order to manage all of the independent update cycles across all the instances.
A:
What about external dependencies such as the AJAX Toolkit or some other 3rd party extension that's used on several projects?
Source control is for source code, not binaries. Keep any 3rd party assemblies/jars in a separate repository. If you're working in the Java world try something like Maven or Ivy. For .Net projects a simple shared drive can work well as long as you have decent policies around how it's structured and updated.
A: We migrated from the bad world of VSS with one giant repository (over 4G) before switching to SVN. I really struggled with how to set up the new repository for our company. Our company is very "old school". It's difficult to get change; I'm one of the younger developers and I'm 45! I am part of a corporate development team that works on programs for a number of departments in our company. Anyway, I set up our directories like this
+ devroot
+--Dept1
+--Dept1Proj1
  +--Dept1Proj2
+--Dept2
+--Dept2Proj1
+--Tools
+--Purchase3rdPartyTools
+--NLog
+--CustomBuiltLibrary
I wanted to include the ability to branch, but quite honestly that's just too much at this point. A couple of things we still struggle with using this scheme.
*
*It's hard to fix production problems if you are working on a major product upgrade (ie because we don't do branching)
*It's hard to manage the concept of promoting from "Dev" to "Prod". (Don't even ask about promoting to QA)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16829",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: Lightweight source control I am looking for a lightweight source control system for use on "hobby" projects with only one person (myself) working on the project. Does anyone have any suggestions? Ideally it should interface with Visual Studio either natively or through another plug-in; outside of that, anything that works would be nice, to replace Gmail as source control.
A: Have a look at the Mercurial Project, an open source distributed source control system. There is a Tortoise and an Eclipse plugin, but no Visual Studio plugin that I know of.
You can see a demo on YouTube. Like Git, it's one of a new breed of distributed source control systems, so no server setup is required, and it has very fast HTTP-based check-ins with advanced branching and merging facilities.
A: Git is very lightweight and is just as suitable for personal projects as it is for huge projects like the Linux kernel. There is lots of tutorial documentation available on its web site that will get you started. Example:
git init
git add .
git commit -m "my first commit!"
If you are keen on Visual Studio integration, I would probably recommend Subversion, as there are a number of plugins that may make your life easier. Also, TortoiseSVN is definitely worth installing.
A: Hobby or Serious project, SVN 1-Click Setup (download Svn1ClickSetup-1.3.3.exe) gives you all you need with ease :)
A: TortoiseSVN works great. You don't even need a Subversion server, you can create a local repository through the tool. Since it integrates right into Windows Explorer, it makes it easy to work with in a variety of scenarios. You also then have the option to work with remote Subversion servers or Team Foundation Servers (via SVNBridge).
A: I prefer distributed version control for personal projects, because they eliminate the need for a server. Mercurial is the one I try to use most of the time, but I've been hearing good things about git as well.
A: I can't comment on other source control software but after using VSS 6.0 , StarTeam, Vault and SVN I cannot rate SVN + Tortoise more highly. AnkhSVN is a free plug-in for Visual studio which I personally didn't warm to. Apparently Visual SVN is much better but costs money.
A: SVN with SmartSVN or tortoiseSVN ? not really all that lightweight, but good practice for the big bad world.
A: Pick your flavour of distributed version control. I like Mercurial, other folks swear by Git and Bazaar. There's no need to make a fake server to put a directory under version control, which, IMO, makes it very ideal for small projects.
I'm not sure if any of these have Visual Studio plugins, though.
A: If you have access to SQL Server, then SourceGear's Vault is free for a single user. If you want to go even further, Axosoft's OnTime issue tracking is also free for single user use. I use both at home (for free) and we also use both (licensed) at our company. Both integrate into Visual Studio, and OnTime also supports Vault integration.
A: I use Perforce at work and at home for hobby projects. It is easy enough to set up, and allows two users and five workspaces without having to pay for a license. Also has a Visual Studio integration plugin.
A: Lately I became a strong believer in Git and its interesting index pseudo repository. But if you do not need all the fancy rebase --interactive and stuff like content over file tracking - and as its Windows support is a weak point - Hg is a valid alternative. I am rather certain neither has a VS plug-in but with PoSH the command line is more fun anyway.
A: Thanks for all of the help so far, I have things up and running and right now I am working with Assembla as a Subversion server, TortoiseSVN for general Subversion access, and AnkhSVN for Visual Studio integration. Overall I am quite impressed with this particular configuration and I am already much more impressed with it than I have ever been with Visual Source Safe.
I have had a couple issues getting things up and running so I think it is best if I mention them in case anyone else ever runs into these problems -
*
*AnkhSVN doesn't give any useful error messages if it can not connect to the server due to a proxy being in the way and it doesn't use any of the Internet Explorer proxy settings so you have to configure it yourself. At the time of this post (2008-08-20) that information is in C:\Documents and Settings[USERNAME]\Application Data\Subversion\servers
*Assembla runs over HTTPS but shows the SVN URL as HTTP, you must be sure to change the HTTP to HTTPS yourself in the URLs or you get a "401 Not Implemented" error from TortoiseSVN and AnkhSVN.
A: For small and not-so-important project, Google Code Hosting is wonderful - it's Subversion, it's free and offers plenty of space.
I prefer Mercurial for my homebrewn projects. It's much easier than Git, and it works flawlessly under Windows.
A: I use VisualSVN Server (free) and Tortoise SVN (free) for school, work, hobbies, everything. If you want Visual Studio integration, you can use Visual SVN ($49) or AnkhSVN (free).
A: You can use assembla.com to host your project. They offer subversion, git and mercurial hosting. I personally use their subversion hosting for a free and private one-man project. As an added bonus, you also get a wiki and a ticketing system. Which can help you manage your stuff.
And the best thing is that you don't have to set up your Subversion server and it is hosted off-site.
It's really good for a free service.
Personally, I use TortoiseSVN as my client, but it isn't integrated into Visual Studio.
For the integration, you can try VisualSVN (not free) or AnkhSVN (free)
A: I will never use SVN again for a personal project - yes, it's great compared to CVS, but it isn't even in the same class as the modern breed of distributed version control systems. Git has been mentioned already, but a) it has shaky Windows support and b) a complicated learning curve. I now use BZR, which "just works".
bzr vs git
bzr in 5 minutes
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16832",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: How do you download and extract a gzipped file with C#? I need to periodically download, extract and save the contents of http://data.dot.state.mn.us/dds/det_sample.xml.gz to disk. Anyone have experience downloading gzipped files with C#?
A: You can use WebClient in System.Net to download:
WebClient client = new WebClient();
client.DownloadFile("http://data.dot.state.mn.us/dds/det_sample.xml.gz", @"C:\mygzipfile.gz");
then use #ziplib to extract
Edit: or GZipStream... forgot about that one
A: Try the SharpZipLib, a C# based library for compressing and uncompressing files using gzip/zip.
Sample usage can be found on this blog post:
using ICSharpCode.SharpZipLib.Zip;
FastZip fz = new FastZip();
fz.ExtractZip(zipFile, targetDirectory,"");
A: Just use the HttpWebRequest class in the System.Net namespace to request the file and download it. Then use GZipStream class in the System.IO.Compression namespace to extract the contents to the location you specify. They provide examples.
A: To compress:
using (FileStream fStream = new FileStream(@"C:\test.docx.gzip",
FileMode.Create, FileAccess.Write)) {
using (GZipStream zipStream = new GZipStream(fStream,
CompressionMode.Compress)) {
byte[] inputfile = File.ReadAllBytes(@"c:\test.docx");
zipStream.Write(inputfile, 0, inputfile.Length);
}
}
To Decompress:
using (FileStream fInStream = new FileStream(@"c:\test.docx.gz",
FileMode.Open, FileAccess.Read)) {
using (GZipStream zipStream = new GZipStream(fInStream, CompressionMode.Decompress)) {
using (FileStream fOutStream = new FileStream(@"c:\test1.docx",
FileMode.Create, FileAccess.Write)) {
byte[] tempBytes = new byte[4096];
int i;
while ((i = zipStream.Read(tempBytes, 0, tempBytes.Length)) != 0) {
fOutStream.Write(tempBytes, 0, i);
}
}
}
}
Taken from a post I wrote last year that shows how to decompress a gzip file using C# and the built-in GZipStream class.
http://blogs.msdn.com/miah/archive/2007/09/05/zipping-files.aspx
As for downloading it, you can use the standard WebRequest or WebClient classes in .NET.
A: The GZipStream class might be what you want.
A: You can use the HttpContext object to download a csv.gz file
Convert you DataTable into string using StringBuilder (inputString)
byte[] buffer = Encoding.ASCII.GetBytes(inputString.ToString());
HttpContext.Current.Response.Clear();
HttpContext.Current.Response.Buffer = true;
HttpContext.Current.Response.ContentType = "application/zip";
HttpContext.Current.Response.AddHeader("Content-Disposition", string.Format("attachment;filename={0}.csv.gz", fileName));
HttpContext.Current.Response.Filter = new GZipStream(HttpContext.Current.Response.Filter, CompressionMode.Compress);
HttpContext.Current.Response.AppendHeader("Content-Encoding", "gzip");
using (GZipStream zipStream = new GZipStream(HttpContext.Current.Response.OutputStream, CompressionMode.Compress))
{
zipStream.Write(buffer, 0, buffer.Length);
}
HttpContext.Current.Response.End();
You can extract this downloaded file using 7Zip
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16833",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
} |
Q: Getting Started with Unit Testing
Unit testing is, roughly speaking, testing bits of your code in isolation with test code. The immediate advantages that come to mind are:
*
*Running the tests becomes automate-able and repeatable
*You can test at a much more granular level than point-and-click testing via a GUI
Rytmis
My question is, what are the current "best practices" in terms of tools as well as when and where to use unit testing as part of your daily coding?
Lets try to be somewhat language agnostic and cover all the bases.
A: The so-called xUnit framework is widely used. It was originally developed for Smalltalk as SUnit, evolved into JUnit for Java, and now has many other implementations such as NUnit for .Net. It's almost a de facto standard - if you say you're using unit tests, a majority of other developers will assume you mean xUnit or similar.
A: A great resource for 'best practices' is the Google Testing Blog, for example a recent post on Writing Testable Code is a fantastic resource. Specifically their 'Testing on the Toilet' series weekly posts are great for posting around your cube, or toilet, so you can always be thinking about testing.
A: Ok, here are some best practices from someone who doesn't unit test as much as he should... cough.
*
*Make sure your tests test one thing and one thing only.
*Write unit tests as you go. Preferably before you write the code you are testing against.
*Do not unit test the GUI.
*Separate your concerns.
*Minimise the dependencies of your tests.
*Mock behaviour with mocks.
A: You might want to look at TDD on Three Index Cards and Three Index Cards to Easily Remember the Essence of Test-Driven Development:
Card #1. Uncle Bob’s Three Laws
*
*Write no production code except to pass a failing test.
*Write only enough of a test to demonstrate a failure.
*Write only enough production code to pass the test.
Card #2: FIRST Principles
*
*Fast: Mind-numbingly fast, as in hundreds or thousands per second.
*Isolated: The test isolates a fault clearly.
*Repeatable: I can run it repeatedly and it will pass or fail the same way each time.
*Self-verifying: The Test is unambiguously pass-fail.
*Timely: Produced in lockstep with tiny code changes.
Card #3: Core of TDD
*
*Red: test fails
*Green: test passes
*Refactor: clean code and tests
A: The xUnit family are the mainstay of unit testing. They are integrated into the likes of Netbeans, Eclipse and many other IDEs. They offer a simple, structured solution to unit testing.
One thing I always try and do when writing a test is to minimise external code usage. By that I mean: I try to minimise the setup and teardown code for the test as much as possible and try to avoid using other modules/code blocks as much as possible. Well-written modular code shouldn't require too much external code in its setup and teardown.
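For instance, here is a minimal sketch in Python's built-in unittest module (one of the xUnit implementations); the Stack class is purely hypothetical and exists only to keep the example self-contained:

import unittest

class Stack:
    """Hypothetical class under test, defined here only so the example runs on its own."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()

class StackTest(unittest.TestCase):
    def setUp(self):
        # The only setup needed: construct the object under test.
        self.stack = Stack()

    def test_pop_returns_last_pushed_item(self):
        # One test, one behaviour.
        self.stack.push(1)
        self.stack.push(2)
        self.assertEqual(self.stack.pop(), 2)

if __name__ == "__main__":
    unittest.main()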
A: NUnit is a good tool for any of the .NET languages.
Unit tests can be used in a number of ways:
*
*Test Logic
*Increase separation of code units. If you can't fully test a function or section of code, then the parts that make it up are too interdependent.
*Drive development. Some people write tests before they write the code to be tested. This forces you to think about what you want the code to do, and then gives you a definite guideline on when you have achieved that.
A: Don't forget refactoring support. ReSharper on .NET provides automatic refactoring and quick fixes for missing code. That means if you write a call to something that does not exist, ReSharper will ask if you want to create the missing piece.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16860",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25"
} |
Q: Sanitising user input using Python What is the best way to sanitize user input for a Python-based web application? Is there a single function to remove HTML characters and any other necessary characters combinations to prevent an XSS or SQL injection attack?
A: Jeff Atwood himself described how StackOverflow.com sanitizes user input (in non-language-specific terms) on the Stack Overflow blog: https://blog.stackoverflow.com/2008/06/safe-html-and-xss/
However, as Justin points out, if you use Django templates or something similar then they probably sanitize your HTML output anyway.
SQL injection also shouldn't be a concern. All of Python's database libraries (MySQLdb, cx_Oracle, etc) always sanitize the parameters you pass. These libraries are used by all of Python's object-relational mappers (such as Django models), so you don't need to worry about sanitation there either.
A: I don't do web development much any longer, but when I did, I did something like so:
When no parsing is supposed to happen, I usually just escape the data to not interfere with the database when I store it, and escape everything I read up from the database to not interfere with html when I display it (cgi.escape() in python).
Chances are, if someone tried to input html characters or stuff, they actually wanted that to be displayed as text anyway. If they didn't, well tough :)
In short always escape what can affect the current target for the data.
When I did need some parsing (markup or whatever) I usually tried to keep that language in a non-intersecting set with html so I could still just store it suitably escaped (after validating for syntax errors) and parse it to html when displaying without having to worry about the data the user put in there interfering with your html.
See also Escaping HTML
A: Here is a snippet that will remove all tags not on the white list, and all tag attributes not on the attribues whitelist (so you can't use onclick).
It is a modified version of http://www.djangosnippets.org/snippets/205/, with the regex on the attribute values to prevent people from using href="javascript:...", and other cases described at http://ha.ckers.org/xss.html.
(e.g. <a href="ja	vascript:alert('hi')"> or <a href="ja vascript:alert('hi')">, etc.)
As you can see, it uses the (awesome) BeautifulSoup library.
import re
from urlparse import urljoin
from BeautifulSoup import BeautifulSoup, Comment
def sanitizeHtml(value, base_url=None):
rjs = r'[\s]*(&#x.{1,7})?'.join(list('javascript:'))
rvb = r'[\s]*(&#x.{1,7})?'.join(list('vbscript:'))
re_scripts = re.compile('(%s)|(%s)' % (rjs, rvb), re.IGNORECASE)
validTags = 'p i strong b u a h1 h2 h3 pre br img'.split()
validAttrs = 'href src width height'.split()
urlAttrs = 'href src'.split() # Attributes which should have a URL
soup = BeautifulSoup(value)
for comment in soup.findAll(text=lambda text: isinstance(text, Comment)):
# Get rid of comments
comment.extract()
for tag in soup.findAll(True):
if tag.name not in validTags:
tag.hidden = True
attrs = tag.attrs
tag.attrs = []
for attr, val in attrs:
if attr in validAttrs:
val = re_scripts.sub('', val) # Remove scripts (vbs & js)
if attr in urlAttrs:
val = urljoin(base_url, val) # Calculate the absolute url
tag.attrs.append((attr, val))
return soup.renderContents().decode('utf8')
As the other posters have said, pretty much all Python db libraries take care of SQL injection, so this should pretty much cover you.
A: Edit: bleach is a wrapper around html5lib which makes it even easier to use as a whitelist-based sanitiser.
html5lib comes with a whitelist-based HTML sanitiser - it's easy to subclass it to restrict the tags and attributes users are allowed to use on your site, and it even attempts to sanitise CSS if you're allowing use of the style attribute.
Here's now I'm using it in my Stack Overflow clone's sanitize_html utility function:
http://code.google.com/p/soclone/source/browse/trunk/soclone/utils/html.py
I've thrown all the attacks listed in ha.ckers.org's XSS Cheatsheet (which are handily available in XML format) at it after performing Markdown to HTML conversion using python-markdown2 and it seems to have held up ok.
The WMD editor component which Stackoverflow currently uses is a problem, though - I actually had to disable JavaScript in order to test the XSS Cheatsheet attacks, as pasting them all into WMD ended up giving me alert boxes and blanking out the page.
A: The best way to prevent XSS is not to try and filter everything, but rather to simply do HTML Entity encoding. For example, automatically turn < into &lt;. This is the ideal solution assuming you don't need to accept any html input (outside of forum/comment areas where it is used as markup, it should be pretty rare to need to accept HTML); there are so many permutations via alternate encodings that anything but an ultra-restrictive whitelist (a-z, A-Z, 0-9 for example) is going to let something through.
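As a rough sketch of entity encoding with Python's standard library (cgi.escape in the Python 2 of this era; Python 3 offers html.escape instead), where the input string is just an illustrative example:

import cgi

user_input = '<script>alert("xss")</script>'
safe_output = cgi.escape(user_input, quote=True)  # quote=True also escapes double quotes
# safe_output == '&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;'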
SQL Injection, contrary to other opinion, is still possible, if you are just building out a query string. For example, if you are just concatenating an incoming parameter onto a query string, you will have SQL Injection. The best way to protect against this is also not filtering, but rather to religiously use parameterized queries and NEVER concatenate user input.
This is not to say that filtering isn't still a best practice, but in terms of SQL Injection and XSS, you will be far more protected if you religiously use Parameterized Queries and HTML Entity Encoding.
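For illustration, a minimal sketch of the difference using the sqlite3 module from Python's standard library (the table and the malicious input are made up for the example):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

user_input = "Robert'); DROP TABLE users; --"

# Dangerous: concatenating user input straight into the SQL string.
# query = "INSERT INTO users (name) VALUES ('" + user_input + "')"

# Safe: let the driver bind the parameter, so the input is treated purely as data.
conn.execute("INSERT INTO users (name) VALUES (?)", (user_input,))
conn.commit()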
A: If you are using a framework like django, the framework can easily do this for you using standard filters. In fact, I'm pretty sure django automatically does it unless you tell it not to.
Otherwise, I would recommend using some sort of regex validation before accepting inputs from forms. I don't think there's a silver bullet for your problem, but using the re module, you should be able to construct what you need.
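As a small illustrative sketch with the re module (the whitelist pattern here is an arbitrary example, not a recommendation for your particular data):

import re

# Accept only letters, digits, underscores and hyphens, 1 to 30 characters long.
VALID_USERNAME = re.compile(r'^[A-Za-z0-9_-]{1,30}$')

def is_valid_username(value):
    return bool(VALID_USERNAME.match(value))

# is_valid_username('alice_99')  -> True
# is_valid_username('<script>')  -> False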
A: To sanitize a string input which you want to store to the database (for example a customer name) you need either to escape it or plainly remove any quotes (', ") from it. This effectively prevents classical SQL injection which can happen if you are assembling an SQL query from strings passed by the user.
For example (if it is acceptable to remove quotes completely):
datasetName = datasetName.replace("'","").replace('"',"")
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16861",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "63"
} |
Q: Choosing a multiplier for a (string) hash function Do you have any advice/rules on selecting a multiplier to use in a (multiplicative) hash function. The function is computing the hash value of a string.
A: You want to use something that is relatively prime to the size of your set. That way, when you loop around, you won't end up on the same numbers you just tried.
A: I had an interesting discussion with a coworker about hash function recently. Our conclusions were as follows:
If you really need to write a good hash function that minimizes collisions more than the default implementations available in the standard languages you need an advanced degree in mathematics.
If you're writing applications where a custom hash function will noticeably improve the performance of your application, you're Google and you've got plenty of Math PhDs to do the work.
Sorry to not directly answer your question, but the bottom line is that there's really no need to write your own hash function for String. What language are you working with? I'd imagine there's an easy way to compute a "good enough" hash code.
A: Historically 33 seems like a popular choice, and it tends to work pretty well. No one knows why though. For more details, look here
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16873",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: VS 2008 - ctrl-tab behavior As you may know, in VS 2008 ctrl+tab brings up a nifty navigator window with a thumbnail of each file. I love it, but there is one tiny thing that is annoying to me about this feature: the window stays around after releasing the ctrl key. When doing an alt+tab in windows, you can hit tab to get to the item you want (while still holding down the alt key), and then when you find what you want, lifting up on the alt key selects that item.
I wish VS 2008 would do the same. For me, when I lift off of ctrl, the window is still there. I have to hit enter to actually select the item. I find this annoying.
Does anyone know how to make VS 2008 dismiss the window on the release of the ctrl key?
A: I found this behaviour when I was running VS2008 under windows 7 and had been using the magnifier app.
I suspect it would similarly occur under vista.
Basically I had zoomed all the way back out but not shut down the magnifier app. Once it was shut down, things returned to normal.
A: This bloody thing is haunting me as well. Visual Studio 2008 SP1 and Windows 7 64-bit. Setting the registry key mentioned in this thread won't help. Sara Ford brags that she knows the right key (http://blogs.msdn.com/saraford/archive/2008/01/04/did-you-know-use-ctrl-tab-to-bring-up-the-ide-navigator-to-get-a-bird-s-eye-view-and-navigation-of-all-open-files-and-tool-windows-in-visual-studio.aspx) but she won't tell. I guess the marvelous tip is too large to fit on the margin of the page or something.
Also, turning all the narrator options off or on doesn't help (but it does make me and my co-workers crazy.) As a bonus, starting narrator.exe (WIN-R narrator ENTER) starts up the Windows magnifier (magnifier.exe) and it immediately zooms in to molecule level without giving any way to zoom back (ctrl +/-, win -/+, win-mouse wheel, esc don't work.) You have to kill it from Task Manager, which is bloody easy when each pixel is the size of a pick-up truck. Magnifier never starts up when you need it (it's supposed to start up with win-+), but it does occasionally go into a mode where it starts up ON EACH BLOODY LOGIN, remote desktop or not. And zooms in to atom scale with no way to get back. Then later (a week or so) it goes away. The control panel setting doesn't seem to help.
Also, I've had this sticky ctrl-tab issue at least twice to three times (right now I'm having it) and it has gone away after few weeks without clear reason why. I've just bit my teeth and gone on. But now I've had enough.
Microsoft: I won't blame you for adding accessibility features for disabled people, but for god's sake don't in-your-face them at me all the fscking time. Fix the bloody thing, or I'll make sure you seriously need accessibility features for the rest of your life.
A: I worked around this issue reassigning the ctrl-tab shortcut key to Window.NextDocumentWindow instead of Window.NextDocumentWindowNav (IDE Navigator). None of the above workarounds fixed the problem for VS 2010 on Win 7.
MS needs to fix this issue!
A: You probably have the text-to-speech narrator enabled.
http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=2467648&SiteID=1&mode=1
Just uncheck all checkboxes under
"Text-To-Speech" narrator software.
--> To open Narrator using the keyboard, press CTRL+ESC, press R,
type narrator, and then press Enter.
This one drove me crazy for several months until I found this posting.
A: Strange. My VS2008SP1 install exhibits your desired behavior (in a web application project). I do not recall making any explicit changes.
A: When "Windows Speech Recognition" is running (even though not listening to commands), the VS2010 exhibits this behavior. Exiting "Windows Speech Recognition" restores the default i.e. the selection can be changed by pressing Tab key again and again while keeping the Ctrl key pressed and document is selected as soon as Ctrl key is released.
A: Just in case anyone still needed a fix for this (I've encountered this behavior in VS2010) what you can do is:
*
*Close VS
*Enable sticky keys
*Reopen VS
*Disable sticky keys
This solved it for me.
A: Check the "narrator" answer. I'm pretty sure it's to allow time for the narrator to read the choices... you can then "enter" the selection when you're sure of your choice.
Otherwise, check your 'sticky keys' settings (control panel\accessibility options\keyboard) and uncheck the options.
A: Just turning off the narrator didn't work for me.
What I did (besides unchecking the narrator stuff) was go to the Control panel's Ease of Access Center on each of the Explore All Settings screens, unchecked any options which were still checked, and then clicked Apply.
Once I did this, things went back to working.
Even if none of the checkboxes are checked on one of the explore all settings screen make sure to still click the Apply button, as it seems that just unchecking the narrator stuff does not always work - but clicking on the Apply button on the various sections will effectively reset and apply the settings.
A: I came across the same issue with VS2012 today; it was all good -- releasing the Ctrl key would activate the doc that had focus in that nifty pop-up window.
My cause was 'Inspect' that I started using this morning. Apparently it holds the pop up window in order to give you more time to play around.
Simply kill 'Inspect' and all is back to normal.
I don't know the fix to have 'Inspect' running & normal behaviour at the same time.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16891",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Delphi and COM: TLB and maintenance issues In the company where I work, we develop all the GUIs in C#, but the application kernel is mainly developed in Delphi 5 (for historical reasons), with a lot of components made in COM+. Related to this very specific sort of application I have two questions:
*
*Experienced guys in Delphi and/or COM, do you have any workarounds for working with the buggy TLB interface?
Some of the bugs are: IDE crashing during editing of a large TLB, loss of method IDs, TLB corruption, etc.
Here, we haven't found any good solution. Actually, we tried to upgrade to the new 2007 version, but the new IDE TLB interface has the same bugs that we found before.
*How do you control TLB versions? The TLB file is in a binary format and conflict resolution is very hard to do. We tried exporting the interface descriptions to IDL and committing them into CVS, but we didn't find any good way to generate TLBs from IDL using Delphi. Additionally, the MIDL tool provided by Microsoft didn't correctly parse the IDL files that we exported from Delphi.
A: In the distant past (before I started working for CodeGear) I gave up on the odd Delphi-ized IDL language that the IDE presented, and wrote my own IDL and compiled it using MS midl. This largely worked; the only catch, IIRC, was making sure dispids (id attribute) were correct on automation interfaces (dispinterfaces) for property getters & setters - there was some invariant that tlibimp expected but midl didn't guarantee.
However, now that Delphi 2009 uses a safe subset of midl syntax, and includes a compiler for this midl in the box and integrated into the IDE, these problems should be a thing of the past.
A: We have also just installed Delphi 2009 and it does seem to have improved the support for Typelibraries. However I have worked with COM and type libraries for quite some time and here are my general gotchas that I have found over the years. I would agree its pretty buggy and is all the way up to Delphi 2006 (our version prior to using 2009).
*
*Always have every file writeable before opening. This may sound obvious, but when working with source control we sometimes forget to do this and try to remove the read-only flag after opening a file - Delphi can't deal with this. Ensure the TLB is writable before opening.
*If editing a standalone typelibrary you MUST have a project open. For some reason if you open a type library on its own it will not save. Create a blank project and then open your typelibrary. For some reason this allows the type library to be saved.
*If your type library is used by an application or COM+ ensure that application is shut down or COM+ disabled before opening the type library. Any open apps will prevent the type library from being saved.
However I think your best solution is probably an upgrade. You get Unicode support too.
A: Using Delphi 2009 has greatly taken much of the pain out of huge TLB files, and conversion of our existing objects was painless, but our com objects don't use any third party libraries.
We will be migrating our gui applications over once the library vendors release supported versions.
A: Same experience with the TLB interface here: we simply stopped using it.
We work with several separate IDL files (hand-built) for different parts of our framework, making use of the #include construct to include them in the IDL of the actual application, then generate the single TLB using MIDL and tlibimp it. If the application has no IDL of its own, pre-compiled versions of the different framework TLB files are available.
Whenever the framework enters a new version, a script is run to re-generate the GUIDS on all necessary interfaces in the IDL files.
This has served us well for many years, and for us to move over, the new Delphi 2009 IDL/TLB toolset will have to be not only integrated into the IDE, but also versatile when it comes to automated builds and whatnot. Can't wait to get my hands dirty with some experiments!
A: I think you should have a good look at Delphi 2009.
Delphi 2009 has changes to the COM support, including a text-based replacement for the binary TLB files.
You can read more on Chris Bensen's blog.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16897",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Beginners Guide to Haskell? I've been looking for a decent guide to Haskell for some time, but haven't been able to find one that seems interesting enough to read through and/or makes sense.
I've had prior exposure to Haskell a few years back, but I can't remember much about it. I remember the "Aha!"-feeling was incredible when I finally got it, and it was actually fun to play with, so I'm looking to rediscover the lost art of Haskell.
I'm familiar with Ruby and its functional programming tricks, so I think I'm not completely in the dark. Any links?
A: This looks like it fits the bill in the style of Why's Poignant Guide to Ruby.
Learn You a Haskell for Great Good!
A: A rather late response, but I thoroughly enjoyed reading Learn You A Haskell, available online as well as in book form.
A: I've been told to look at
Programming in Haskell, from Graham Hutton
A: In addition to "Real World Haskell", find a copy of "Haskell: The Craft of Functional Programming". Great textbook.
A: Some good places to start are:
*
*The Gentle Introduction To Haskell
*Problem Solving in Haskell
*Happy Learn Haskell Tutorial
Other resources:
*
*Interesting blog entry on a Study plan for Haskell via the Wayback Machine
*HaskellWiki
*Generic Haskell User Guide (PDF)
A: I like Haskell Tutorial for C Programmers. Especially if you are coming from an imperative language background as I do.
A: I have downloaded 10 slides from this page http://www.cs.nott.ac.uk/~gmh/book.html and have gone through them many times. It workz ;)
A: Strange that nobody suggested Real World Haskell. That's IMHO the best Haskell book you can currently get, and you can get it for online or offline reading.
A: One thing that is really unique about Haskell is that there is a mailing list exactly for beginners. Go to Haskell-Beginners.
Reading books is good, but having some humans to ask is always a great resource, too. Together, I think there is absolutely no reason to say "Haskell is hard to learn because there's no material on it."
You might also want to visit #haskell at irc.freenode.net.
A: There is also a nice lecture series from RWTH Aachen.
*
*here you will find exams and exercises (possibly in German)
*and here are the recordings of the solutions
I got all of this info from the Haskell Wiki's Video presentations page.
A: If you're like me, and like videos of presentations, then this is a good tutorial:
A Taste of Haskell
*
*Part 1
*Part 2
*Slides
It's a three-hour tutorial, that uses xmonad as a running example to explain Haskell to experienced (imperative) programmers.
The presentation is given by Simon Peyton-Jones who, besides being one of the top Haskell designers, is also a great speaker.
A: This is where I started.
haskell.org
A: Once you get past the beginning stages, I would highly recommend reading Real World Haskell.
A: The Haskell wikibook which includes the text from the great tutorial Yet Another Haskell Tutorial.
(The "Generic Haskell User Guide" paper is a fine paper, but I think it is a particularly bad recommendation for a beginning Haskell programmer, as it is more of an academic paper presenting extensions to Haskell and basically a different language "Generic Haskell" (i.e. Haskell with an old version of Generics) instead of standard Haskell 98. <irony>If you were looking for dense reading about Haskell, start with the Haskell 98 report.</irony>)
A: Real World Haskell is a really good book.
A: Yet Another Haskell Tutorial (PDF) worked for me.
Edit: Updike points out that the text of YAHT has been folded into the Haskell Wikibooks. The PDF is still useful if you (like me) prefer to print out and read on paper.
BTW I have also read A Gentle Introduction To Haskell (also available as PDF). I will definitely not recommend this for beginners. It is only gentle compared to the Haskell Report. However it is a good reference when you have a solid understanding of the language.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16918",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "122"
} |
Q: Tool for generating CSS skeleton? My HTML is all marked up, ready to make it rain CSS. The problem is that I have to go back and find out what all my id and class names are so I can get started. What I need is a tool that parses my HTML and spits out a stylesheet with all the possible elements ready to be styled (maybe even with some defaults). Does such a tool exist?
A: When I first saw this, I thought "Great question! Neat answer, danb!"
After a little thought, I'm not so sure this is a good idea. It's a little like generating event handlers for all controls in an ASP.NET page, or generating CRUD procedures for all tables in a database. I think it's better to create them as needed for two reasons:
*
*Less clutter from empty style declarations
*Less temptation to misuse (or underuse) CSS by writing everything at the class level rather than using descendant selectors like (#navigation ul li a).
A: http://lab.xms.pl/css-generator/ seems to fit the description.
A: I agree with Jon, but I don't see a problem* with doing what the OP wants. Using the script provided, you'd know all of your classes and ids. While working on your CSS, you should be deciding if you need to use each of them. At the end, or at the point that you feel like you have a good handle on what you're doing, run it through an optimizer / compressor so it removes unused ids and classes.
*Operating assumption: You either didn't write the original HTML or you wrote it and later decided that "gosh CSS would be really nice here now, I wish I would have started with it." :-)
A: Not that it isn't a sensible question with a sensible answer, but it implied to me the kind of unnecessarily marked-up HTML that people create when they don't understand positional selectors: the kind of code where everything has a class and an id.
<div id="nav">
<ul id="nav_list">
<li class="nav_list_item">
<a class="navlist_item_link" href="foo">foo</a>
</li>
<li class="nav_list_item">
<a class="navlist_item_link" href="bar">bar</a>
</li>
<li class="nav_list_item">
<a class="navlist_item_link" href="baz">baz</a>
</li>
</ul>
</div>
you can remove everything except the id on the div and still be able to style everything there by its position; and obviously, the script won't show you all those possible selectors, will it?
In other words, a narrow focus on CSS as something done to classes and ids is a concern.
A: I have a poor man's version of this I have used in the past... this requires jquery and firebug...
<script type="text/javascript">
$(document).ready(function() {
$('*[@id]').each(function() {
console.log('#' + this.id + ' {}');
});
$('*[@class]').each(function() {
$.each($(this).attr('class').split(" "), function() {
console.log('.' + this + ' {}');
});
});
});
</script>
it gives you something like this:
#spinner {}
#log {}
#area {}
.cards {}
.dialog {}
.controller {}
if you want them in "natural" page order instead...
<script type="text/javascript">
$(document).ready(function() {
$('*').each(function() {
if($(this).is('[@id]')) {
console.log('#' + this.id + ' {}');
}
if($(this).is('[@class]')) {
$.each($(this).attr('class').split(" "), function() {
console.log('.' + this + ' {}');
});
}
});
});
</script>
I just load the page with that script in there, then cut and paste the results out of firebug... then obviously, remove the script :)
you'll need to remove the dups manually or just toss in some simple dup checking logic with a map or array or something.. one for IDs and one for classes.
A: This blog entry refers to something similar to what you need here.
It contains a link to a Perl script called 'stylizator.pl'. This script parses the html to look for possible CSS elements and outputs them to a file.
A: Another way to approach this is to standardise the id and class names you use in your HTML according to some sort of naming convention.
A: I disagree with Jon. While this solution can be used poorly in the way he describes, it does not necessarily mean it will. Any wise developer or designer is going to take the script generated css classes and pull only what is really needed into the css file.
The solution still solves the OP's question.
A: I've made a generator which makes the html and css for you = https://www.andy-howard.com/css-skeleton-screen-generator/
Not much else to say really, it utilises the :empty selector in css.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: Ant's tasks throws StackOverflowException I'm trying to compile over 100 java classes from different packages from a clean
directory (no incremental compiles) using the following ant tasks:
<target name="-main-src-depend">
<depend srcdir="${src.dir}"
destdir="${bin.dir}"
cache="${cache.dir}"
closure="true"/>
</target>
<target name="compile" depends="-main-src-depend"
description="Compiles the project.">
<echo>Compiling</echo>
<javac target="${javac.target}"
source="${javac.source}"
debug="${javac.debug}"
srcdir="${src.dir}"
destdir="${bin.dir}">
<classpath>
<path refid="runtime.classpath"/>
<path refid="compile.classpath"/>
</classpath>
</javac>
</target>
However, the first time I run the compile task I always get a StackOverflowException. If I run the task again the compiler does an incremental build and everything works fine. This is undesirable since we are using CruiseControl to do an automatic daily build and this is causing false build failures.
As a quick-and-dirty solution I have created 2 separate tasks, compiling portions of the project in each. I really don't think this solution will hold as more classes are added in the future, and I don't want to be adding new compile tasks every time we hit the "compile limit".
A:
It will be nice to know; what can
cause or causes a StackOverflowError
during compilation of Java code?
It is probable that evaluating the long expression in your java file consumes lots of memory and because this is being done in conjunction with the compilation of other classes, the VM just runs out of stack space. Your generated class is perhaps pushing the legal limits for its contents. See chapter 4.10 Limitations of the Java Virtual Machine in The Java Virtual Machine Specification, Second Edition.
Fix 1: refactor the class
Since your class is being generated, this might not be an option. Still, it is worth looking at the options your class generation tool offers to see if it can produce something less troublesome.
Fix 2: increase the stack size
I think Kieron has one solution when he mentions the -Xss argument. javac takes a number of non-standard arguments that will vary between versions and compiler vendors.
My compiler:
$ javac -version
javac 1.6.0_05
To list all the options for it, I'd use these commands:
javac -help
javac -X
javac -J-X
I think the stack limit for javac is 512Kb by default. You can increase the stack size for this compiler to 10Mb with this command:
javac -J-Xss10M Foo.java
You might be able to pass this in an Ant file with a compilerarg element nested in your javac task.
<javac srcdir="gen" destdir="gen-bin" debug="on" fork="true">
<compilerarg value="-J-Xss10M" />
</javac>
A: <javac srcdir="gen" destdir="gen-bin" debug="on" fork="true">
<compilerarg value="-J-Xss10M" />
</javac>
from the comment above is incorrect. You need a space between the -J and -X, like so:
<javac srcdir="gen" destdir="gen-bin" debug="on" fork="true">
<compilerarg value="-J -Xss10M" />
</javac>
to avoid the following error:
[javac]
[javac] The ' characters around the executable and arguments are
[javac] not part of the command.
[javac] Files to be compiled:
...
[javac] javac: invalid flag: -J-Xss1m
[javac] Usage: javac
A: Does this happen when you run the javac command from the command line? You might want to try the fork attribute.
A: Try adding some variation of these attributes to the Ant javac task line:
memoryinitialsize="256M" memorymaximumsize="1024M"
You can also try fork="true", not sure if this allows you to set values for stack and heap (aka -Xm1024), but it may help (if it would work from the command line, but not in Ant).
[Edit]:
Added link -- the javac task page would seem to suggest that the parameters above require that you do also set fork="true".
A: That's quite odd, 100 classes really isn't that many. What is the compiler doing when the stack overflows? Is there a useful stack trace generated? What happens if you run javac directly on the command line instead of through ant?
One possible workaround is to simply increase the size of the stack using the -Xss argument to the JVM; either to the JVM running ant or by setting fork="true" and a <compilerarg> on the <javac> task. Actually, now that I think of it, does the problem go away just by putting in fork="true"?
A: Here is what I found.
After posting my question I went on and modified the compile task with the attributes fork="true", memoryinitialsize="256m" and memorymaximumsize="1024m" (I found today that this was suggested by Kieron and jmanning2k, thanks for your time). This didn't solve the problem, however.
I decided to start removing classes from the source tree to see if a could pinpoint the problem. Turns out we had a Web Service client class for Axis 1.4 that was auto-generated from a WSDL file. Now, this class is a monster (as in Frankenstein), it has 167 field members (all of them of type String), 167 getter/setter pairs (1 for each field), a constructor that receives all 167 fields as parameters, an equals method that compares all 167 fields in a strange way. For each field the comparison goes like this:
(this.A == null && other.getA() == null) || (this.A != null && this.A.equals(other.getA()))
The result of this comparison is "anded" (&&) with the result of the comparison of the next field, and so on. The class goes on with a hashCode method that also uses all fields, some custom XML serialization methods and a method that returns an Axis-specific metadata object that describes the class and that also uses all field members.
This class is never modified, so I just put a compiled version in the application classpath and the project compiled without issues.
Now, I know that removing this single source file solved the problem. However, I have absolutely no idea why this particular class caused it. It would be nice to know: what can cause a StackOverflowError during compilation of Java code? I think I'll post that question.
For those interested:
*
*Windows XP SP2
*SUN's JDK 1.4.2_17
*Ant 1.7.0
A: Some of the other answers mentioned fixes that require setting fork="true", but another option is to bump up the stack space of the underlying JVM created by ant, by setting the ANT_OPTS environment variable:
ANT_OPTS=-Xss10M ant
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16935",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Difference between a byte array and MemoryStream I am reading a binary file into a parsing program. I will need to iterate through the file and look for certain markers so I can split the file up and pass those parts into their respective object’s constructors.
Is there an advantage to holding the file as a stream, either MemoryStream or FileStream, or should it be converted into a byte[] array?
Keith
A: A byte[] and a MemoryStream will both require bringing the entire file into memory. A MemoryStream is really a wrapper around an underlying byte array. The best approach is to have two FileStreams (one for input and one for output). Read from the input stream, looking for the pattern used to indicate the file should be separated, while writing to the current output file.
You may want to consider wrapping the input and output files in a BinaryReader and BinaryWriter respectively if they add value to your scenario.
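A rough sketch of that approach (only illustrative: the marker byte and file names are made up, a using System.IO directive is assumed, and a real marker test would have to handle multi-byte markers):
const byte Marker = 0xFF; // hypothetical single-byte section marker
int part = 0;
using (FileStream input = new FileStream("input.bin", FileMode.Open, FileAccess.Read))
using (BinaryReader reader = new BinaryReader(input))
{
    BinaryWriter writer = new BinaryWriter(new FileStream("part" + part + ".bin", FileMode.Create));
    while (input.Position < input.Length)
    {
        byte b = reader.ReadByte();
        if (b == Marker)
        {
            // marker found: close the current part and start the next one
            writer.Close();
            part++;
            writer = new BinaryWriter(new FileStream("part" + part + ".bin", FileMode.Create));
        }
        else
        {
            writer.Write(b);
        }
    }
    writer.Close();
}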
A: A MemoryStream is basically a byte array with a stream interface, e.g. sequential reading/writing and the concept of a current position.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16939",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: VS 2008 - Detachable code tabs Admittedly this might not be a problem on larger screens, but my employer is a bit tight and refuses to buy monitors larger than 19 inches. This means that I don't have much screen real estate to view all the Visual Studio windows and my code at the same time, or two pieces of code at once. Is there anything that allows me to detach the code panels so that I can view two different classes at once, one on each of my screens?
A: You can right click on the tab strip and insert a new vertical (or horizontal) tab group.
This allows you to view multiple tabs at the same time.
A: You could stretch visual studio across both monitors then put two code windows next to each other.
Basically, you are manually maximizing VS across both screens.
A: Visual Studio 2010 will support detaching code panels from the main window, but this is not supported in Visual Studio 2008 and earlier.
Another option would be to open the same solution in a second instance of Visual Studio. When changes are made to a file, Visual Studio will prompt you to reload the file (not ideal, but it might be better than manually resizing windows).
A: Hmm.. I don't think there is a way from within Visual Studio. For maximizing real estate and working on simultaneous files, I use that method plus viewing the files on Full Screen mode.
Do you have multiple monitors?
A: Tools>Options>General>Multiple Documents
A: If you don't need to compile one of the code screens, have you thought about just opening Notepad++ or PSPad in your other monitor and viewing the second batch of code that way? They have context sensitive coloring that would assist in reading. I do this all the time.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16940",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What is the best way to rename (move) file system branches in .NET? I would like to rename files and folders recursively by applying a string replacement operation.
E.g. The word "shark" in files and folders should be replaced by the word "orca".
C:\Program Files\Shark Tools\Wire Shark\Sharky 10\Shark.exe
should be moved to:
C:\Program Files\Orca Tools\Wire Orca\Orcay 10\Orca.exe
The same operation should be of course applied to each child object in each folder level as well.
I was experimenting with some of the members of the System.IO.FileInfo and System.IO.DirectoryInfo classes but didn't find an easy way to do it.
fi.MoveTo(fi.FullName.Replace("shark", "orca"));
Doesn't do the trick.
I was hoping there is some kind of "genius" way to perform this kind of operation.
A: So you would use recursion. Here is a powershell example that should be easy to convert to C#:
function Move-Stuff($folder)
{
foreach($sub in [System.IO.Directory]::GetDirectories($folder))
{
Move-Stuff $sub
}
$new = $folder.Replace("Shark", "Orca")
if(!(Test-Path($new)))
{
new-item -path $new -type directory
}
foreach($file in [System.IO.Directory]::GetFiles($folder))
{
$new = $file.Replace("Shark", "Orca")
move-item $file $new
}
}
Move-Stuff "C:\Temp\Test"
A: string oldPath = "\\shark.exe"
string newPath = oldPath.Replace("shark", "orca");
System.IO.File.Move(oldPath, newPath);
Fill in with your own full paths
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16945",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Who actually uses DataGrid/GridView/FormView/etc in production apps? Curious if others feel the same as me. To me, controls such as datagrid/gridview/formview/etc. are great for presentations or demos only. Taking the time to tweak these controls and override their default behavior (hooking into their silly events etc.) is a big headache.
The only control that I use is the repeater, since it offers me the most flexibility of all of them.
I'd rather weave my own html/css, use my own custom paging queries.
Again, if you need to throw up a quick page these controls are great (especially if you are trying to woo people into the ease of .NET development).
I must be in the minority, otherwise MS wouldn't have dedicated so much development time to these types of controls...
A: Anyone that thinks nobody uses *Grid controls has clearly never worked on an internal corporate webapp.
A: Every single app we develop at my company has grids (the apps are all behind the firewall). That includes both web apps and WinForms apps. For the web apps it's the good ole GridView with custom sorting; for the WinForms apps we use the Janus grid. I'm trying to get the developers/users to think of better user interfaces, but it's tough to change. I gotta admit it's still better than the alternative of the users building their "own" apps with Access that I would then need to support!
A: Using controls like the GridView are great for simple apps. Even if you are a server-side HTML bracket-twiddling ninja, they can make developing simple stuff much less time consuming. The problem is that they usually start to expose their shortcomings eventually, and you end up having to spend time tweaking them anyway. But at least you can get up and going quickly to start out with.
For example, the default paging in a GridView doesn't support paging in the database itself (you have to load all the rows before it will page them), so once you start feeling that pinch in performance, you may need to think about rolling your own or, perhaps better, find a more capable grid control.
Anyway, the point is that pre-built components are good. They help. But as usual, it depends on what you need them to do.
A: I've actually used GridView extensively for an administrative console. I even created a custom DataFieldControl that sets the field's header text and sort expression based on the data field, creates an Insert row at the bottom that automatically collects the values in the row and forwards them to the data source's insert method, and generates a list box if an additional list data source is specified. It's been really useful, though a huge time investment to build.
I also have another control that will generate a new data form based on the fields' metadata when there are no records (in the EmptyDataTemplate).
<asp:GridView ...>
<Columns>
<my:AutoField HeaderText="Type"
DataField="TypeId"
ListDataSourceID="TypesDataSource"
ListDataTextField="TypeName" />
</Columns>
<EmptyDataTemplate>
<my:AutoEmptyData runat="server" />
</EmptyDataTemplate>
</asp:GridView>
A: I really like the telerik radgrid. Their product ain't cheap, but you get a lot of controls and features. And the data binding support is pretty good, both in a simple asp.net data source binding way and in a more custom handle-your-own-databinding-events kind of way.
A: At my company we use grids everywhere, mostly ComponentArt Grid (http://www.componentart.com/). Yeah it's bloatware but you get a lot of functionality that wouldn't be much fun to re-invent: sorting, paging, grouping, column reordering, inline editing, templating (server-side and client-side). The client-side APIs are nice too.
A: I like the GridView control and have used it in several custom DotNetNuke modules for my company's web site. For one thing, using the built-in controls means less dependencies to worry about. And once I had it set up how I wanted it, I basically copied the code to other pages and just had to do minor tweaks.
I've found that there are so many options with modern grid controls (Infragistics, Telerik, etc) that it takes longer to configure the grid than anything else. The MS controls are pretty simple yet they can do pretty much anything.
A: They are one of the benefits of asp.net. Up until just recently I hated them, but the more you use them the easier they become, once you learn what setting you must change for which instances. Mainly I like the form view and listview the gridview still needs some work.
A: For my corporate intranet projects, grids are indispensable. They are the foundation for easy reporting on the ASP.NET webforms platform.
Easy to Design
Paste the grid on the page. Insert BoundField objects for simple binding. asp:HyperlinkField for easy linking.
Binding
You can bind grids in a handful of ways:
*
*a collection of objects (List, ArrayList, Hashtable, or any simple collection)
*SqlDataReader in your code-behind (yikes, that would require SQL in your presentation tier)
*SqlDataSource (specify a stored proc. All the columns on the resultset map directly to the grid's columns. It's a very quick and dirty if the report doesn't mimic your domain object nicely. i.e. summations of different things.)
*objectDataSource (binding to a method on your BL)
For those who might call out SqlDataSource and ObjectDataSource, you don't always have to have them declared in your .aspx.cs or .aspx.vb . I am not advocating them here, just pointing out the possibilities.
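As a minimal illustration of the first option above (binding to a plain collection), something like this in the code-behind is enough; ReportGrid and ReportRow are made-up names for this sketch:
protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        // any IEnumerable will do as a data source for the grid
        List<ReportRow> rows = new List<ReportRow>
        {
            new ReportRow { Name = "Alpha", Total = 10 },
            new ReportRow { Name = "Beta", Total = 25 }
        };
        ReportGrid.DataSource = rows;
        ReportGrid.DataBind();
    }
}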
I don't think you can discount the RAD benefits of the built-in GridView and other 3rd party grids. Management types love and want tabular data.
A: We use the Infragistics UltraWebGrid + LinqDataSource on our intranet apps.
It gives us ajax, sorting, filtering, paging all server side.
The "export to excel" also is a killer feature.
We have 5000+ users, lots of data, and performance is excellent.
A: I largely abandoned grids once I started designing from user stories, rather than from database table requirements. And never editable grids. The old way was just how we coerced users into doing data entry/table maintenance for our systems, and it never matched their workflow - any real job ended up skipping from one master/child form to another.
And the users never figured it out - but they sure knew our applications were harder to use than they should be.
An exception is analytical applications. But there are relatively few of those, and they are largely read-only.
A: I too would like to see an expanded answer on why GridView et al are considered "bloatware." I have extensively used GridView as well as 3rd party products (Telerik, etc) and find that for the majority of internal and some external projects, they work great. They are fast, easy to use, customizable - and BEST - I can hand them over to someone who knows GridViews who can then easily pick up where I left off. If I were to hand-code all of the numerous apps/controls, the overhead in the next person figuring out what is going on would be enormous even under the best of circumstances.
For me, I can see some of the 3rd party products being bloatware (but still sometimes useful), but the bare-bones GridView I've found to be quite fast with moderate queries.
A: Components like the GridView/FormView/DataGrid follow the 80/20 rule.
This means that 80% of the time when you use them for simple purposes they get the job done and are extremely easy to implement.
But 20% of the time you will be trying to build something complex (or weird) and you will be forced to jump through a dozen hoops and bend the code in many ways to try to implement a solution.
The trick is to learn whether the problem is an 80 problem or 20 problem, if you can identify the 20 problem early you are much better off writing the code from scratch yourself and ditching the "time saving" one.
A: I use them extensively in the corporate environment I work in and I'm working with one right now. The people who don't use them remind me of all those "I built it with Notepad" developers of years past. What's the point of using asp.net if you're not going to take advantage of the time savings?
A: I'm pretty much writing my own HTML - I'm using the ListView and Masterpages, but not really using the controls much anymore. My ListView laughs at your silly old repeater, by the way.
However, bloatware isn't necessarily a bad thing. If I needed a low volume intranet application built, I'd much rather pay a less experienced developer to drag and drop controls than for an HTML twiddler (like you or me) to craft each tag. There's definitely a place for the quick, simple approach. What's the cost of "bloatware" in that scenario, as long as the control based code is written in a maintainable fashion? Often wiring controls together requires less custom code, which means simple maintenance.
The one place I have to disagree with you - pretty much regardless of the application - is in crafting your own paging queries. You may like to do that kind of thing, but there's absolutely no business value in it. There are several professional-grade DAL tools which will usually write more maintainable, faster queries than most developers. Even if you lovingly craft the perfect paging query, it won't keep up to date with changes to the schema unless you continue to throw hours after it. I think better use of those hours is to build a lightweight system and put those hours into monitoring and fixing specific bottlenecks, rather than immediately jumping to the "database assembly language" layer.
A: I've been reading your posts guys and it made me feel dumb.
I mean, every application I've made where I work has at least one datagrid/gridview in it. And I didn't have the feeling I was missing something.
Sure, I find the datagrid/gridview kinda bloated, but are they really that disgusting to use?
A: I think you need to learn to use GridViews before you condemn them. I use them extensively. At first it was a bit challenging to figure out certain things, but now they are indispensible.
GridViews within UpdatePanel with AJAX CRUD and pagination are lightning fast. One of the larger systems set up this way (for internal/external application) has a moderately sized db in the backend. There are many nvarchar(2000) fields and the transitions and updates are great.
In any event, if you've written your own version of displaying data, you may want to continue using it if it works. (Same argument could be made for writing your own compiler, writing your own version of HTML, writing your own version of data access binaries...) The advantage of using GridView is that there are a lot of people who are familiar with it and that MSFT has abstracted/modeled the class to do a lot of things that we used to have to do manually.
A: I have never used it. I completely agree, it's bloatware. I usually end up using the repeater with custom controls that I made.
A: For anything long term I would try to avoid datagrid/gridview; it sometimes becomes too hacky making it do exactly what you want. After a certain number of these tweaks you start to realise it's not saving time in the long run and you might not be getting the control over markup that you need.
However the built in paging and sorting functionality works well and in 2008 there is a new ListView control which aims to sort some of these problems out and give you tighter control of the html that is output.
A: I have wondered about this for a long time. There seems to be a consensus here that the grid controls are bloatware. But, can anyone definitively cite the cost of using these controls? Is there excessive HTML sent to the browser? Too much resource devoured on the server? Is generating the HTML table faster (assuming it's well-written)?
In addition to the bloatware issue, I have often run aground when UI requirements are enhanced to include features beyond the scope of the standard controls. For example, in early ASP.Net versions, I struggled with putting images in column headers. And, I believe it's still a challenge to add a second, top-level header row spanning multiple columns. At some point, it becomes really difficult to wrestle with the control to achieve the desired effect. And it's frustrating if you know the HTML you want, but you just can't make the control do it.
On one project, I finally gave up and wrote myself an HTML table class to generate a very complicated grid. It took a couple of days to get it just right. But, now I have the basic code, and it will be much more efficient to tweak that for future grids.
No doubt about it, though. It's hard to beat the fancy grid controls for speedy development, if you can just live within their limitations.
A: If you work with designers a lot on public facing web sites then you should ditch the GridViews and stick to repeaters. That's my opinion anyway - I've had to pull apart hundreds of GridViews and turn them into simple repeaters in order to fulfill the design requirements.
If you go near DataGrids or GridViews with a 10-foot pole on a public facing web site then you HAVE to use the CSS friendly Control Adapters. (At this point you might find it easier just to do it in the Repeater.) Prior to Control Adapters being available I would have considered these controls broken out of the box.
I find that too many .NET developers do not have a good understanding of design, accessibility, CSS, javascript, standards etc. which is why they succumb to GridViews, ObjectDataSources etc.
A: GridView is a fine and very powerful control and works well with CSS or themes. The only thing that annoys me is that the VirtualCount property was dropped when the old 1.1 DataGrid was replaced with the GridView in ASP.NET 2.0; it was useful for implementing custom paging. However, the same can be done via data adapters.
Though working with repeaters is maybe clearer and you have total control over the rendered html, I still wouldn't recommend going that way because it is harder to implement and maintain.
A: I never really used the standard WinForms grid before but at my last job we used the ComponentOne FlexGrid extensively and it worked beautifully. There were still some annoyances with trying to get all the customization we wanted but overall it saved us a ton of time and produced beautiful results.
Currently I'm working with Silverlight 3 and RIA Services and I can't imagine trying to produce what we're doing without the DataGrid and DataForm controls. The time being saved far outweighs any of the overhead.
A: I am a moderate-level developer and I can say that without these controls I couldn't ever have learned developing. You just have to commit yourself to them for a while until you find your way to customize them, and the end result will be great.
A: I'm trying to look at it all in context. I have a page that has a nice gridview (displays 10 rows at a time, 6 columns, sorting, and paging) and if I just look at the html table that is created along with the viewstate, I'm only seeing 29k of code.
Is 29k vs. 18k for using a repeater or listview really worth all the effort in these broadband times?
I personally stick with the gridviews however the design guy I work with sometimes gripes about trying to style it via css.
A: Just reading your posts. I agree PHP is easier than ASP, but I just started using Visual Studio for FormViews and GridViews. It cannot get much easier for either VB or C# programmers. ASP still has problems uploading large files; with PHP it's a snap. I run PHP under IIS 7.5.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16963",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Creating your own table with CommandArgument buttons in ASP.NET MVC I'm trying to implement something like this:
<div>
<table>
<thead>
<tr>
<td>Port name</td>
<td>Current port version</td>
<td>New port version</td>
<td>Update</td>
</tr>
</thead>
<% foreach (var ip in Ports) { %>
<tr>
<td>
<%= ip.PortName %>
</td>
<td>
<%= ip.CurrentVersion %>
</td>
<td>
<%= ip.NewVersion %>
</td>
<td>
<asp:Button ID="btnUpdate" runat="server" Text="Update" CommandArgument="<% ip.PortName %>" />
</td>
</tr>
<% } %>
</table>
</div>
The button's CommandArgument property is where my code complains about not being able to resolve symbol ip. Is there any way to do what I'm trying to do?
A: You don't want to use a Webforms button in ASP.NET MVC. MVC is a completely different way of working, and you no longer have the WebForms abstraction.
You have 2 different options: you can either replace your asp:Button with an input tag or use a standard hyperlink instead. If you use the input option then you will need to wrap it in a form element. The form action should point to a Controller action.
A: You can't use webform controls in ASP.NET MVC in a trivial manner because they rely on things that are stripped out in MVC. Instead you add a button in two ways, both using the HtmlHelper on the ViewPage:
You can add a button in a form, which is easily handeled in a controller if you have a form for each single button:
<% using(Html.BeginForm("Update", "Ip", new {portName = ip.PortName} )) { %>
....
<input name="action" type="submit" value="Update">
<% } %>
BeginForm() will default to the same controller and action as the view was created from. The other way is to add a link instead, which is more fitting to your example of iterating through a list. For example, let's say you have an IpController:
<%= Html.ActionLink("Update IP", "Update", "Ip",
new {
portName = ip.PortName
})
%>
The link will go to the Update action in IpController with the given portName as parameter. In both cases you'll need this action in IpController:
public ActionResult Update(string portName) {
// ...
}
Hope this helps.
A: I think you have to enclose your block in form tags and runat=server.
A: FWIW,
I think this text is missing an equals sign:
CommandArgument="<% ip.PortName %>"
Should be
CommandArgument="<%= ip.PortName %>"
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16964",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: JRuby / Rack deployment I know this is pretty exotic, but I want to deploy a Ruby web application (not Rails, but Rack based, so it plugs into most Ruby servers just fine) using JRuby. Google and friends give me a few success stories, but mostly rails related and if not, no details on the deployment are provided. The framework I'm using is Ramaze, if it matters.
Any help would be greatly appreciated.
A: In my opinion, running a Rack based application with a rackup script is the real Ruby way. And I wanted to apply the same for JRuby too. That is why I've written jetty-rackup http://github.com/geekq/jetty-rackup
We are using it for deploying a Sinatra web application. No Java specific configuration needed. A typical, small config.ru is enough. Embedded jetty web server is used in place of Webrick then.
A: This is the "just works" gem for me: https://github.com/matadon/mizuno
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: .net solution subversion best practices? There are so many examples of how to set up your dotnet projects but none seemed to fit our situation.
We have one solution with multiple applications, multiple dependencies. We're on SourceSafe currently and are planning to move to subversion but are finding it difficult to organize our source the right way.
*
*Example solution
*
*App1
*App2
*BizObjects
*DataAccess
*CustomControls
*Dependencies
*
*BizObjects->DataAccess
*App1->CustomControls
*App1->BizObjects
*App1->DataAccess
*App2->CustomControls
*App2->BizObjects
We also have a configuration management system which deploys (via copy from the database) depending on which workload the operator is working. We mark an application "release" with a version and to that release, we add multiple file dependencies. Bear in mind the solution we have in place now is an attempt to band-aid the old (windows 3.1 developed) solution to work with .NET file/dependency structure.
In the case of App1, we have App1.exe, BizObjects.dll, DataAccess.dll, and CustomControls.dll.
We have the same set of dependencies for App2 due to BizObjects referencing DataAccess -- but this is defined manually. We don't have a system in place to identify the dependency tree.
Each of the dependencies for a "release" is a file and version id. And the same application could contain different versions of each file for a different workload.
*
*Where in the world have we gone wrong? Did we go wrong?
*How can we structure an svn source tree to accommodate the deployment requirements?
*
*or
*how can we restructure the code to better support a deployment strategy which makes sense for our setup?
We have an old and over-engineered solution to (it would seem) a relatively simple problem. Can anyone steer me/us in the right direction?
edit: I read this question and remembered we also have the same dev/test/prod areas which the code must move through.
A: Sounds like you're trying to do configuration control with a source code control system.
Subversion may not be the right choice, since it's really for source code (ASCII files) and build dependencies, not executable files (binary) and run-time dependencies.
My guess is you really need an installer:
http://en.wikipedia.org/wiki/List_of_installation_software
Or maybe just a script to launch the correct configuration from a network drive.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16969",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How do I reset an increment identity's starting value in SQL Server I would like to have a nice template for doing this in development. How do I reset an increment identity's starting value in SQL Server?
A: Just a word of warning with:
DBCC CHECKIDENT (MyTable, RESEED, 0)
If you did not truncate the table, and the identity column is the PK, you will get an error when reaching pre-existing identites.
For example, you have identities (3,4,5) in the table already. You then reset the identity column to 1. After the identity 2 is inserted, the next insert will try to use the identity 3, which will fail.
A: To set the identity to 100:
DBCC CHECKIDENT (MyTable, RESEED, 100)
A: DBCC CHECKIDENT('TableName', RESEED, 0)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16971",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "65"
} |
Q: What Ruby IDE do you prefer? I've been using Eclipse with RDT (not RadRails) a lot lately, and I'm quite happy with it, but I'm wondering if you guys know any decent alternatives. I know NetBeans also supports Ruby these days, but I'm not sure what it has to offer over Eclipse.
Please, list any features you think are brilliant or useful when suggesting an IDE, makes it easier to compare.
Also, I said Ruby, not Rails. While Rails support is a plus, I prefer things to not be Rails-centric. It should also be available on Linux and optionally Solaris.
A: RubyMine from JetBrains. (Also available as a plugin to IntelliJ IDEA)
A: NetBeans has some really solid Ruby support.
A: I have used Komodo and it's pretty good. I use TextMate now.
A: For very simple Linux support if you like TextMate, try just gedit loaded with the right plugins. Easy to set up and really customizable, I use it for just about everything. There's also a lot of talk about emacs plugins if you're already using that normally.
Gedit: How to set up like TextMate
A: In the last 3 months, I have tried RadRails, NetBeans and RubyMine and finally settled on RubyMine, not so much for features but for responsiveness and stability reasons.
In terms of features, RubyMine has slightly better code completion, debugging and code navigation, but only Ruby beginners (like myself) need them most. Relying on code completion and code navigation is anti-Ruby/Rails, as Ruby/Rails names are supposed to be natural and each line of code needs to be in its convention-determined location.
A: NetBeans is good because you can use it on Windows and Mac OS X.
A: Most IDEs present the project structure in a top down manner. This is great way to explore at a high level when joining an existing project. However, after working on the same project for more than a year, I realized that this approach can become counter-productive.
After Oracle declared the end of Ruby in NetBeans, I switched to Vim. By using a command line and an editor as the only tools, I was forced to mentally switch to a bottom-up perspective. To my amazement, I discovered that this made me more focused and productive. As a bonus, I got first class HAML and SASS syntax support.
I recommend Vim + Rails plugin for anyone that will work on a single project for an extended period of time.
A: While TextMate is not an IDE in the classical sense, try the following in terminal to be 'wowed'
cd 'your-shiny-ruby-project'
mate .
It'll spawn up TextMate and the project drawer will list the contents of your project. Pretty awesome if you ask me.
A: Aptana more or less is RadRails, or it's based on it. I've used it, and it's really good, but it does have some problems. For instance, it breaks the basic search dialog on my system (giving a raw Java exception to the end user), and it clutters the interface with ad-like notices and upgrade bars and news feeds and...
But all in all it's pretty good, especially its editors (ERB, HTML/XML, ...) are top notch.
A: Have you tried Aptana? It's based on Eclipse and they have a sweet Rails plugin.
A: Redcar has been getting some attention lately, as well. Still early in its life, but it shows promise.
A: On Mac OS X, TextMate is a godsend.
A: I prefer TextMate on OS X. But NetBeans (multi-platform) is coming along quite nicely. Plus it comes with its fully functional IDE debugger.
A: Textmate on osx
A: E Text Editor is great (TextMate compatible sort-of-clone for Windows).
A: I started out using gEdit (ubuntu user), but even with all the plugins and modifications (class/file browser, terminal, darkmate scheme, etc, etc) it still always seemed to come up short. I've also tried like hell to get Aptana RadRails and Studio to work, but none of them ever really seemed to sync up with my workflow. I've even tried using Eclipse, but again, it just didn't work for me.
RubyMine also seemed like it would be great, but I found it to be way too buggy, even after the upgrade to 3.0.
So far, my favorite Ruby editor is Komodo Edit. It's got syntax highlighting and can detect errors and recognize your code based on user-specified Ruby versions. Syntax highlighting schemes are easily customizable and easy on the eyes. There are some very nice plugins for git, it can have split-screen editors (love that feature), and a great file browser. I really wish Komodo had built-in terminal (multiple terminal) support, but everything else about it I've really come to love, and I haven't found anything better yet.
A: emacs with ruby-mode, rdebug and a ruby interactive inferior shell.
A: The latest Netbeans IDE (6.1) has a pretty solid Ruby support.
You can check it out here.
A: Once I found Geany (Ubuntu), I switched from TextMate (OSX) and never looked back.
Geany is a lean, clean, speedy program that can be used either as a text editor or a light-weight IDE. It supports not only text editing features (syntax highlighting, code folding, auto-completion, auto-closing, symbol lists, code navigation, directory tree, multi-tabbed open files etc.) but also normal IDE features such as simple project management and compile-build-run within the main window. Unlike TextMate, it has a terminal screen within its own window; you do not have to go back and forth between your editor window and terminal window. Unlike TextMate, it supports international languages. Unlike TextMate, it supports multiple platforms. Unlike TextMate, it is open-source and free. Geany is now my favorite C/Ruby/XML development tool.
A: RubyMine is so awesome. Everything just works. I could go on and on. Code completion is fast, smooth, and accurate. Formatting is instantaneous. Project navigation is easy and without struggle. You can pop open any file with a few keystrokes. You don't even need to keep the project tree open, but it's there if you want. You can configure just about any aspect of it to behave exactly how you want.
NetBeans, Eclipse, and RubyMine all have more or less the same set of features. However, RubyMine is just so much more cleanly designed and easy to use. There's nothing awkward or clunky about it. There are all these nice little design touches that show how JetBrains really put thought into it instead of just amassing a big pile of features.
Incidentally RubyMine can do a lot of the things that Vim can do like select and edit a column of text or split the view into several editing panels with different files in them.
A: +1 for TextMate on Mac OS X.
See also answers to this question. I recommend trying NetBeans if you're on Windows.
A: I'd recommend NetBeans 6.1 too. Very nice IDE and makes working with Ruby a pleasure.
A: I started out with RadRails then moved to Aptana when they took it over, wasn't too bad. Got a macbook and have been using Textmate, never going back.
A: Ruby in Steel: http://www.sapphiresteel.com/Products/Ruby-In-Steel/Ruby-In-Steel-Developer-Overview
A Visual Studio based Ruby IDE. Fast Debugger. Intellisense.
A: On Mac OS there is also XCode. http://developer.apple.com/tools/developonrailsleopard.html
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16991",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "141"
} |
Q: Reading "chunked" response with HttpWebResponse I'm having trouble reading a "chunked" response when using a StreamReader to read the stream returned by GetResponseStream() of a HttpWebResponse:
// response is an HttpWebResponse
StreamReader reader = new StreamReader(response.GetResponseStream());
string output = reader.ReadToEnd(); // throws exception...
When the reader.ReadToEnd() method is called I'm getting the following System.IO.IOException: Unable to read data from the transport connection: The connection was closed.
The above code works just fine when server returns a "non-chunked" response.
The only way I've been able to get it to work is to use HTTP/1.0 for the initial request (instead of HTTP/1.1, the default) but this seems like a lame work-around.
Any ideas?
@Chuck
Your solution works pretty well. It still throws the same IOException on the last Read(). But after inspecting the contents of the StringBuilder it looks like all the data has been received. So perhaps I just need to wrap the Read() in a try-catch and swallow the "error".
A: Haven't tried this with a "chunked" response, but would something like this work?
StringBuilder sb = new StringBuilder();
Byte[] buf = new byte[8192];
Stream resStream = response.GetResponseStream();
string tmpString = null;
int count = 0;
do
{
count = resStream.Read(buf, 0, buf.Length);
if(count != 0)
{
tmpString = Encoding.ASCII.GetString(buf, 0, count);
sb.Append(tmpString);
}
}while (count > 0);
A: I am working on a similar problem. The .NET HttpWebRequest and HttpWebResponse handle cookies and redirects automatically, but they do not handle chunked content on the response body automatically.
This is perhaps because chunked content may contain more than simple data (i.e.: chunk names, trailing headers).
Simply reading the stream and ignoring the EOF exception will not work, as the stream contains more than the desired content. The stream will contain chunks, and each chunk begins by declaring its size. If the stream is simply read from beginning to end, the final data will contain the chunk meta-data (and in the case where it is gzipped content, it will fail the CRC check when decompressing).
To solve the problem it is necessary to manually parse the stream, removing the chunk size from each chunk (as well as the CR LF delimiters), detecting the final chunk and keeping only the chunk data. There is likely a library out there somewhere that does this; I have not found it yet.
Useful resources:
http://en.wikipedia.org/wiki/Chunked_transfer_encoding
https://www.rfc-editor.org/rfc/rfc2616#section-3.6.1
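If you really do end up with raw chunk framing in your stream, a minimal hand-rolled decoder along the lines described above might look like this (only a sketch, assuming System.IO and System.Text: it expects well-formed sizes and ignores chunk extensions and trailing headers):
// Decode a raw chunked body from a stream into a single byte array.
static byte[] DecodeChunked(Stream stream)
{
    MemoryStream result = new MemoryStream();
    while (true)
    {
        string sizeLine = ReadLine(stream);              // e.g. "1a2f" or "1a2f;ext=value"
        int semicolon = sizeLine.IndexOf(';');
        if (semicolon >= 0) sizeLine = sizeLine.Substring(0, semicolon);
        int size = Convert.ToInt32(sizeLine.Trim(), 16); // chunk sizes are hexadecimal
        if (size == 0) break;                            // final zero-length chunk ends the body
        byte[] buffer = new byte[size];
        int read = 0;
        while (read < size)
        {
            int n = stream.Read(buffer, read, size - read);
            if (n == 0) throw new IOException("Unexpected end of stream");
            read += n;
        }
        result.Write(buffer, 0, size);
        ReadLine(stream);                                // consume the CR LF after the chunk data
    }
    return result.ToArray();
}

// Read one CR LF terminated line as ASCII text.
static string ReadLine(Stream stream)
{
    StringBuilder sb = new StringBuilder();
    int b;
    while ((b = stream.ReadByte()) != -1 && b != '\n')
    {
        if (b != '\r') sb.Append((char)b);
    }
    return sb.ToString();
}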
A: I've had the same problem (which is how I ended up here :-). Eventually tracked it down to the fact that the chunked stream wasn't valid - the final zero length chunk was missing. I came up with the following code which handles both valid and invalid chunked streams.
using (StreamReader sr = new StreamReader(response.GetResponseStream(), Encoding.UTF8))
{
StringBuilder sb = new StringBuilder();
try
{
while (!sr.EndOfStream)
{
sb.Append((char)sr.Read());
}
}
catch (System.IO.IOException)
{ }
string content = sb.ToString();
}
A: After trying a lot of snippets from StackOverflow and Google, ultimately I found this to work the best (assuming you know the data is a UTF8 string; if not, you can just keep the byte array and process it appropriately):
byte[] data;
var responseStream = response.GetResponseStream();
var reader = new StreamReader(responseStream, Encoding.UTF8);
data = Encoding.UTF8.GetBytes(reader.ReadToEnd());
return Encoding.Default.GetString(data.ToArray());
I found other variations work most of the time, but occasionally truncate the data. I got this snippet from:
https://social.msdn.microsoft.com/Forums/en-US/4f28d99d-9794-434b-8b78-7f9245c099c4/problems-with-httpwebrequest-and-transferencoding-chunked?forum=ncl
A: It is funny. While playing with the request header and removing "Accept-Encoding: gzip,deflate", the server in my use case answered in a plain ASCII manner and no longer with chunked, encoded snippets. Maybe you should give it a try and leave "Accept-Encoding: gzip,deflate" out. The idea came while reading the above-mentioned wiki in the topic about using compression.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16998",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: How do I parse and convert a DateTime to the RFC 3339 date-time format? How do I convert a DateTime structure to its equivalent RFC 3339 formatted string representation and/or parse this string representation back to a DateTime structure? The RFC-3339 date-time format is used in a number of specifications such as the Atom Syndication Format.
A: You don't need to write your own conversion code. Just use
XmlConvert.ToDateTime(string s, XmlDateTimeSerializationMode dateTimeOption)
to parse a RFC-3339 string, and
XmlConvert.ToString(DateTime value, XmlDateTimeSerializationMode dateTimeOption)
to convert a (UTC) datetime to a string.
Ref.
http://msdn.microsoft.com/en-us/library/ms162342(v=vs.110).aspx
http://msdn.microsoft.com/en-us/library/ms162344(v=vs.110).aspx
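For example, a minimal round trip might look like this (a sketch of my own, not taken from the linked documentation):
using System;
using System.Xml;
class Rfc3339Example
{
    static void Main()
    {
        // UTC DateTime -> RFC 3339 string, e.g. "2008-08-20T12:34:56.789Z"
        string text = XmlConvert.ToString(DateTime.UtcNow, XmlDateTimeSerializationMode.Utc);
        Console.WriteLine(text);
        // String -> DateTime, adjusted back to UTC
        DateTime parsed = XmlConvert.ToDateTime(text, XmlDateTimeSerializationMode.Utc);
        Console.WriteLine(parsed.Kind); // Utc
    }
}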
A: This is an implementation in C# of how to parse and convert a DateTime to and from its RFC-3339 representation. The only restriction it has is that the DateTime is in Coordinated Universal Time (UTC).
using System;
using System.Globalization;
namespace DateTimeConsoleApplication
{
/// <summary>
/// Provides methods for converting <see cref="DateTime"/> structures to and from the equivalent RFC 3339 string representation.
/// </summary>
public static class Rfc3339DateTime
{
//============================================================
// Private members
//============================================================
#region Private Members
/// <summary>
/// Private member to hold array of formats that RFC 3339 date-time representations conform to.
/// </summary>
private static string[] formats = new string[0];
/// <summary>
/// Private member to hold the DateTime format string for representing a DateTime in the RFC 3339 format.
/// </summary>
private const string format = "yyyy-MM-dd'T'HH:mm:ss.fffK";
#endregion
//============================================================
// Public Properties
//============================================================
#region Rfc3339DateTimeFormat
/// <summary>
/// Gets the custom format specifier that may be used to represent a <see cref="DateTime"/> in the RFC 3339 format.
/// </summary>
/// <value>A <i>DateTime format string</i> that may be used to represent a <see cref="DateTime"/> in the RFC 3339 format.</value>
/// <remarks>
/// <para>
/// This method returns a string representation of a <see cref="DateTime"/> that
/// is precise to the three most significant digits of the seconds fraction; that is, it represents
/// the milliseconds in a date and time value. The <see cref="Rfc3339DateTimeFormat"/> is a valid
/// date-time format string for use in the <see cref="DateTime.ToString(String, IFormatProvider)"/> method.
/// </para>
/// </remarks>
public static string Rfc3339DateTimeFormat
{
get
{
return format;
}
}
#endregion
#region Rfc3339DateTimePatterns
/// <summary>
/// Gets an array of the expected formats for RFC 3339 date-time string representations.
/// </summary>
/// <value>
/// An array of the expected formats for RFC 3339 date-time string representations
/// that may used in the <see cref="DateTime.TryParseExact(String, string[], IFormatProvider, DateTimeStyles, out DateTime)"/> method.
/// </value>
public static string[] Rfc3339DateTimePatterns
{
get
{
if (formats.Length > 0)
{
return formats;
}
else
{
formats = new string[11];
// Rfc3339DateTimePatterns
formats[0] = "yyyy'-'MM'-'dd'T'HH':'mm':'ss'.'fffffffK";
formats[1] = "yyyy'-'MM'-'dd'T'HH':'mm':'ss'.'ffffffK";
formats[2] = "yyyy'-'MM'-'dd'T'HH':'mm':'ss'.'fffffK";
formats[3] = "yyyy'-'MM'-'dd'T'HH':'mm':'ss'.'ffffK";
formats[4] = "yyyy'-'MM'-'dd'T'HH':'mm':'ss'.'fffK";
formats[5] = "yyyy'-'MM'-'dd'T'HH':'mm':'ss'.'ffK";
formats[6] = "yyyy'-'MM'-'dd'T'HH':'mm':'ss'.'fK";
formats[7] = "yyyy'-'MM'-'dd'T'HH':'mm':'ssK";
// Fall back patterns
formats[8] = "yyyy'-'MM'-'dd'T'HH':'mm':'ss'.'fffffffK"; // RoundtripDateTimePattern
formats[9] = DateTimeFormatInfo.InvariantInfo.UniversalSortableDateTimePattern;
formats[10] = DateTimeFormatInfo.InvariantInfo.SortableDateTimePattern;
return formats;
}
}
}
#endregion
//============================================================
// Public Methods
//============================================================
#region Parse(string s)
/// <summary>
/// Converts the specified string representation of a date and time to its <see cref="DateTime"/> equivalent.
/// </summary>
/// <param name="s">A string containing a date and time to convert.</param>
/// <returns>A <see cref="DateTime"/> equivalent to the date and time contained in <paramref name="s"/>.</returns>
/// <remarks>
/// The string <paramref name="s"/> is parsed using formatting information in the <see cref="DateTimeFormatInfo.InvariantInfo"/> object.
/// </remarks>
/// <exception cref="ArgumentNullException"><paramref name="s"/> is a <b>null</b> reference (Nothing in Visual Basic).</exception>
/// <exception cref="FormatException"><paramref name="s"/> does not contain a valid RFC 3339 string representation of a date and time.</exception>
public static DateTime Parse(string s)
{
//------------------------------------------------------------
// Validate parameter
//------------------------------------------------------------
if(s == null)
{
throw new ArgumentNullException("s");
}
DateTime result;
if (Rfc3339DateTime.TryParse(s, out result))
{
return result;
}
else
{
throw new FormatException(String.Format(null, "{0} is not a valid RFC 3339 string representation of a date and time.", s));
}
}
#endregion
#region ToString(DateTime utcDateTime)
/// <summary>
/// Converts the value of the specified <see cref="DateTime"/> object to its equivalent string representation.
/// </summary>
/// <param name="utcDateTime">The Coordinated Universal Time (UTC) <see cref="DateTime"/> to convert.</param>
/// <returns>A RFC 3339 string representation of the value of the <paramref name="utcDateTime"/>.</returns>
/// <remarks>
/// <para>
/// This method returns a string representation of the <paramref name="utcDateTime"/> that
/// is precise to the three most significant digits of the seconds fraction; that is, it represents
/// the milliseconds in a date and time value.
/// </para>
/// <para>
/// While it is possible to display higher precision fractions of a second component of a time value,
/// that value may not be meaningful. The precision of date and time values depends on the resolution
/// of the system clock. On Windows NT 3.5 and later, and Windows Vista operating systems, the clock's
/// resolution is approximately 10-15 milliseconds.
/// </para>
/// </remarks>
/// <exception cref="ArgumentException">The specified <paramref name="utcDateTime"/> object does not represent a <see cref="DateTimeKind.Utc">Coordinated Universal Time (UTC)</see> value.</exception>
public static string ToString(DateTime utcDateTime)
{
if (utcDateTime.Kind != DateTimeKind.Utc)
{
throw new ArgumentException("The specified DateTime must be a Coordinated Universal Time (UTC) value.", "utcDateTime");
}
return utcDateTime.ToString(Rfc3339DateTime.Rfc3339DateTimeFormat, DateTimeFormatInfo.InvariantInfo);
}
#endregion
#region TryParse(string s, out DateTime result)
/// <summary>
/// Converts the specified string representation of a date and time to its <see cref="DateTime"/> equivalent.
/// </summary>
/// <param name="s">A string containing a date and time to convert.</param>
/// <param name="result">
/// When this method returns, contains the <see cref="DateTime"/> value equivalent to the date and time
/// contained in <paramref name="s"/>, if the conversion succeeded,
/// or <see cref="DateTime.MinValue">MinValue</see> if the conversion failed.
/// The conversion fails if the s parameter is a <b>null</b> reference (Nothing in Visual Basic),
/// or does not contain a valid string representation of a date and time.
/// This parameter is passed uninitialized.
/// </param>
/// <returns><b>true</b> if the <paramref name="s"/> parameter was converted successfully; otherwise, <b>false</b>.</returns>
/// <remarks>
/// The string <paramref name="s"/> is parsed using formatting information in the <see cref="DateTimeFormatInfo.InvariantInfo"/> object.
/// </remarks>
public static bool TryParse(string s, out DateTime result)
{
//------------------------------------------------------------
// Attempt to convert string representation
//------------------------------------------------------------
bool wasConverted = false;
result = DateTime.MinValue;
if (!String.IsNullOrEmpty(s))
{
DateTime parseResult;
if (DateTime.TryParseExact(s, Rfc3339DateTime.Rfc3339DateTimePatterns, DateTimeFormatInfo.InvariantInfo, DateTimeStyles.AdjustToUniversal, out parseResult))
{
result = DateTime.SpecifyKind(parseResult, DateTimeKind.Utc);
wasConverted = true;
}
}
return wasConverted;
}
#endregion
}
}
A: This worked for me in .NET 6:
public static class DateTimeExtensions
{
public static string ToRFC3339(this DateTime date)
{
return date.ToUniversalTime().ToString("yyyy-MM-dd'T'HH:mm:ss.fffK");
}
}
A: System.Text.Json does that as well:
JsonSerializer.Serialize(DateTime.Now)
A: A simple equation will be able to obtain the result you are after:
rfcFormat = DateDiff("s", "1/1/1970", Now())
A: In .NET (assuming UTC):
datetime.ToString("yyyy-MM-dd'T'HH:mm:ssZ")
DateTime.Parse() can be used to convert back into a DateTime structure.
A: For completeness sake, Newtonsoft.Json will happily do it as well:
JsonConvert.SerializeObject(DateTime.Now);
(Unlike XmlConvert, it will have escaped double-quotes on each end.)
A: <input asp-for="StartDate" class="form-control" value="@DateTime.Now.ToString("yyyy-MM-ddTHH:mm:ss")" />
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17017",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "56"
} |
Q: What is the best way to partition terabyte drive in a linux development machine? I have a new 1 TB drive coming in tomorrow. What is the best way to divide this space for a development workstation?
The biggest problem I think I'm going to have is that some partitions (probably /usr) will become too small after a bit of use. Other partitions are probably too large. The swap partition, for example, is currently 2GB (2x 1GB RAM), but it is almost never used (only once that I know of).
A: If you partition your drive using LVM you won't have to worry about any individual partition running out of space in the future. Just move space around as necessary.
A: My standard strategy for normal "utility" boxes is to give them a swap partition twice the size of their RAM, a 1GB /boot partition and leave the rest as one vast partition. Whilst I see why some people want a separate /var, separate /home, etc., if I only have trusted users and I'm not running some production service, I don't think the reasons I've heard to date apply. Instead, I do my best to avoid any resizing, or any partition becoming too small - which is best achieved with one huge partition.
As for the size of swap and /boot - if your machine has 4GB memory, you may not want to have double that in swap. It's nonetheless wise to at least have some. Even if you do have double, you're using a total of 9GB, or 0.9% of your new drive. /boot can be smaller than 1GB; this is just my standard "will not become full, ever" size.
A: If you want a classic setup, I'd go for a 50GB "/" partition, for all your application goodness, and split the rest across users, or a full 950GB for a single user. Endless diskspace galore!
A: @wvdschel:
Don't create separate partitions for each user. Unused space on each partition is wasted.
Instead create one partition for all users. Use quota if necessary to limit each user's space. It's much more flexible than partitioning or LVM.
OTOH, one huge partition is usually a bit slower, depending on the file system.
A: I always setup LVM on Linux, and use the following layout to start with:
/ = 10GB
swap = 4GB
/boot = 100MB
/var = 5GB
/home = 10GB OR remainder of drive.
And then, later on if I need more space, I could simply increase /home, /var or / as needed. Since I work a lot with XEN Virtual Machines, I tend to leave the remaining space open so that I can quickly create LVM volumes for the XEN virtual machines.
A: Did you know 1TB can easily take up to half an hour to fsck? Workstations usually crash and reboot more often than servers, so that can get quite annoying. Do you really need all that space?
A: I would go with 1 GB for /boot, 100 GB for /, and the rest for /home. 1 GB is probably too high for /boot, but it's not like you'll miss it. 100 GB might seem like a lot for everything outside home, until you start messing around with databases and realize that MySQL keeps databases in /var. Best to leave some room to grow in that area. The reason that I recommend using a separate partition for /home is that when you want to completely switch distros, or if the upgrade option on your distro of choice for whatever reason doesn't work, or if you just want to start from scratch and do a clean system install, you can just format / and /boot, and leave home with all the user data intact.
A: I would have two partitions. A small one (~20 GB) mounted on / would store all your programs, and then have a large one on /home. Many people have mentioned a partition for /boot but that is not really necessary. If you are worried about resizing, use LVM.
A: I give 40 GB to /, then however much RAM I have I give the same amount to swap, and then the rest to /home.
A: Please tell me, what are you doing to /boot that you need more than 64MB on it? Unless you never intend to clean it, anything more is a waste of space. Kernel image + initrd + System.map won't take more than 10MB (probably less - mine weighs 5MB) and you really don't need to keep more than two spares.
And with the current prices of RAM - if you need swap, you'll be much better off buying more memory. Reserve 1GB for swap and have something monitoring its usage (no swap at all is a bad idea because the machine might lock up when it runs out of free memory).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17020",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Should I *always* favour implictly typed local variables in C# 3.0? Resharper certainly thinks so, and out of the box it will nag you to convert
Dooberry dooberry = new Dooberry();
to
var dooberry = new Dooberry();
Is that really considered the best style?
A: @jongalloway - var doesn't necessarily make your code less readable.
var myvariable = DateTime.Now;
DateTime myvariable = DateTime.Now;
The first is just as readable as the second and requires less work
var myvariable = ResultFromMethod();
here, you have a point, var could make the code less readable. I like var because if I change a decimal to a double, I don't have to go change it in a bunch of places (and don't say refactor, sometimes I forget, just let me var!)
EDIT: just read the article, I agree. lol.
A: It's of course a matter of style, but I agree with Dare: C# 3.0 Implicit Type Declarations: To var or not to var?. I think using var instead of an explicit type makes your code less readable. In the following code:
var result = GetUserID();
What is result? An int, a string, a GUID? Yes, it matters, and no, I shouldn't have to dig through the code to know. It's especially annoying in code samples.
Jeff wrote a post on this, saying he favors var. But that guy's crazy!
I'm seeing a pattern for stackoverflow success: dig up old CodingHorror posts and (Jeopardy style) phrase them in terms of a question.
A: I have a feeling this will be one of the most popular questions asked over time on Stack Overflow. It boils down to preference. Whatever you think is more readable. I prefer var when the type is defined on the right side because it is terser. When I'm assigning a variable from a method call, I use the explicit type declaration.
A: There was a good discussion on this @ Coding Horror
Personally I try to keep its use to a minimum, I have found it hurts readability especially when assigning a variable from a method call.
A: I use it only when it's clearly obvious what var is.
clear to me:
XmlNodeList itemList = rssNode.SelectNodes("item");
var rssItems = new RssItem[itemList.Count];
not clear to me:
var itemList = rssNode.SelectNodes("item");
var rssItems = new RssItem[itemList.Count];
A: The best summary of the answer I've seen to this is Eric Lippert's comment, which essentially says you should use the concrete type if it's important what the type is, but not to otherwise. Essentially type information should be reserved for places where the type is important.
The standard at my company is to use var everywhere, which we came to after reading various recommendations and then spending some time trying it out to see whether the lack of annotated type information was a help or a hindrance. We felt it was a help.
Most of the recommendations people have linked to (e.g. Dare's one) are recommendations made by people who have never tried coding using var instead of the concrete type. This makes the recommendations all but worthless because they aren't speaking from experience, they're merely extrapolating.
The best advice I can give you is to try it for yourself, and see what works for you and your team.
A: One of the advantages of a tool like ReSharper is that you can write the code however you like and have it reformat to something more maintainable afterward. I have R# set to always reformat such that the actual type in use is visible, however, when writing code I nearly always type 'var'.
Good tools let you have the best of both worlds.
John.
A: It only makes sense when you don't know the type in advance.
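For instance, anonymous types have no name you could write out, so an implicitly typed local is the only option (a small sketch of my own):
// The object initializer below produces an anonymous type, so var is required here.
var person = new { First = "Phillip", Last = "Fry" };
Console.WriteLine(person.First);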
A: In C# 9.0 there is a new way to initialize a class by Target-typed new expressions.
You can initialize the class like this:
Dooberry dooberry = new();
Personally, I like it more than using a var and it is more readable for me.
Regarding calling a method I think it is up to you. Personally, I prefer to specify the type because I think it is more readable this way:
Dooberry dooberry = GetDooberry();
In some cases, it is very clear what the type is, in this case, I use var:
var now = DateTime.Now;
A: "Best style" is subjective and varies depending on context.
Sometimes it is way easier to use 'var' instead of typing out some hugely long class name, or if you're unsure of the return type of a given function. I find I use 'var' more when mucking about with Linq, or in for loop declarations.
Other times, using the full class name is more helpful as it documents the code better than 'var' does.
I feel that it's up to the developer to make the decision. There is no silver bullet. No "one true way".
Cheers!
A: No, not always, but I would go as far as to say a lot of the time. Type declarations aren't much more useful than Hungarian notation ever was. You still have the same problem that types are subject to change, and as much as refactoring tools are helpful for that, it's not ideal compared to not having to change where a type is specified except in a single place, which follows the Don't Repeat Yourself principle.
Any single-line statement where a type's name can be specified for both a variable and its value should definitely use var, especially when it's a long Generic<OtherGeneric<T, U, V>, Dictionary<X, Y>>
A: There's a really good MSDN article on this topic and it outlines some cases where you can't use var:
The following restrictions apply to implicitly-typed variable declarations:
*
*var can only be used when a local variable is declared and initialized in the same statement; the variable cannot be initialized to null, or to a method group or an anonymous function.
*var cannot be used on fields at class scope.
*Variables declared by using var cannot be used in the initialization expression. In other words, this expression is legal: int i = (i = 20); but this expression produces a compile-time error: var i = (i = 20);
*Multiple implicitly-typed variables cannot be initialized in the same statement.
*If a type named var is in scope, then the var keyword will resolve to that type name and will not be treated as part of an implicitly typed local variable declaration.
I would recommend checking it out to understand the full implications of using var in your code.
A:
I'm seeing a pattern for stackoverflow success: dig up old CodingHorror posts and (Jeopardy style) phrase them in terms of a question.
I plead innocent! But you're right, this seemed to be a relatively popular little question.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17032",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29"
} |
Q: Sockets in Pascal How do you use network sockets in Pascal?
A: Here's an example taken from http://www.bastisoft.de/programmierung/pascal/pasinet.html
program daytime;
{ Simple client program }
uses
sockets, inetaux, myerror;
const
RemotePort : Word = 13;
var
Sock : LongInt;
sAddr : TInetSockAddr;
sin, sout : Text;
Line : String;
begin
if ParamCount = 0 then GenError('Supply IP address as parameter.');
with sAddr do
begin
Family := af_inet;
Port := htons(RemotePort);
Addr := StrToAddr(ParamStr(1));
if Addr = 0 then GenError('Not a valid IP address.');
end;
Sock := Socket(af_inet, sock_stream, 0);
if Sock = -1 then SockError('Socket: ');
if not Connect(Sock, sAddr, sizeof(sAddr)) then SockError('Connect: ');
Sock2Text(Sock, sin, sout);
Reset(sin);
Rewrite(sout);
while not eof(sin) do
begin
Readln(sin, Line);
Writeln(Line);
end;
Close(sin);
Close(sout);
Shutdown(Sock, 2);
end.
A: If you're using FPC or Lazarus (which is basically a RAD IDE for FPC and a clone of Delphi), you could use the Synapse socket library. It's amazing.
A: If you are using Delphi, I highly recommend Indy sockets, a set of classes for easy manipulation of sockets and many other internet protocols (HTTP, FTP, NTP, POP3 etc.)
A: You cannot use OpenSSL with Indy version 10.5 that ships with Delphi 2007. You have to download version 10.6 from http://www.indyproject.org/ and install it into the IDE.
Note that other packages might use Indy, like RemObjects, and therefore they have to be re-compiled too and this can be tricky due to cross-references.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17054",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: SQL Server Full Text Searching I'm currently working on an application where we have a SQL-Server database and I need to get a full text search working that allows us to search people's names.
Currently the user can enter a value into a name field that searches 3 different varchar columns: First, Last, and Middle names.
So say I have 3 rows with the following info.
1 - Phillip - J - Fry
2 - Amy - NULL - Wong
3 - Leo - NULL - Wong
If the user enters a name such as 'Fry' it will return row 1. However, if they enter Phillip Fry, or Fr, or Phil, they get nothing, and I don't understand why it's doing this. If they search for Wong they get rows 2 and 3; if they search for Amy Wong they again get nothing.
Currently the query is using CONTAINSTABLE but I have switched that with FREETEXTTABLE, CONTAINS, and FREETEXT without any noticeable differences in the results. The table methods are preferred because they return the same results but with ranking.
Here is the query.
....
@Name nvarchar(100),
....
--""s added to prevent crash if searching on more then one word.
DECLARE @SearchString varchar(100)
SET @SearchString = '"'+@Name+'"'
SELECT Per.Lastname, Per.Firstname, Per.MiddleName
FROM Person as Per
INNER JOIN CONTAINSTABLE(Person, (LastName, Firstname, MiddleName), @SearchString)
AS KEYTBL
ON Per.Person_ID = KEYTBL.[KEY]
WHERE KEYTBL.RANK > 2
ORDER BY KEYTBL.RANK DESC;
....
Any ideas why this full text search is not working correctly?
A: If you're just searching people's names, it might be in your best interest to not even use the full text index. Full text index makes sense when you have large text fields, but if you're mostly dealing with one word per field, I'm not sure how much extra you would get out of full text indexes. Waiting for the full text index to reindex itself before you can search for new records can be one of the many problems.
You could just make a query such as the following. Split your searchstring on spaces, and create a list of the search terms.
Select FirstName,MiddleName,LastName
From person
WHERE
Firstname like @searchterm1 + '%'
or MiddleName like @searchterm1 + '%'
or LastName like @searchterm1 + '%'
or Firstname like @searchterm2 + '%'
etc....
A: FreeTextTable should work.
INNER JOIN FREETEXTTABLE(Person, (LastName, Firstname, MiddleName), @SearchString)
@SearchString should contain the values like 'Phillip Fry' (one long string containing all of the lookup strings separated by spaces).
If you would like to search for Fr or Phil, you should use asterisk: Phil* and Fr*
'Phil' is looking for exactly the word 'Phil'. 'Phil*' is looking for every word which is starting with 'Phil'
A: Thanks for the responses, guys; I finally was able to get it to work, with parts of both Biri's and Kibbee's answers. I needed to add * to the string and break it up on spaces in order for it to work. So in the end I got
....
@Name nvarchar(100),
....
--""s added to prevent crash if searching on more then one word.
DECLARE @SearchString varchar(100)
--Added this line
SET @SearchString = REPLACE(@Name, ' ', '*" OR "*')
SET @SearchString = '"*'+@SearchString+'*"'
SELECT Per.Lastname, Per.Firstname, Per.MiddleName
FROM Person as Per
INNER JOIN CONTAINSTABLE(Person, (LastName, Firstname, MiddleName), @SearchString)
AS KEYTBL
ON Per.Person_ID = KEYTBL.[KEY]
WHERE KEYTBL.RANK > 2
ORDER BY KEYTBL.RANK DESC;
....
There are more fields being searched upon; I just simplified it for the question, sorry about that. I didn't think it would affect the answer. It actually searches a column that has a CSV of nicknames and a notes column as well.
Thanks for the help.
A: Another approach could be to abstract the searching away from the individual fields.
In other words create a view on your data which turns all the split fields like firstname lastname into concatenated fields i.e. full_name
Then search on the view. This would likely make the search query simpler.
A: You might want to check out Lucene.net as an alternative to Full Text.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17056",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: What is the best way to gzip and cache static images in Asp.net mvc I'm trying to find the best way to speed up the delivery of the static images that compose the design of an mvc site. The images are not gzipped, nor cached in the server or on the client (with content expire). Options are:
*
*Find why images are not cached and gzipped directly from IIS6
*Write a specialized http handler
*Register a special route for static images and write a binary ActionResult method
What could be the best solution in terms of performance?
A: Best solution is to let IIS do it.
IIS6 Compression - most likely you need to specify file types to be compressed like .jpg, .png, .gif types, etc.
Caching will come from making sure that the correct headers are being sent to the client from code, and i believe there is a setting you can set in IIS that enables it for static content, but i'm not sure on that one.
A: Surely the gain from gzipping most images is negligible since they're already compressed?
Maybe you have some really badly compressed PNG files or something?
You might want to check out yahoo's performance advice site which includes some useful tips on optimizing images including links to utilities such as pngcrush.
It's much better to use an image optimizing utility ONCE than to rely on IIS to compress them (possibly inefficiently) on the fly.
A: There's a nice library up on the MSDN Code Gallery that does this. It's called FastMVC.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17057",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Escaping XML tag contents I have a simple CAML query like
<Where><Eq><Field="FieldName"><Value Type="Text">Value text</Value></Field></Eq></Where>
And I have a variable to substitute for Value text. What's the best way to validate/escape the text that is substituted here in the .NET framework?
I've done a quick web search on this problem, but all I found was the System.Xml.Convert class, and this seems to be not quite what I need here.
I know I could have gone with an XmlWriter here, but it seems like a lot of code for such a simple task where I just need to make sure that the Value text part is formatted well.
A: When working with XML, always use the XML API that works with your programming environment. Don't try to roll your own XML document building and escaping code. As Longhorn213 mentioned, in .Net all the appropriate stuff is in the System.XML namespace. Trying to to write your own code for writing XML documents will just result in many bugs and troubles down the line.
A: The problem with the System.Xml approach in my case was that it required too much code to build this simple XML fragment. I think I've found a compromise.
XmlDocument doc = new XmlDocument();
doc.InnerXml = @"<Where><Eq><Field Name=""FieldName""><Value Type=""Text""></Value></Field></Eq></Where>";
XmlNode valueNode = doc.SelectSingleNode("Where/Eq/Field/Value");
valueNode.InnerText = @"Text <>!$% value>";
A: Use this:
System.Security.SecurityElement.Escape("<unescaped text>");
A: use System.Xml.Linq.XElement and SetValue method. This will format the text (assuming a string), but also allows you to set xml as the value.
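A small sketch of that approach (my own illustration, assuming a reference to System.Xml.Linq):
var valueElement = new XElement("Value", new XAttribute("Type", "Text"));
valueElement.SetValue(@"Text <>&"" value"); // <, > and & in the text content are escaped for you
Console.WriteLine(valueElement); // <Value Type="Text">Text &lt;&gt;&amp;" value</Value>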
A: I am not sure what context the xml is coming from, but if it is stored in a string const variable that you created, then the easiest way to modify it would be:
public class Example
{
private const string CAMLQUERY = "<Where><Eq><Field=\"FieldName\"><Value Type=\"Text\">{0}</Value></Field></Eq></Where>";
public string PrepareCamlQuery(string textValue)
{
return String.Format(CAMLQUERY, textValue);
}
}
Of course, this is the easiest approach based on the question. You could also store the xml in an xml file and read it in and manipulate it that way, like what Darren Kopp answered. That also requires C# 3.0 and I am not sure what .Net Framework you are targeting. If you aren't targeting .Net 3.5 and you want to manipulate the Xml, I recommend just using XPath with C#. This reference goes into detail on using XPath with C# to manipulate XML, rather than me typing it all out.
A: You can use the System.XML namespace to do it. Of course you can also use LINQ. But I choose the .NET 2.0 approach because I am not sure which version of .NET you are using.
XmlDocument doc = new XmlDocument();
// Create the Where Node
XmlNode whereNode = doc.CreateNode(XmlNodeType.Element, "Where", string.Empty);
XmlNode eqNode = doc.CreateNode(XmlNodeType.Element, "Eq", string.Empty);
XmlNode fieldNode = doc.CreateNode(XmlNodeType.Element, "Field", string.Empty);
XmlAttribute newAttribute = doc.CreateAttribute("FieldName");
newAttribute.InnerText = "Name";
fieldNode.Attributes.Append(newAttribute);
XmlNode valueNode = doc.CreateNode(XmlNodeType.Element, "Value", string.Empty);
XmlAttribute valueAtt = doc.CreateAttribute("Type");
valueAtt.InnerText = "Text";
valueNode.Attributes.Append(valueAtt);
// Can set the text of the Node to anything.
valueNode.InnerText = "Value Text";
// Or you can use
//valueNode.InnerXml = "<aValid>SomeStuff</aValid>";
// Create the document
fieldNode.AppendChild(valueNode);
eqNode.AppendChild(fieldNode);
whereNode.AppendChild(eqNode);
doc.AppendChild(whereNode);
// Or you can use XQuery to Find the node and then change it
// Find the Where Node
XmlNode foundWhereNode = doc.SelectSingleNode("Where/Eq/Field/Value");
if (foundWhereNode != null)
{
// Now you can set the Value
foundWhereNode.InnerText = "Some Value Text";
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17085",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What's the best way to authenticate over WCF? What's the best way to implement authentication over WCF?
I'd prefer to not use WS-* as it needs to be transport independent.
Should I "roll my own"? Is there any guidance for doing that (articles/blog posts)?
Or is there some way to (and should I) use the built in ASP.NET Membership and Profile providers on the server side?
A: Message based authentication, which is WS-Security based, is what you're looking for and is definitely supported by basicHttpBinding and netTcpBinding. I think you are making the mistaken assumption that only WsHttpBinding will support WS-Security, which is inaccurate.
The WS bindings are for WS-* elements other than WS-Security, such as WS-ReliableMessaging. Setting up transport independent message security is still going to be tricky, if you want it to stay secure. For the transports that aren't duplex you'll need to have at least one certificate exchanged in advance.
That might be the other reason you believe message security isn't supported by basicHttpBinding. basicHttpBinding will not allow you to use UserName authentication without transport security (for good reason too I'll add). And since transport security is inherently transport dependent I'm guessing you're trying to avoid it.
So anyhow, if you want to be fully transport independent the first thing you need to tackle is getting the certificates in order and figuring out how you're going to distribute the first (root) certificate(s), or securely exchange certificates. If you have the luxury of an application where you can distribute a master certificate, then take that route. If you're in a more complex scenario than that, you need to step back and think about how hard this problem really is.
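To illustrate the basicHttpBinding point above with a rough sketch (my own, not part of the original answer), username message credentials are only allowed on top of transport security:
using System.ServiceModel;
// UserName credentials carried in the SOAP message, protected by HTTPS on the transport.
var binding = new BasicHttpBinding(BasicHttpSecurityMode.TransportWithMessageCredential);
binding.Security.Message.ClientCredentialType = BasicHttpMessageCredentialType.UserName;
// Pure message security (certificate-based) is also available via BasicHttpSecurityMode.Message.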
A: Why should WS-* be transport dependent?
The whole point of the WS-* specifications is that they are part of the message, and hence transport independent.
A: WS-* is transport independant. That's the entire point.
Authentication really depends on who the desired consumers of your service are. Don't weigh down internal services with security that isn't required, and likewise it's useful to add extra layers of security if you need to know specific things about third party users.
For external APIs we've gone with WS-* authentication using certificates and then a simple authentication mechanism (username and password is supplied, GUID authentication token is returned, token is supplied with all requests after the fact).
A: Thanks for your answers.
I did not mean transport dependent, my mistake. I meant that I'd like the consumer to be able to choose which endpoint to bind to. And since basicHttpBinding and netTcpBinding, amongst others, don't support WS-*, I need to use something at the service level.
Davids simple authentication is what I've been trying to avoid. Ideally I'd like a way to accomplish the same thing without having to add a token argument to all my operation contracts.
A: If you are exposing an external service that requires user level authentication / authorization, I would recommend using the ASP.NET provider.
There's a useful utility here that allows remote administration of the ASP.NET provider. The ASP.NET solution does require SQL...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17098",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How to generate sample XML documents from their DTD or XSD? We are developing an application that involves a substantial amount of XML transformations. We do not have any proper input test data per se, only DTD or XSD files. We'd like to generate our test data ourselves from these files. Is there an easy/free way to do that?
Edit
There are apparently no free tools for this, and I agree that OxygenXML is one of the best tools for this.
A: Seems like nobody was able to answer the question so far :)
I use EclipseLink's MOXy to dynamically generate binding classes and then recursively go through the bound types. It is somewhat heavy, but it allows XPath value injection once the object tree is instantiated:
InputStream in = new FileInputStream(PATH_TO_XSD);
DynamicJAXBContext jaxbContext =
DynamicJAXBContextFactory.createContextFromXSD(in, null, Thread.currentThread().getContextClassLoader(), null);
DynamicType rootType = jaxbContext.getDynamicType(YOUR_ROOT_TYPE);
DynamicEntity root = rootType.newDynamicEntity();
traverseProps(jaxbContext, root, rootType, 0);
TraverseProps is a pretty simple recursive method:
private void traverseProps(DynamicJAXBContext c, DynamicEntity e, DynamicType t, int level) throws DynamicException, InstantiationException, IllegalAccessException{
if (t!=null) {
logger.info(indent(level) + "type [" + t.getName() + "] of class [" + t.getClassName() + "] has " + t.getNumberOfProperties() + " props");
for (String pName:t.getPropertiesNames()){
Class<?> clazz = t.getPropertyType(pName);
logger.info(indent(level) + "prop [" + pName + "] in type: " + clazz);
//logger.info("prop [" + pName + "] in entity: " + e.get(pName));
if (clazz==null){
// need to create an instance of object
String updatedClassName = pName.substring(0, 1).toUpperCase() + pName.substring(1);
logger.info(indent(level) + "Creating new type instance for " + pName + " using following class name: " + updatedClassName );
DynamicType child = c.getDynamicType("generated." + updatedClassName);
DynamicEntity childEntity = child.newDynamicEntity();
e.set(pName, childEntity);
traverseProps(c, childEntity, child, level+1);
} else {
// just set empty value
e.set(pName, clazz.newInstance());
}
}
} else {
logger.warn("type is null");
}
}
Converting everything to XML is pretty easy:
Marshaller marshaller = jaxbContext.createMarshaller();
marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
marshaller.marshal(root, System.out);
A: You can also use XMLPad (free to use) found here http://www.wmhelp.com
to generate your xml samples.
From the menu : XSD -> generate sample XML file.
A: Microsoft has published a "document generator" tool as a sample. This is an article that describes the architecture and operation of the sample app in some detail.
If you just want to run the sample generation tool, click here and install the MSI.
It's free. The source is available. Requires the .NET Framework to run. Works only with XSDs. (not Relax NG or DTD).
A: XML-XIG: XML Instance Generator
http://xml-xig.sourceforge.net/
This opensource would be helpful.
A: Microsoft Office has 'InfoPath', which takes an XSD as an import and lets you quickly and easily define a form-based editor for creating XML files. It has two modes - one where you define the form, and another mode where you create the XML file by filling out the form. I believe it first came with Office 2003, and most people never install it. It shocks me at how much I like it.
A: For Intellij Idea users:
Have a look at Tools -> XML Actions
Seems to work very well (as far as I have tested).
Edit:
As mentioned by @naXa, you can now also right-click on the XSD file and click "Generate XML Document from XSD Schema..."
A: I think Oxygen (http://www.oxygenxml.com/) does it as well, but that's another commercial product. It's a nice one, though... I'd strongly recommend it for anyone doing a lot of XML work. It comes in a nice Eclipse plugin, too.
I do believe there is a free, fully-featured 30 day trial.
A: In Visual Studio 2008 SP1 and later the XML Schema Explorer can create an XML document with some basic sample data:
*
*Open your XSD document
*Switch to XML Schema Explorer
*Right click the root node and choose "Generate Sample Xml"
A: In recent versions of the free and open source Eclipse IDE you can generate XML documents from DTD and XSD files. Right-click on a given *.dtd or *.xsd file and select "Generate -> XML File...". You can choose which root element to generate and whether optional attributes and elements should be generated.
Of course you can use Eclipse to create and edit your DTD and XSD schema files, too. And you don't need to install any plugins. It is included in the standard distribution.
A: The camprocessor available on Sourceforge.net will do xml test case generation for any XSD. There is a tutorial available to show you how to generate your own test examples - including using content hints to ensure realistic examples, not just random junk ones.
The tutorial is available here:
http://www.oasis-open.org/committees/download.php/29661/XSD%20and%20jCAM%20tutorial.pdf
And more information on the tool - which is using the OASIS Content Assembly Mechanism (CAM) standard to refactor your XSD into a more XSLT friendly structure - can be found from the resource website - http://www.jcam.org.uk
Enjoy, DW
A: XMLSpy does that for you, although that's not free...
I believe that Liquid Xml Studio does it for you and is free, but I have not personally used it to create test data.
A: You can use the XML Instance Generator which is part of the Sun/Oracle Multi-Schema Validator.
Its README.txt states:
Sun XML Generator is a Java tool to generate various XML instances from
several kinds of schemas. It supports DTD, RELAX Namespace, RELAX Core,
TREX, and a subset of W3C XML Schema Part 1. [...]
This is a command-line tool that can generate both valid and invalid
instances from schemas. It can be used for generating test cases for XML
applications that need to conform to a particular schema.
Download and unpack xmlgen.zip from the msv download page and run the following command to get detailed usage instructions:
java -jar xmlgen.jar -help
The tool appears to be released under a BSD license; the source code is accessible from here
A: XMLBlueprint 7.5 can do the following:
- generate sample xml from dtd
- generate sample xml from relax ng schema
- generate sample xml from xml schema
A: Liquid XML Studio has an XML Sample Generator wizard which will build sample XML files from an XML Schema. The resulting data seems to comply with the schema (it just can't generate data for regex patterns).
A: The open source Version of SoapUI can generate SOAP requests from WSDL (which contains XSD type definitions), so it looks like there IS an open source implementation of this functionality. Unfortunately, I haven't figured out which library is used to to this.
A: The OpenXSD library mentions that they have support for generating XML instances based on the XSD. Check that out.
A: For completeness I'll add http://code.google.com/p/jlibs/wiki/XSInstance, which was mentioned in a similar (but Java-specific) question: Any Java "API" to generate Sample XML from XSD?
A: XML Blueprint also does that; instructions here
http://www.xmlblueprint.com/help/html/topic_170.htm
It's not free, but there's a 10-day free trial; it seems fast and efficient; unfortunately it's Windows only.
A: There's also http://xsd2xml.com/, an online XSD to XML generator
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17106",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "198"
} |
Q: Where to start with CruiseControl.NET I'm setting up my team's source control + build/integration process from scratch. We were using VSS and a tool we created in-house to perform the build process. We decided to move on to a more modern environment. I have the SVN running, and I would like to set a continuous integration process with CruiseControl.NET.
Is there a good step-by-step starter guide that will get me started with the best practices of this tool?
A: Before leveraging CruiseControl to its fullest extent, you need to create an automated build script that can be run by msbuild or nant. After you get your project building in one step, you can start integrating CruiseControl into the mix. Here are some resources to help get you started:
*
*CruiseControl.net Wiki - A very good resource.
*CruiseControl.net SourceControl Block - Shows how to use svn with CruiseControl.net with the sourcecontrol block
*Getting CruiseControl.net, MsBuild, and SVN setup - A resource stepping you through the steps to get everything meshing together.
A: Here are some links that might be useful:
*
*http://www.codeproject.com/KB/dotnet/cruisecontrol_continuous.aspx
*http://devlicio.us/blogs/ziemowit_skowronski/archive/2007/03/10/continuous-integration-1-the-environment-and-the-first-build.aspx
*http://code.google.com/p/ci-factory/
A: An excellent resource I've found for CI recently is by Martin Fowler, author of the famous "Enterprise Application Architecture" book.
URL: http://martinfowler.com/articles/continuousIntegration.html
A: One tip we have learned - if you have a reasonably large team and the product you're referring to is some "push to QA so people can test" type of scenario, resist the urge to have it build every single time someone checks something in. It will likely take down QA for some amount of time and cause QA to be disrupted a lot before you figure out that people are checking stuff in all day long.
For a "push to QA" scenario, just have it go off every evening if it detects changes.
For a "see if it builds" scenario, once every hour is good (again, people check in stuff way too often on a decent sized team to make instant builds worthwhile)
A: If you're looking for a .NET CI server, could I suggest you have a look at TeamCity? I think it's better, and it is free for up to 20 users.
A: Really, the documentation is pretty solid
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17108",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: C++ std::tr2 for VS2005 Is Boost the only way for VS2005 users experience TR2? Also is there a idiot proof way of downloading only the TR2 related packages?
I was looking at the boost installer provided by BoostPro Consulting. If I select the options for all the threading options with all the packages for MSVC8 it requires 1.1GB. While I am not short of space, it seems ridiculous that a library needs over a gigabyte of space and it takes BPC a long time to catch up with the current release.
What packages do I need? I'm really only interested in those that comprise std::tr2 and can find that out by comparing those on offer to those in the TR2 report and selecting those from the list, but even then it isn't clear what is needed, and the fact that it is a version behind annoys me.
I know from previous encounters with Boost (1.33.1) that self-compiling is a miserable experience: a lot of time wasted getting it started and then a horde of errors passes across your screen faster than you can read, so what you are left with is an uneasy feeling that something is broken but you don't quite know what.
I've never had these problems with any Apache library but that is another rant...
A: I believe you're actually referring to TR1, rather than TR2. The call for proposals for TR2 is open, but don't expect to see much movement until the new C++ standard is out. Also, although boost is a provider of an implementation of TR1, dinkumware and the GNU FSF are other providers - on VC2005 boost is probably the easiest way to access this functionality.
The libraries from boost which are likely to be of most importance are
*
*reference
*smart pointer
*bind
*type traits
*array
*regular expressions
The documentation for building boost has been gradually improving for the last few releases, the current getting started guide is quite detailed. smart pointer and bind, should work from header files, and IMO, these are the most useful elements of TR1.
A: Part of the beauty of Boost is that all code is in header files. They have to be, for template reasons. So probably downloading the code and including it in your project will work. There are some libraries in Boost that do need compiling, but as long as you don't need those...
A: The libraries I am most interested in from TR1/TR2 are threads and the related atomics.
A: Compiling the boost libraries for yourself is actually quite simple, if not that well documented. The documentation is in the jamroot file. Run bjam --help in the boost root directory for a detailed list of options. As an example I used the following command line to build my current set up with boost 1.36.0:
bjam --build-type=complete --toolset=msvc --build-dir=c:\boost\build install
It ran for about a half hour on my machine and put the resulting files into c:\boost
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17117",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: What are real life applications of yield? I know what yield does, and I've seen a few examples, but I can't think of real life applications, have you used it to solve some specific problem?
(Ideally some problem that cannot be solved some other way)
A: Actually, I use it in a non-traditional way on my site IdeaPipe:
public override IEnumerator<T> GetEnumerator()
{
// goes through the collection and only returns the ones that are visible for the current user
// this is done at this level instead of the display level so that ideas do not bleed through
// on services
foreach (T idea in InternalCollection)
if (idea.IsViewingAuthorized)
yield return idea;
}
So basically it checks if viewing the idea is currently authorized, and if it is, it returns the idea. If it isn't, it is just skipped. This allows me to cache the ideas but still display them only to the users that are authorized. Otherwise I would have to re-pull them each time based on permissions, when they are only re-ranked every hour.
A: One interesting use is as a mechanism for asynchronous programming, especially for tasks that take multiple steps and require the same set of data in each step. Two examples of this would be Jeffrey Richter's AsyncEnumerator Part 1 and Part 2. The Concurrency and Coordination Runtime (CCR) also makes use of this technique: CCR Iterators.
A: I realise this is an old question (pre Jon Skeet?) but I have been considering this question myself just lately. Unfortunately the current answers here (in my opinion) don't mention the most obvious advantage of the yield statement.
The biggest benefit of the yield statement is that it allows you to iterate over very large lists with much more efficient memory usage then using say a standard list.
For example, let's say you have a database query that returns 1 million rows. You could retrieve all rows using a DataReader and store them in a List, therefore requiring list_size * row_size bytes of memory.
Or you could use the yield statement to create an Iterator and only ever store one row in memory at a time. In effect this gives you the ability to provide a "streaming" capability over large sets of data.
Moreover, in the code that uses the Iterator, you use a simple foreach loop and can decide to break out from the loop as required. If you do break early, you have not forced the retrieval of the entire set of data when you only needed the first 5 rows (for example).
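To sketch that streaming idea (my own illustration; the table and column names are made up):
using System.Collections.Generic;
using System.Data.SqlClient;
public IEnumerable<string> GetNames(SqlConnection connection)
{
    using (var command = new SqlCommand("SELECT Name FROM Person", connection))
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // Only the current row is held in memory; if the caller stops
            // enumerating early, no further rows are materialized.
            yield return reader.GetString(0);
        }
    }
}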
Regarding:
Ideally some problem that cannot be solved some other way
The yield statement does not give you anything you could not do using your own custom iterator implementation, but it saves you from writing the often complex code needed. There are very few problems (if any) that can't be solved in more than one way.
Here are a couple of more recent questions and answers that provide more detail:
Yield keyword value added?
Is yield useful outside of LINQ?
A: LINQ's operators on the Enumerable class are implemented as iterators that are created with the yield statement. It allows you to chain operations like Select() and Where() without actually enumerating anything until you actually use the enumerator in a loop, typically by using the foreach statement. Also, since only one value is computed when you call IEnumerator.MoveNext() if you decide to stop mid-collection, you'll save the performance hit of calculating all of the results.
Iterators can also be used to implement other kinds of lazy evaluation where expressions are evaluated only when you need it. You can also use yield for more fancy stuff like coroutines.
A: Another good use for yield is to perform a function on the elements of an IEnumerable and to return a result of a different type, for example:
public delegate T SomeDelegate(K obj);
public IEnumerable<T> DoActionOnList(IEnumerable<K> list, SomeDelegate action)
{
foreach (var i in list)
yield return action(i);
}
A: Using yield can prevent downcasting to a concrete type. This is handy to ensure that the consumer of the collection doesn't manipulate it.
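In other words (a sketch of my own):
private readonly List<string> names = new List<string> { "Fry", "Leela", "Bender" };
public IEnumerable<string> Names
{
    get
    {
        // The caller receives a compiler-generated iterator rather than the List itself,
        // so it cannot cast the result back to List<string> and mutate the backing collection.
        foreach (string name in names)
            yield return name;
    }
}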
A: You can also use yield return to treat a series of function results as a list. For instance, consider a company that pays its employees every two weeks. One could retrieve a subset of payroll dates as a list using this code:
void Main()
{
var StartDate = DateTime.Parse("01/01/2013");
var EndDate = DateTime.Parse("06/30/2013");
foreach (var d in GetPayrollDates(StartDate, EndDate)) {
Console.WriteLine(d);
}
}
// Calculate payroll dates in the given range.
// Assumes the first date given is a payroll date.
IEnumerable<DateTime> GetPayrollDates(DateTime startDate, DateTime endDate, int daysInPeriod = 14) {
var thisDate = startDate;
while (thisDate < endDate) {
yield return thisDate;
thisDate = thisDate.AddDays(daysInPeriod);
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17125",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: Can you link 68K code compiled with CodeWarrior for Palm OS with code compiled with PRC-Tools (GCC)? I've got a Palm OS/Garnet 68K application that uses a third-party static library built with CodeWarrior. Can I rebuilt the application using PRC-Tools, the port of GCC for the Palm OS platform and still link with the third-party library?
A: (Expanding on Ben's original answer... not sure of the exact etiquette for that but I can't edit yet so I'll re-post)
No, CodeWarrior uses a different object file format than PRC-Tools. Also, the compiler support libraries are different, so even if the code could be statically linked together, it may use symbols in a different way.
However, if you can wrap the third-party static library into a Palm OS shared library using CodeWarrior, then you should be able to call it from PRC-Tools applications. The Palm OS shared library interface works across tools, but shared libraries have limited system support so you'll need to be sure the original code doesn't use global variables for this to work.
For more information on shared libraries, see Shared libraries on the Palm Pilot.
A: No, CodeWarrior uses a different object file format than PRC-Tools. Also, the compiler support libraries are different, so even if the code could be linked together, it may use symbols in a different way.
However, if you can wrap the third-party library into a shared library using CodeWarrior, then you should be able to call it from PRC-Tools applications. The shared library interface works across tools, but shared libraries have limited system support, so you'll need to be sure the original code doesn't use global variables for this to work.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How do you spawn another process in C? How do you run an external program and pass it command line parameters using C? If you have to use operating system API, include a solution for Windows, Mac, and Linux.
A: On UNIX, I think you basically need to fork it if you want the spawned process to run detached from the spawning one: for instance, if you don't want your spawned process to be terminated when you quit your spawning process.
Here is a page that explains all the subtle differences between Fork, System, Exec.
If you work on Windows, Mac and Linux, I can recommend the Qt Framework and its QProcess class, but I don't know if that's an option for you. The great advantage is that you will be able to compile the same code on Windows, Linux and Mac:
QString program = "./yourspawnedprogram";
QProcess * spawnedProcess = new QProcess(parent);
spawnedProcess->start(program);
// or spawnedProcess->startDetached(program);
As a bonus, you can even kill the child process from the parent process, and keep communicating with it through a stream.
A: It really depends on what you're trying to do, exactly, as it's:
*
*OS dependent
*Not quite clear what you're trying to do.
Nevertheless, I'll try to provide some information for you to decide.
On UNIX, fork() creates a clone of your process from the place where you called fork. Meaning, if I have the following process:
#include <unistd.h>
#include <stdio.h>
int main()
{
printf( "hi 2 u\n" );
int mypid = fork();
if( 0 == mypid )
printf( "lol child\n" );
else
printf( "lol parent\n" );
return( 0 );
}
The output will look as follows:
hi 2 u
lol child
lol parent
When you fork() the pid returned in the child is 0, and the pid returned in the parent is the child's pid. Notice that "hi2u" is only printed once... by the parent.
execve() and its family of functions are almost always used with fork(). execve() and the like overwrite the current process image with the program you pass to it. execve() is almost always used with fork(): you fork a child process, and if you're the parent you do whatever you need to keep doing, and if you're the child you exec a new process. execve() is also almost always used with waitpid() -- waitpid takes a pid of a child process and, quite literally, waits until the child terminates and returns the child's exit status to you.
Using this information, you should be able to write a very basic shell; one that takes process names on the command line and runs processes you tell it to. Of course, shells do more than that, like piping input and output, but you should be able to accomplish the basics using fork(), execve() and waitpid().
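To make that pattern concrete, here is a minimal sketch of fork() + execvp() + waitpid(); running "ls -l" is purely an example command, not anything required by the APIs:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* Child: replace this process image with the external program */
        char *args[] = { "ls", "-l", NULL };
        execvp(args[0], args);
        perror("execvp");    /* only reached if exec fails */
        _exit(127);
    }
    /* Parent: wait for the child and report how it exited */
    int status = 0;
    waitpid(pid, &status, 0);
    if (WIFEXITED(status))
        printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}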
NOTE: This is *nix specific! This will NOT work on Windows.
Hope this helped.
A: One solution is the system function defined in stdlib.h
int system(const char *string);
system api example
A: If you want to perform more complicated operations, like reading the output of the external program, you may be better served by the popen system call. For example, to programmatically access a directory listing (this is a somewhat silly example, but useful as an example), you could write something like this:
#include <stdio.h>
int main()
{
    int entry = 1;
    char line[200];
    FILE* output = popen("/usr/bin/ls -1 /usr/man", "r");
    while ( fgets(line, 199, output) )
    {
        printf("%5d: %s", entry++, line);
    }
    pclose(output);   /* close the pipe and reap the child process */
    return 0;
}
to give output like this
1: cat1
2: cat1b
3: cat1c
4: cat1f
5: cat1m
6: cat1s
...
A: #include <stdlib.h>
int main()
{
system("echo HAI");
return 0;
}
A: I want to give a big warning not to use system -- and 100% never use system when you write a library. It was designed 30 years ago, when multithreading was unknown to the toy operating system called Unix. And it is still not usable even though almost all programs are multithreaded today.
Use popen or do a fork+execvp; everything else will give you hard-to-find problems with signal handling, crashes in environment-handling code, etc. It's pure evil and a shame that the selected and most-upvoted answer promotes the use of "system". It would be healthier to promote the use of cocaine in the workplace.
A: If you need to check/read/parse the output of your external command, I would suggest to use popen() instead of system().
A: Speaking of platform-dependent recipes: on Windows use CreateProcess, on POSIX (Linux, Mac) use fork + execvp. But system() should cover your basic needs and is part of the standard library.
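A hedged sketch of the Windows side, using CreateProcessA; "notepad.exe" is only an example command line, not anything special:
#include <windows.h>
#include <stdio.h>

int main(void)
{
    STARTUPINFOA si;
    PROCESS_INFORMATION pi;
    ZeroMemory(&si, sizeof(si));
    si.cb = sizeof(si);
    ZeroMemory(&pi, sizeof(pi));

    /* The command line buffer must be writable; CreateProcess may modify it. */
    char cmd[] = "notepad.exe";

    if (!CreateProcessA(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi))
    {
        fprintf(stderr, "CreateProcess failed (%lu)\n", (unsigned long)GetLastError());
        return 1;
    }

    WaitForSingleObject(pi.hProcess, INFINITE);   /* wait for the child to exit */
    CloseHandle(pi.hProcess);
    CloseHandle(pi.hThread);
    return 0;
}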
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17140",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "35"
} |
Q: When to use IList and when to use List I know that IList is the interface and List is the concrete type but I still don't know when to use each one. What I'm doing now is if I don't need the Sort or FindAll methods I use the interface. Am I right? Is there a better way to decide when to use the interface or the concrete type?
A: It's always best to use the lowest base type possible. This gives the implementer of your interface, or consumer of your method, the opportunity to use whatever they like behind the scenes.
For collections you should aim to use IEnumerable where possible. This gives the most flexibility but is not always suited.
A: Microsoft guidelines as checked by FxCop discourage use of List<T> in public APIs - prefer IList<T>.
Incidentally, I now almost always declare one-dimensional arrays as IList<T>, which means I can consistently use the IList<T>.Count property rather than Array.Length. For example:
public interface IMyApi
{
IList<int> GetReadOnlyValues();
}
public class MyApiImplementation : IMyApi
{
public IList<int> GetReadOnlyValues()
{
List<int> myList = new List<int>();
... populate list
return myList.AsReadOnly();
}
}
public class MyMockApiImplementationForUnitTests : IMyApi
{
public IList<int> GetReadOnlyValues()
{
IList<int> testValues = new int[] { 1, 2, 3 };
return testValues;
}
}
A: If you're working within a single method (or even in a single class or assembly in some cases) and no one outside is going to see what you're doing, use the fullness of a List. But if you're interacting with outside code, like when you're returning a list from a method, then you only want to declare the interface without necessarily tying yourself to a specific implementation, especially if you have no control over who compiles against your code afterward. If you started with a concrete type and you decided to change to another one, even if it uses the same interface, you're going to break someone else's code unless you started off with an interface or abstract base type.
A: You are most often better off using the most general usable type, in this case the IList or, even better, the IEnumerable interface, so that you can switch the implementation conveniently at a later time.
However, in .NET 2.0, there is an annoying thing - IList does not have a Sort() method. You can use a supplied adapter instead:
ArrayList.Adapter(list).Sort()
A: IEnumerable
You should try and use the least specific type that suits your purpose.
IEnumerable is less specific than IList.
You use IEnumerable when you want to loop through the items in a collection.
IList
IList implements IEnumerable.
You should use IList when you need access by index to your collection, add and delete elements, etc...
List
List implements IList.
A: There's an important thing that people always seem to overlook:
You can pass a plain array to something which accepts an IList<T> parameter, and then if you call IList<T>.Add() you will receive a runtime exception:
Unhandled Exception: System.NotSupportedException: Collection was of a fixed size.
For example, consider the following code:
private void test(IList<int> list)
{
list.Add(1);
}
If you call that as follows, you will get a runtime exception:
int[] array = new int[0];
test(array);
This happens because using plain arrays with IList<T> violates the Liskov substitution principle.
For this reason, if you are calling IList<T>.Add() you may want to consider requiring a List<T> instead of an IList<T>.
A: I would agree with Lee's advice for taking parameters, but not returning.
If you specify your methods to return an interface that means you are free to change the exact implementation later on without the consuming method ever knowing. I thought I'd never need to change from a List<T> but had to later change to use a custom list library for the extra functionality it provided. Because I'd only returned an IList<T> none of the people that used the library had to change their code.
Of course that only need apply to methods that are externally visible (i.e. public methods). I personally use interfaces even in internal code, but as you are able to change all the code yourself if you make breaking changes it's not strictly necessary.
A: There are two rules I follow:
*
*Accept the most basic type that will work
*Return the richest type your user will need
So when writing a function or method that takes a collection, write it not to take a List, but an IList<T>, an ICollection<T>, or IEnumerable<T>. The generic interfaces will still work even for heterogenous lists because System.Object can be a T too. Doing this will save you headache if you decide to use a Stack or some other data structure further down the road. If all you need to do in the function is foreach through it, IEnumerable<T> is really all you should be asking for.
On the other hand, when returning an object out of a function, you want to give the user the richest possible set of operations without them having to cast around. So in that case, if it's a List<T> internally, return a copy as a List<T>.
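To make those two rules concrete, here is a small sketch; the OrderLine type and the method names are made up purely for illustration:
using System.Collections.Generic;

public class OrderLine
{
    public decimal Price { get; set; }
}

public static class OrderMath
{
    // Accept the most basic type that will work: any sequence can be totalled.
    public static decimal TotalPrice(IEnumerable<OrderLine> lines)
    {
        decimal total = 0m;
        foreach (var line in lines)
            total += line.Price;
        return total;
    }

    // Return the richest type the caller will need: a List<T> they can sort,
    // index into and extend without casting.
    public static List<OrderLine> LoadOrderLines()
    {
        var lines = new List<OrderLine>();
        // ... populate from wherever the data lives ...
        return lines;
    }
}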
A: I don't think there are hard and fast rules for this type of thing, but I usually go by the guideline of using the lightest possible way until absolutely necessary.
For example, let's say you have a Person class and a Group class. A Group instance has many people, so a List here would make sense. When I declare the list object in Group I will use an IList<Person> and instantiate it as a List.
public class Group {
private IList<Person> people;
public Group() {
this.people = new List<Person>();
}
}
And, if you don't even need everything in IList you can always use IEnumerable too. With modern compilers and processors, I don't think there is really any speed difference, so this is more just a matter of style.
A: You should use the interface only if you need it, e.g., if your list is casted to an IList implementation other than List. This is true when, for example, you use NHibernate, which casts ILists into an NHibernate bag object when retrieving data.
If List is the only implementation that you will ever use for a certain collection, feel free to declare it as a concrete List implementation.
A: In situations I usually come across, I rarely use IList directly.
Usually I just use it as an argument to a method
void ProcessArrayData(IList almostAnyTypeOfArray)
{
// Do some stuff with the IList array
}
This will allow me to do generic processing on almost any array in the .NET framework, unless it uses IEnumerable and not IList, which happens sometimes.
It really comes down to the kind of functionality you need. I'd suggest using the List class in most cases. IList is best for when you need to make a custom array that could have some very specific rules that you'd like to encapsulate within a collection so you don't repeat yourself, but still want .NET to recognize it as a list.
A: A List object allows you to create a list, add things to it, remove it, update it, index into it and etc. List is used whenever you just want a generic list where you specify object type in it and that's it.
IList on the other hand is an Interface. Basically, if you want to create your own custom List, say a list class called BookList, then you can use the Interface to give you basic methods and structure to your new class. IList is for when you want to create your own, special sub-class that implements List.
Another difference is:
IList is an Interface and cannot be instantiated. List is a class and can be instantiated. It means:
IList<string> list1 = new IList<string>(); // this is wrong, and won't compile
IList<string> list2 = new List<string>(); // this will compile
List<string> list3 = new List<string>(); // this will compile
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17170",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "207"
} |
Q: Implementing permissions in PHP I've tried to do this several times with no luck. After reading this post, it made me interested in doing this again. So can anyone tell me why the following doesn't work?
<?php
$guest = 1;
$editor = 2;
$admin = 4;
$user = $editor;
if( $user == ($editor | $admin) ) {
echo "Test";
}
?>
A: In the interest of not reinventing the wheel, why not take a look at ACL/Authentication systems like Zend ACL and Zend Auth? Both can be used independently from the Zend Framework as a whole. Access Control is a tricky situation so it benefits one to at least look at how other systems do it.
A: It's been a long time since I used PHP, but I will assume that this will work:
<?php
$guest = 1;
$editor = 2;
$admin = 4;
$user = $editor;
if( ($user == $editor) || ($user == $admin) ) {
echo "Test";
}
?>
A: I've used this in error reporting and it works quite well. As for user permissions it should work very well - you could have several columns for each user permission in your database or one userlevel column in your database. Go for this option.
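For example, a single userlevel column could be handled along these lines (a minimal sketch, reusing the flag values from the question):
<?php
$guest  = 1;
$editor = 2;
$admin  = 4;

// Grant two permissions and store the combined integer in the userlevel column.
$userlevel = $editor | $admin; // 6

// Later, when the value comes back from the database:
if ($userlevel & $admin) {
    echo "user has admin rights";
}
?>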
A: Use the bitwise OR operator (|) to set bits, use the AND operator (&) to check bits. Your code should look like this:
<?php
$guest = 1;
$editor = 2;
$admin = 4;
$user = $editor;
if( $user & ($editor | $admin) ) {
echo "Test";
}
?>
If you don't understand binary and exactly what the bitwise operators do, you should go learn it. You'll understand how to do this much better.
A: (2 | 4) is evaluating to 6, but 2 == 6 is false.
A: @mk: (2 | 4) evaluates to 6.
A: $guest = 1;
$editor = 2;
$admin = 4;
$user = $editor;
if ($user == $editor || $user == $admin) {
echo "Test";
}
A:
Awesome, this seems like the best way to do permissions in a CMS. Yes? No?
Maybe, I've never really done it that way. What I have done is used bitwise operators to store a whole bunch of "yes or no" settings in a single number in a single column in the database.
I guess for permissions, this way would work good if you want to store permissions in the database. If someone wants to post some content, and only wants admins and editors to see it, you just have to store the result of
($editor | $admin)
into the database, then to check it, do something like
if ($user & $database_row['permissions']) {
// display content
} else {
// display permissions error
}
A: In my opinion this doesn't scale well. I haven't actually tried using it on a large-scale project, but a CMS sounds way too complicated to use this on.
A: It always depends on what you need. If you know the Zend Framework already, then I'd second the Zend_Acl/_Auth suggestion which was made earlier. But keep in mind that every framework propably comes with a similar component.
The other thing that comes to mind is LiveUser. I like working with it a lot as well.
I think you can do pretty much anything and while your approach looks very simple, it's also limited since (through all those if()'s) you are gonna put a lot of the ACL-logic right in the middle of your application. Which is not the greatest thing to do in order to keep it simple and extendible. ;)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17172",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: What's your top feature request for Silverlight? I'll take away the obvious one here: mic and webcam support. Other than that, if you ran the Silverlight team, what would your highest priority be for Silverlight v.Next?
Disclaimer: If we get some good responses, I'll pass them along to folks I know on the Silverlight team.
UPDATE: The best place to report Silverlight feature requests now is the UserVoice site: http://silverlight.uservoice.com/
A: *
*SQL Compact Edition running on the Silverlight CLR
*Support for Triggers
*Support for resource dictionaries
Also, since you brought up Webcam I have to plug my Silverlight 2 Webcam Support POC. It's using Flash interop and allows you to capture PNG stills from Silverlight. I guess it's more a fun example of Silverlight, JavaScript and Flash interoperability than a really useful webcam solution. But you can do fun things with it. In my most recent blog post I use the webcam support to capture still pictures for a sliding puzzle game.
http://jonas.follesoe.no/WebcamInSilverlight2NdashSlidingPuzzleGame.aspx
A: I've been working on a business app in silverlight for the past couple of months, so I'm biased more towards that direction. These are my problems with 2 beta 2, I have no idea if they will be solved with the final version.
*
*Printing. Some kind, any kind, I don't care, as long as I have some control over it. A business app without printing is a hard sell, and no, the print from the browser is not good enough.
*Ability to deploy updates. Currently I can't easily post a new version of the xap and expect the users to get it. That's very nearly a show stopper. All the suggestions to make this work I've had don't seem to work or make things worse. Adding a query string did nothing. Renaming the xap with a version number will wipe the iso storage and adding a no cache header to the website breaks PDF's in IE which is part of my work around for #1.
*Right Click, double click and scroll wheel. Where are they? Sure I can hack on it and make it work, but that stuff should just work. The only excuse I've heard is some mice don't have a second button. I hope that's not the reason. If so, let's get rid of everything but the text box so the lynx guys don't feel bad.
A: Okay, fine, I'll throw another one out there: audio file support. I'd love to be able to generate WAV data on the client and immediately play it. As it is, Silverlight only plays WMV and MP3, neither of which is simple (legal?) to create without a per-client license.
A: Parity with WPF.
Triggers (event triggers and data triggers too),
Binding to other elements in xaml,
Multi-part value converters,
and DynamicResources.
Commands... maybe if they got time.
A: For them to fix the ugly text rendering.
A: Full cross-platform support for Windows, Mac and Linux with complete feature parity for each OS. ;)
A: Printing ability. I have been working on a business app since the alpha version and the biggest problem is that I have to create PDF files on the server and download them to the client so they can be printed. Some of them get really big. If I could generate them on the client and print that would solve all my problems. Otherwise, SL 3.0 will work great for my app.
A: I'm actually on the silverlight team.. so I can also pass along suggestions.
Not really sure how much i can divulge, but webcam is being worked on.
I can definitely agree with the desire to gen wav files. I wanted to speed up/slow down sounds for a piano demo..
Carl - that's the plan. Though linux support is being handled by the mono team.
Brian - while parity with WPF isn't a goal, subset compatibility is. Silverlight's 'minimality' is indeed at times pretty annoying.
A:
SQL Compact Edition running on the Silverlight CLR
I thought the point of silverlight was to provide a small, embedded runtime in the browser.
Adding every kitchen sink (like SQL or any kind of ORM library, or parity with WPF) is just going to cause what happened with .net 3.5. Nobody will develop for it because they don't want to burden their end users with a 200 megabyte download
My Top Feature Requests for silverlight would be:
*
*The smallest download size possible. Last time I looked I think it was at 4.6 meg? This is too big.
*One click installation with no disruption. Don't make me navigate off to other sites, reboot my browser*, or DARE reboot my computer.
*Backwards compatibility. I've been to several silverlight sites now which don't work because they require 1.0 and I have 2.0 beta something, but I can't install 1.0 because 2.0 stops it. This is stupid.
* yeah I realise it might not be possible within the confines of firefox etc, but still. This is the end goal.
A: The XAML Hyperlink element inside text blocks. Google "silverlight text Hyperlink" to see how many complex and ugly workarounds are being posted for this omission. Notice how the best one doesn't have any line-breaks in the text, because the WrapPanel that it uses doesn't deal with them.
Failing that, I could do with at least one of the following ways to make the workarounds more palatable:
*
*A FlowDocument so that I can work with multiple text blocks inside a larger document
*A good way determining which text run is under the mouse click when the user clicks somewhere on a text block.
In general - given the click X, Y co-ordinates, find out what XAML element was clicked upon.
*Mouse events on text runs, not just on their containing text block.
I have asked how to do this as a question here, and there is no satisfactory answer, which is very disappointing..
A: Streaming Video over RTSP. Sadly, Silverlight 2 only supports HTTP Streaming, and telling it to use mms:// only signals it to do streaming video over HTTP.
A: Tiff support.
This would be huge for businesses that need to access scanned documents from a central server - Silverlight is much easier to deploy than Windows Forms components hosted in IE, and pretty much all document imaging is done with Tiffs.
A: *
*Basic HTML / Rich Text support.
*WPF's Inline Hyperlink.
A: Mic + Webcam support...must for web dialers
Printing support...for LoB apps
Silverlight running on Symbian (S60 atleast) and iPhone
DataSet/TypedDataSet...with Control Binding...Visual Studio generating WCF based Adapters (like currently it does for WinForms / Sql). Lot of LoB developers will get attracted!
A: I would just like to add that Silverlight does have its own UserVoice site where you can add and vote for feature suggestions:
http://silverlight.uservoice.com/
This was set up by the Silverlight product team and they are actively watching the suggestions on this site.
A: What about some way to be able to wrap Silverlight around AIR and be able to run it as a client in a multi-platform way... I guess this is more of a request to the Adobe team rather than the Microsoft one, but it would be cool!
Cheers!
A: I know this is probably difficult to implement in Silverlight since it is probably resource intensive, but it would be nice if the VisualBrush was supported.
A: Dropdown boxes and a more simple way to highlight text in a text box!
That's what I would want right now anyway.
A: Let me add another vote for the ability to generate/edit/play wav files (or at least a low-level raw bitstream.)
A: Ok. I would like to see full support for modal dialogs. Without this building serious line of business applications cannot be seriously considered.
This needs to behave exactly the same way Modal dialogs behave in the win forms world, meaning not just simulating a popup, but halting code execution and returning to the code when the modal dialog is closed.
A: That automatic update of new silverlight code sounds like a big problem.
Also right click should be there. It's up to the dev to deal with users who don't have a 2 button mouse. I'm betting that 90% of users have a 2 button mouse. And mac users have Cmd click to emulate it don't they? If you cover windows and mac that's 97% of the market or something, that's as good as it gets.
A: Two things:
*
*Being able to do an HttpWebRequest without the whole request body loaded into memory on the client
*Being able to do socket connections to the source server port (e.g. 80 or 443)
A: I'm not gonna be that guy that lists off all the features of WPF. I'm trying to be tactical here.
Here's my list:
*
*Full Trust Mode (i.e. file system access, full screen text entry)
*Direct access to the printer
*ItemContainerGenerator promoted from the SL Toolkit to SL. This class is so ridiculously important for building custom ItemsControls.
*Drag & Drop from the Desktop
*Better RelativeSource Binding Support
*ScatterView & other touch optimized controls
*Receive notification of assembly updates at runtime (so that users that don't close the browser can receive code updates).
A: I have a request that may be solved in one of two ways (as I see it):
*
*An automatically-scaling Canvas control (i.e. when you resize the canvas within Blend it would actually change the scale of the canvas w/o crazy fly-off-the-screen, infinity-crash side effects. And programmatically, if the width/height of this control were set, the contents of the canvas would also scale within those bounds.
*An alternative way of doing the above would be a Path Panel. As it is now, Paths scale just the way I would like them to in Blend. I would think that a Path Panel would also scale just like individual Paths do. You know, like a Path Collection of sorts.
*How about NOT clipping Path Strokes when they go outside of the width/height bounds? Or somehow giving an option (checkbox) to enable/disable this feature?
A: I wonder whether someday it will be possible to develop a website, using Silverlight, that implements features like the ones available at TinyChat and TokBox.com [sorry, I am a new user and cannot post more than a link per post :-(], namely: chat rooms with multiple audio and video streams coming from mics and webcams (which those sites do using Flash).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17175",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Test Cases VS ASSERTION statement In my most C++ project I heavily used ASSERTION statement as following:
int doWonderfulThings(const int* fantasticData)
{
ASSERT(fantasticData);
if(!fantasticData)
return -1;
// ,,,
return WOW_VALUE;
}
But TDD community seems like to enjoy doing something like this:
int doMoreWonderfulThings(const int* fantasticData)
{
if(!fantasticData)
return ERROR_VALUE;
// ...
return AHA_VALUE;
}
TEST(TDD_Enjoy)
{
ASSERT_EQ(ERROR_VALUE, doMoreWonderfulThings(0L));
ASSERT_EQ(AHA_VALUE, doMoreWonderfulThings("Foo"));
}
Just with my experiences first approaches let me remove so many subtle bugs.
But TDD approaches are very smart idea to handle legacy codes.
"Google" - they compare "FIRST METHOD" to "Walk the shore with life-vest, swim ocean without any safe guard".
Which one is better?
Which one makes software robust?
A: In my (limited) experience the first option is quite a bit safer. In a test-case you only test predefined input and compare the outcome, this works well as long as every possible edge-case has been checked. The first option just checks every input and thus tests the 'live' values, it filters out bugs real quickly, however it comes with a performance penalty.
In Code Complete, Steve McConnell teaches us that the first method can be used successfully to filter out bugs in a debug build. In a release build you can filter out all assertions (for instance with a compiler flag) to get the extra performance.
In my opinion the best way is to use both methods:
Method 1 to catch illegal values
int doWonderfulThings(const int* fantasticData)
{
    ASSERT(fantasticData);
    ASSERTNOTEQUAL(0, *fantasticData);
    return WOW_VALUE / *fantasticData;
}
and method 2 to test edge-cases of an algorithm.
int doMoreWonderfulThings(const int fantasticNumber)
{
int count = 100;
for(int i = 0; i < fantasticNumber; ++i) {
count += 10 * fantasticNumber;
}
return count;
}
TEST(TDD_Enjoy)
{
// Test lower edge
    ASSERT_EQ(100, doMoreWonderfulThings(-1));
    ASSERT_EQ(100, doMoreWonderfulThings(0));
ASSERT_EQ(110, doMoreWonderfulThings(1));
//Test some random values
ASSERT_EQ(350, doMoreWonderfulThings(5));
ASSERT_EQ(2350, doMoreWonderfulThings(15));
ASSERT_EQ(225100, doMoreWonderfulThings(150));
}
A: Both mechanisms have value. Any decent test framework will catch the standard assert() anyway, so a test run that causes the assert to fail will result in a failed test.
I typically have a series of asserts at the start of each c++ method with a comment '// preconditions'; it's just a sanity check on the state I expect the object to have when the method is called. These dovetail nicely into any TDD framework because they not only work at runtime when you're testing functionality but they also work at test time.
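A minimal sketch of that style, using the standard assert(); the function and its arguments are made up for illustration:
#include <cassert>
#include <cstddef>

double averagePrice(const double* prices, std::size_t count)
{
    // preconditions
    assert(prices != NULL);
    assert(count > 0);

    double sum = 0.0;
    for (std::size_t i = 0; i < count; ++i)
        sum += prices[i];
    return sum / count;
}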
A: There is no reason why your test package cannot catch asserts such as the one in doMoreWonderfulThings. This can be done either by having your ASSERT handler support a callback mechanism, or your test asserts contain a try/catch block.
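One hedged sketch of the callback/try-catch idea is to make the assertion macro throw in test builds; the UNIT_TESTING flag and the macro itself are assumptions, not part of any particular framework:
#include <stdexcept>

#ifdef UNIT_TESTING
    // In test builds a failed ASSERT throws, so the test framework can catch it
    // and report the failure instead of aborting the whole run.
    #define ASSERT(cond) \
        do { if (!(cond)) throw std::logic_error("ASSERT failed: " #cond); } while (0)
#else
    #include <cassert>
    #define ASSERT(cond) assert(cond)
#endif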
A: I don't know which particlar TDD subcommunity you're refering to but the TDD patterns I've come across either use Assert.AreEqual() for positive results or otherwise use an ExpectedException mechanism (e.g., attributes in .NET) to declare the error that should be observed.
A: In C++, I prefer method 2 when using most testing frameworks. It usually makes for easier-to-understand failure reports. This is invaluable when a test fails months or years after it was written.
My reason is that most C++ testing frameworks will print out the file and line number of where the assert occurred without any kind of stack trace information. So most of the time you will get the reporting line number inside of the function or method and not inside of the test case.
Even if the assert is caught and re-asserted from the caller, the reporting line will be the one with the catch statement, which may not be anywhere close to the test case line that called the method or function that asserted. This can be really annoying when the function that asserted was called multiple times in the test case.
There are exceptions though. For example, Google's test framework has a scoped trace statement which will print as part of the trace if an exception occurs. So you can wrap a call to generalized test function with the trace scope and easily tell, within a line or two, which line in the exact test case failed.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17181",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: SQL query to compare product sales by month I have a Monthly Status database view I need to build a report based on. The data in the view looks something like this:
Category | Revenue | Year | Month
Bikes 10 000 2008 1
Bikes 12 000 2008 2
Bikes 12 000 2008 3
Bikes 15 000 2008 1
Bikes 11 000 2007 2
Bikes 11 500 2007 3
Bikes 15 400 2007 4
... And so forth
The view has a product category, a revenue, a year and a month. I want to create a report comparing 2007 and 2008, showing 0 for the months with no sales. So the report should look something like this:
Category | Month | Rev. This Year | Rev. Last Year
Bikes 1 10 000 0
Bikes 2 12 000 11 000
Bikes 3 12 000 11 500
Bikes 4 0 15 400
The key thing to notice is how month 1 only has sales in 2008, and therefore is 0 for 2007. Also, month 4 only has no sales in 2008, hence the 0, while it has sales in 2007 and still show up.
Also, the report is actually for financial year - so I would love to have empty columns with 0 in both if there was no sales in say month 5 for either 2007 or 2008.
The query I got looks something like this:
SELECT
SP1.Program,
SP1.Year,
SP1.Month,
SP1.TotalRevenue,
IsNull(SP2.TotalRevenue, 0) AS LastYearTotalRevenue
FROM PVMonthlyStatusReport AS SP1
LEFT OUTER JOIN PVMonthlyStatusReport AS SP2 ON
SP1.Program = SP2.Program AND
SP2.Year = SP1.Year - 1 AND
SP1.Month = SP2.Month
WHERE
SP1.Program = 'Bikes' AND
SP1.Category = @Category AND
(SP1.Year >= @FinancialYear AND SP1.Year <= @FinancialYear + 1) AND
((SP1.Year = @FinancialYear AND SP1.Month > 6) OR
(SP1.Year = @FinancialYear + 1 AND SP1.Month <= 6))
ORDER BY SP1.Year, SP1.Month
The problem with this query is that it would not return the fourth row in my example data above, since we didn't have any sales in 2008, but we actually did in 2007.
This is probably a common query/problem, but my SQL is rusty after doing front-end development for so long. Any help is greatly appreciated!
Oh, btw, I'm using SQL 2005 for this query so if there are any helpful new features that might help me let me know.
A: The Case Statement is my best sql friend. You also need a table for time to generate your 0 rev in both months.
Assumptions are based on the availability of the following tables:
sales: Category | Revenue | Year | Month

and

tm: Year | Month (populated with all dates required for reporting)
Example 1 without empty rows:
select
Category
,month
,SUM(CASE WHEN YEAR = 2008 THEN Revenue ELSE 0 END) this_year
,SUM(CASE WHEN YEAR = 2007 THEN Revenue ELSE 0 END) last_year
from
sales
where
year in (2008,2007)
group by
Category
,month
RETURNS:
Category | Month | Rev. This Year | Rev. Last Year
Bikes 1 10 000 0
Bikes 2 12 000 11 000
Bikes 3 12 000 11 500
Bikes 4 0 15 400
Example 2 with empty rows:
I am going to use a sub query (but others may not) and will return an empty row for every product and year month combo.
select
    fill.Category
    ,fill.month
    ,SUM(CASE WHEN s.Year = 2008 THEN s.Revenue ELSE 0 END) this_year
    ,SUM(CASE WHEN s.Year = 2007 THEN s.Revenue ELSE 0 END) last_year
from
    sales s
    Right join (select distinct --try out left, right and cross joins to test results.
                    sales.Category
                    ,tm.year
                    ,tm.month
                from
                    sales --this ideally would be from a products table
                    cross join tm
                where
                    tm.year in (2008,2007)) fill
        on  s.Category = fill.Category
        and s.Year     = fill.year
        and s.Month    = fill.month
where
    fill.year in (2008,2007)
group by
    fill.Category
    ,fill.month
RETURNS:
Category | Month | Rev. This Year | Rev. Last Year
Bikes 1 10 000 0
Bikes 2 12 000 11 000
Bikes 3 12 000 11 500
Bikes 4 0 15 400
Bikes 5 0 0
Bikes 6 0 0
Bikes 7 0 0
Bikes 8 0 0
Note that most reporting tools will do this crosstab or matrix functionality, and now that i think of it SQL Server 2005 has pivot syntax that will do this as well.
Here are some additional resources.
CASE
https://web.archive.org/web/20210728081626/https://www.4guysfromrolla.com/webtech/102704-1.shtml
SQL SERVER 2005 PIVOT
http://msdn.microsoft.com/en-us/library/ms177410.aspx
A: @Christian -- markdown editor -- UGH; especially when the preview and the final version of your post disagree...
@Christian -- full outer join -- the full outer join is overruled by the fact that there are references to SP1 in the WHERE clause, and the WHERE clause is applied after the JOIN. To do a full outer join with filtering on one of the tables, you need to put your WHERE clause into a subquery, so the filtering happens before the join, or try to build all of your WHERE criteria onto the JOIN ON clause, which is insanely ugly. Well, there's actually no pretty way to do this one.
@Jonas: Considering this:
Also, the report is actually for financial year - so I would love to have empty columns with 0 in both if there was no sales in say month 5 for either 2007 or 2008.
and the fact that this job can't be done with a pretty query, I would definitely try to get the results you actually want. No point in having an ugly query and not even getting the exact data you actually want. ;)
So, I'd suggest doing this in 5 steps:
1. create a temp table in the format you want your results to match
2. populate it with twelve rows, with 1-12 in the month column
3. update the "This Year" column using your SP1 logic
4. update the "Last Year" column using your SP2 logic
5. select from the temp table
Of course, I guess I'm working from the assumption that you can create a stored procedure to accomplish this. You might technically be able to run this whole batch inline, but that kind of ugliness is very rarely seen. If you can't make an SP, I suggest you fall back on the full outer join via subquery, but it won't get you a row when a month had no sales either year.
A: The trick is to do a FULL JOIN, with ISNULL's to get the joined columns from either table. I usually wrap this into a view or derived table, otherwise you need to use ISNULL in the WHERE clause as well.
SELECT
Program,
Month,
ThisYearTotalRevenue,
PriorYearTotalRevenue
FROM (
SELECT
ISNULL(ThisYear.Program, PriorYear.Program) as Program,
        ISNULL(ThisYear.Month, PriorYear.Month) as Month,
ISNULL(ThisYear.TotalRevenue, 0) as ThisYearTotalRevenue,
ISNULL(PriorYear.TotalRevenue, 0) as PriorYearTotalRevenue
FROM (
SELECT Program, Month, SUM(TotalRevenue) as TotalRevenue
FROM PVMonthlyStatusReport
WHERE Year = @FinancialYear
GROUP BY Program, Month
) as ThisYear
FULL OUTER JOIN (
SELECT Program, Month, SUM(TotalRevenue) as TotalRevenue
FROM PVMonthlyStatusReport
WHERE Year = (@FinancialYear - 1)
GROUP BY Program, Month
) as PriorYear ON
ThisYear.Program = PriorYear.Program
AND ThisYear.Month = PriorYear.Month
) as Revenue
WHERE
Program = 'Bikes'
ORDER BY
Month
That should get you your minimum requirements - rows with sales in either 2007 or 2008, or both. To get rows with no sales in either year, you just need to INNER JOIN to a 1-12 numbers table (you do have one of those, don't you?).
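A rough sketch of that last step: the months table is built inline with UNION ALL (SQL Server 2005 friendly), and "RevenueForBikes" is just a placeholder name standing in for the derived table from the query above. Note that the months table has to sit on the preserved side of an outer join so that months with no sales in either year still get a row:
SELECT
    m.Month,
    ISNULL(r.ThisYearTotalRevenue, 0)  AS ThisYearTotalRevenue,
    ISNULL(r.PriorYearTotalRevenue, 0) AS PriorYearTotalRevenue
FROM (SELECT 1 AS Month UNION ALL SELECT 2  UNION ALL SELECT 3  UNION ALL SELECT 4
      UNION ALL SELECT 5 UNION ALL SELECT 6  UNION ALL SELECT 7  UNION ALL SELECT 8
      UNION ALL SELECT 9 UNION ALL SELECT 10 UNION ALL SELECT 11 UNION ALL SELECT 12) AS m
LEFT JOIN (
    -- put the "Revenue" derived table from the query above here
    SELECT Month, ThisYearTotalRevenue, PriorYearTotalRevenue FROM RevenueForBikes
) AS r ON r.Month = m.Month
ORDER BY m.Month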
A: I could be wrong but shouldn't you be using a full outer join instead of just a left join? That way you will be getting 'empty' columns from both tables.
http://en.wikipedia.org/wiki/Join_(SQL)#Full_outer_join
A: About the markdown - Yeah that is frustrating. The editor did preview my HTML table, but after posting it was gone - So had to remove all HTML formatting from the post...
@kcrumley I think we've reached similar conclusions. This query easily gets real ugly. I actually solved this before reading your answer, using a similar (but yet different approach). I have access to create stored procedures and functions on the reporting database. I created a Table Valued function accepting a product category and a financial year as the parameter. Based on that the function will populate a table containing 12 rows. The rows will be populated with data from the view if any sales available, if not the row will have 0 values.
I then join the two tables returned by the functions. Since I know both tables will have twelve rows, it's a lot easier, and I can join on Product Category and Month:
SELECT
SP1.Program,
SP1.Year,
SP1.Month,
SP1.TotalRevenue AS ThisYearRevenue,
SP2.TotalRevenue AS LastYearRevenue
FROM GetFinancialYear(@Category, 'First Look', 2008) AS SP1
RIGHT JOIN GetFinancialYear(@Category, 'First Look', 2007) AS SP2 ON
SP1.Program = SP2.Program AND
SP1.Month = SP2.Month
I think your approach is probably a little cleaner as the GetFinancialYear function is quite messy! But at least it works - which makes me happy for now ;)
A: Using pivot and Dynamic Sql we can achieve this result
SET NOCOUNT ON
IF OBJECT_ID('TEMPDB..#TEMP') IS NOT NULL
DROP TABLE #TEMP
;With cte(Category , Revenue , Yearh , [Month])
AS
(
SELECT 'Bikes', 10000, 2008,1 UNION ALL
SELECT 'Bikes', 12000, 2008,2 UNION ALL
SELECT 'Bikes', 12000, 2008,3 UNION ALL
SELECT 'Bikes', 15000, 2008,1 UNION ALL
SELECT 'Bikes', 11000, 2007,2 UNION ALL
SELECT 'Bikes', 11500, 2007,3 UNION ALL
SELECT 'Bikes', 15400, 2007,4
)
SELECT * INTO #Temp FROM cte
Declare @Column nvarchar(max),
@Column2 nvarchar(max),
@Sql nvarchar(max)
SELECT @Column=STUFF((SELECT DISTINCT ','+ 'ISNULL('+QUOTENAME(CAST(Yearh AS VArchar(10)))+','+'''0'''+')'+ 'AS '+ QUOTENAME(CAST(Yearh AS VArchar(10)))
FROM #Temp order by 1 desc FOR XML PATH ('')),1,1,'')
SELECT @Column2=STUFF((SELECT DISTINCT ','+ QUOTENAME(CAST(Yearh AS VArchar(10)))
FROM #Temp FOR XML PATH ('')),1,1,'')
SET @Sql= N'SELECT Category,[Month],'+ @Column +'FRom #Temp
PIVOT
(MIN(Revenue) FOR yearh IN ('+@Column2+')
) AS Pvt
'
EXEC(@Sql)
Print @Sql
Result
Category Month 2008 2007
----------------------------------
Bikes 1 10000 0
Bikes 2 12000 11000
Bikes 3 12000 11500
Bikes 4 0 15400
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17194",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How can Perl's system() print the command that it's running? In Perl, you can execute system commands using system() or `` (backticks). You can even capture the output of the command into a variable. However, this hides the program execution in the background so that the person executing your script can't see it.
Normally this is useful but sometimes I want to see what is going on behind the scenes. How do you make it so the commands executed are printed to the terminal, and those programs' output printed to the terminal? This would be the .bat equivalent of "@echo on".
A: Use open instead. Then you can capture the output of the command.
open(LS, "ls |");
print while <LS>;
close(LS);
A: Here's an updated execute that will print the results and return them:
sub execute {
my $cmd = shift;
print "$cmd\n";
my $ret = `$cmd`;
print $ret;
return $ret;
}
A: Hmm, interesting how different people are answering this in different ways. It looks to me like mk and Daniel Fone interpreted it as wanting to see/manipulate the stdout of the command (neither of their solutions captures stderr, fwiw). I think Rudd got closer. One twist you could make on Rudd's response is to overwrite the built-in system() command with your own version so that you wouldn't have to rewrite existing code to use his execute() command.
Using the execute() sub from Rudd's post, you could have something like this at the top of your code:
if ($DEBUG) {
*{"CORE::GLOBAL::system"} = \&{"main::execute"};
}
I think that will work but I have to admit this is voodoo and it's been a while since I wrote this code. Here's the code I wrote years ago to intercept system calls on a local (calling namespace) or global level at module load time:
# importing into either the calling or global namespace _must_ be
# done from import(). Doing it elsewhere will not have desired results.
delete($opts{handle_system});
if ($do_system) {
if ($do_system eq 'local') {
*{"$callpkg\::system"} = \&{"$_package\::system"};
} else {
*{"CORE::GLOBAL::system"} = \&{"$_package\::system"};
}
}
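If you also want stderr echoed (which the backtick-based approaches don't capture), a minimal sketch using the core IPC::Open3 module could look like this; reading all of stdout before stderr is fine for small outputs but can deadlock on very large ones:
use strict;
use warnings;
use IPC::Open3;
use Symbol qw(gensym);

my @cmd = ('ls', '-l', '/tmp');
print "@cmd\n";                   # echo the command, like "@echo on"

my $err = gensym;                 # separate handle for the child's stderr
my $pid = open3(my $in, my $out, $err, @cmd);

print "OUT: $_" while <$out>;
print "ERR: $_" while <$err>;

waitpid($pid, 0);
print "exit status: ", $? >> 8, "\n";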
A: Another technique to combine with the others mentioned in the answers is to use the tee command. For example:
open(F, "ls | tee /dev/tty |");
while (<F>) {
print length($_), "\n";
}
close(F);
This will both print out the files in the current directory (as a consequence of tee /dev/tty) and also print out the length of each filename read.
A: I don't know of any default way to do this, but you can define a subroutine to do it for you:
sub execute {
my $cmd = shift;
print "$cmd\n";
system($cmd);
}
my $cmd = $ARGV[0];
execute($cmd);
And then see it in action:
pbook:~/foo rudd$ perl foo.pl ls
ls
file1 file2 foo.pl
A: As I understand, system() will print the result of the command, but not assign it. Eg.
[daniel@tux /]$ perl -e '$ls = system("ls"); print "Result: $ls\n"'
bin dev home lost+found misc net proc sbin srv System tools var
boot etc lib media mnt opt root selinux sys tmp usr
Result: 0
Backticks will capture the output of the command and not print it:
[daniel@tux /]$ perl -e '$ls = `ls`; print "Result: $ls\n"'
Result: bin
boot
dev
etc
home
lib
etc...
Update: If you want to print the name of the command being system() 'd as well, I think Rudd's approach is good. Repeated here for consolidation:
sub execute {
my $cmd = shift;
print "$cmd\n";
system($cmd);
}
my $cmd = $ARGV[0];
execute($cmd);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17225",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: What tools do you use to develop C++ applications on Linux? I develop C++ applications in a Linux environment. The tools I use every day include Eclipse with the CDT plugin, gdb and valgrind.
What tools do other people use? Is there anything out there for Linux that rivals the slickness of Microsoft Visual Studio?
A: emacs, cmake, gdb, git, valgrind. It may not be as slick as Visual Studio but it works well, and it's easy to add functionality via bash scripting or emacs lisp.
A: Right now I use Qt Creator. It's cross-platform and integrates pretty nicely with Qt, though (of course) you have the option of creating a standalone application.
A: g++ and make
A: I believe KDevelop is what would be the closest from Microsoft Visual Studio.
You get pretty much everything (except, unfortunately, the VS debugger, which is indeed a killer feature).
It's already mature and its development is pretty fast and promising.
It actually implements a few things you won't even see in VS. For instance, open a header file and a cpp file in vertical tile mode and have the cursor synchronized in both,
i.e.: when you select a function's prototype, you always have its implementation on your right.
KDevelop is a KDE project, but it runs on Gnome. Anjuta is an equivalent project on Gnome, but I find it unusable for real work. For the rest of the stack: gcc, make, valgrind, ddd (a gdb front-end) and python for scripting my code.
If you're OK with trying a different approach than the VS IDE, you may consider trying vim. It takes a long time to get used to, though.
A: Eclipse CDT is really quite nice. I still have to resort to Emacs from time to time but I really love the indexing, call trees, type trees, refactoring support (though it's nothing like Java refactoring), etc. Syntax highlighting is quite powerful if you customize it (can have separate colors for local variables, function arguments, methods, etc.). The code completion is really handy too. I've mostly used Eclipse 3.3 but 3.4 is great too.
Also, mostly I'm using this for a somewhat large project (~1e6 sloc) -- it may be overkill for toy projects.
A: I use a bunch of terminal windows. I have vim running on interesting source files, make and g++ output on another for compiler errors or a gdb session for runtime errors. If I need help finding definitions I run cscope and use vim's cscope support to jump around.
Eclipse CDT is my second choice. It's nice but huge, ungainly and slow compared to vim.
Using terminal windows and vim is very flexible because I do not need to carry 400 MB of Java around with me; I can use SSH sessions from anywhere.
I use valgrind when I need to find a memory issue.
I use strace to watch what my software is doing on a system call level. This lets me clean up really stupid code that calls time(0) four times in a row or makes too many calls to poll() or non-blocking read() or things like calling read() on a socket to read 1 byte at a time. (That is super inefficient and lazy!)
I use objdump -d to inspect the machine code, especially for performance sensitive inner loops. That is how I find things like the slowness of the array index operator on strings compared to using iterators.
I use oprofile to try to find hot spots in optimized code, I find that it often works a little better than gprof, and it can do things like look for data and instruction cache misses. That can show you where to drop some helpful prefetch hints using GCC's __builtin_prefetch. I tried to use it to find hot mis-predicted branches as well, but couldn't get that to work for me.
Update: I've found that perf works way better than oprofile. At least on Linux. Learn to use perf and love it as I do.
A: When I developed C++ code on linux, I used emacs as an editor and as a gdb front-end. Later, my company purchased SlickEdit for all of the programmers, which is a nice IDE, maybe not on a par with Visual Studio. We used gdb extensively, with the occasional use of valgrind and gprof. I highly recommend using a scripting language to complement C++ on day-to-day tasks. I went from PERL to python to the current ruby. All of them get the job done and have strengths where C++ has weaknesses. And, of course, you have all the shell commands at your disposal. I daily use sort(), uniq(), awk, etc. And one more recommendation is ack, a grep successor.
A: You need a standard toolchain + an IDE.
There's nothing much to say about the standard toolchain. Just install e.g. on Ubuntu/Debian via
aptitude install build-essential
The interesting part is about an IDE.
My personal impression is that nowadays - in the 21st century - vi/emacs/make/autotools/configure is not enough for developing software projects above a certain size (... and yes, please please please blame me for the heritage heresy ...).
Which IDE to choose is simply a matter of taste. You will find a lot of threads on SOF. Here is a permalink discussing which C++ IDE might be the "best": C++ IDE for Linux.
A: I use the NetBeans C++ plugin, which is superb and integrates with CVS and SVN. The project management side is also very good. I was up and running with it in minutes. It's an impressive IDE but being Java, can be a little sluggish.
A: *
*GCC
*GHC
*Vim
*Cmake
*cscope
*GDB
*Valgrind
*strace
*git
Is there really anything else you could possibly need?
A: g++ of course, but also Code::Blocks which is an absolutely fantastic cross platform IDE (Win32, *nix, Mac).
I use the nightly (more like weekly lately) builds from the SVN. It has almost all the bells and whistles you would expect from a modern IDE. It's really a truly fantastic Open Source project.
Also, on Linux you get the joy of using Valgrind which is probably the best memory tracker (it does other things as well) tool that money can buy. And it's free :) Track down memory leaks and more with ease.
And there is just so much more! Linux is such a great dev platform :)
(edit) Just realized you mentioned Valgrind in your question, silly me for reading it too fast.
A: *
*Bash
*Vim
*Make
*G++
*GDB
*Valgrind
*Gprof
*svn
Never a GUI to be seen except a good terminal with tab support; keep code, debugger, output, etc all in separate windows and tab back and forwards really quickly.
A: In addition to many already listed, we use the autoconf toolset for deploying our program to users.
A: *
*CMake
*vim
*g++
*kdevelop (compiled from SVN daily!)
*Mercurial when I can, SVN when I have to, git when there's really no other choice (contributing to project that uses it)
*valgrind
A: When develop C++ apps for linux, i prefer using a bunch of cmdline tools.
Vim extended with a lot of plugins.
Gdb with ddd, valgrind, libefence
and SCons (automake is a pain in ... you know where)
A: *
*g++
*emacs
*bash command line
*gdb-mode in emacs (type M-X gdb)
*make
A: Anjuta is a nice idea that makes Linux C++ dev quite enjoyable as well.
A: I'm another for KDevelop. It has a very diverse set of tools. I'm not real familiar with VS and whether or not it has integrated console access via its interface, but KDevelop can allow you to run a konsole inside the IDE, which I always find very useful. You could always give Netbeans a go now that it has full C/C++ support.
Other than that, I make good use of gdb and its gui-based version ddd for problems with the code or other bugs. For throw-away programs, like others that already posted - I use g++ at the terminal and make for some larger projects.
A: Eclipse CDT for editing, SVN for source control, SCons for build management, CruiseControl for automated builds and a proprietary unit test framework.
A: I use Eclipse+CDT on Windows and Cygwin + g++ to cross compile for Linux.
(Cross compilers are built using crosstool, a nice script-set for generating cross compilers)
A: My first choice is always emacs with a lot of plugins: ECB gives some buffers for navigating the folders, plus gdb, svn or git integration... This is my first choice when using Python too.
As a second choice, NetBeans with the C++ plugin is very simple and quite powerful, but a bit too heavy, I think.
A: I use whatever is on the system. I prefer Eclipse CDT as an editor, and g++ as a compiler. However, if eclipse is not an option I use vi, which is fine as well.
A: The Eclipse incubation project Linux Tools integrates C/C++ Development tools.
It's a GUI plugin to integrate tools like Valgrind, GProf, GCov, SystemTap etc into the Eclipse C++ CDT IDE.
Search for Eclipse Helios IDE for C/C++ Linux Developers (includes Incubating components), (120 MB)
Found this after trying to build Linux Tools using the .psf file available.
Thankfully found this package hiding right at the bottom of the Helios packages download page.
Note that this is an incubation project so you can expect the support to only get better with time.
See Also:
For updated info on installing and using Eclipse Linux Tools Click Here
A: FlexeLint for static code analysis, in addition to mentioned above:
Eclipse with CDT, gcc, make, gdb, valgrind, bash shell.
Source version control: Clearcase or git, depending on project.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17228",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "55"
} |
Q: Why shouldn't I "bet the future of the company" on shell scripts? I was looking at http://tldp.org/LDP/abs/html/why-shell.html and was struck by:
When not to use shell scripts
...
*
*Mission-critical applications upon which you are betting the future of the company
Why not?
A: I kind of think the article points out a really good list of the reasons when not to use shell scripts - with the single mission critical bullet you point out being more of a conclusion based on all the other bullets.
With that said, I think the reason you do not want to build a mission-critical application on a shell script is that even if none of the other bullet points apply today, any application that is going to be maintained over a period of time will evolve to the point of being bitten by at least one of those potential pitfalls, and there won't be anything you can do about it without a complete do-over, leaving you wishing you had used something more industrial-strength from the beginning.
A: Scripts are nothing more or less than computer programs. Some would argue that scripts are less sophisticated. These same folks will usually admit that you can write sophisticated code in scripting languages, but that these scripts are really not scripts any more, but full-fledged programs, by definition.
Whatever.
The correct answer, in my opinion, is "it depends". Which, by the way, is the same answer to the converse question of whether you should place your trust in compiled executables for mission critical applications.
Good code is good, and bad code is bad - whether it is written as a Bash script, a Windows CMD file, in Python, Ruby, Perl, Basic, Forth, Ada, Pascal, Common Lisp, Cobol, or compiled C.
Which is not to say that choice of language doesn't matter. There are very good reasons, sometimes, for choosing a particular language or for compiling vs. interpreting (performance, scalability, capability, security, etc). But, all things being equal, I would trust a shell script written by a great programmer over an equivalent C++ program written by a doofus any day of the week.
A: Obviously, this is a bit of a straw man for me to knock down. I really am interested in why people believe shell scripts should be avoided in "mission-critical applications", but I can't think of a compelling reason.
For instance, I've seen (and written) some ksh scripts that interact with an Oracle database using SQL*Plus. Sadly, the system couldn't scale properly because the queries didn't use bind variables. Strike one against shell scripts, right? Wrong. The issue wasn't with the shell scripts but with SQL*Plus. In fact, the performance problem went away when I replaced SQL*Plus with a Perl script that connected to the database and used bind variables.
I can easily imagine that putting shell scripts in spacecraft flight software would be a bad idea. But Java or C++ may be equally poor choices. The best choice would be whatever language (assembly?) is usually used for that purpose.
The fact is, if you use any flavor of Unix, you are using shell scripts in mission-critical situations assuming you think booting up is mission critical. When a script needs to do something that shell isn't particularly good at, you put that portion into a sub-program. You don't throw out the script wholesale.
A: It is probably shell scripts that help take a company into the future. I know just from a programming standpoint that I would waste a lot of time doing repetitive tasks that I have delegated to shell scripts. For example, I know most of the subversion commands for the command line but if I can lump all those commands into one script I can fire at will I save time and mental energy.
Like a few other people have said language is a factor. For my short don't-want-to-remember steps and glue programs I completely trust my shell scripts and rely upon them. That doesn't mean I'm going to build a website that runs bash on the backend but I will surely use bash/ksh/python/whatever to help me generate the skeleton project and manage my packaging and deployment.
A: When I read this quote I focus on the "applications" part rather than the "mission critical" part.
I read it as saying bash isn't for writing applications; it's for, well, scripting. So sure, your application might have some housekeeping scripts but don't go writing critical-business-logic.sh because another language is probably better for stuff like that.
A: Using shell scripts is fine when you're using their strengths. My company has some class 5 soft switches and the call processing code and the provisioning interface is written in java. Everything else is written in KSH - DB dumps for backups, pruning, log file rotation, and all the automated reporting. I would argue that all those support functions, though not directly related to call-path, are mission critical. Especially the DB interaction. If something went wrong with the DB-interaction code and dumped the call routing tables it could put us out of business.
But nothing ever does go wrong, because shell scripts are the perfect language for stuff like this. They're small, they're well understood, manipulating files is their strength, and they're stable. It's not like KSH09 is going to be a complete rewrite because someone thinks it should compile to byte code, so it's a stable interface. Frankly, the provisioning interface written in Java goes wonky fairly often and the shell scripts have never messed up that I can remember.
A: I would wager the author is showing they are uncomfortable with certain aspects of quality w.r.t. shell scripting. Who unit tests Bash scripts, for example?
Also scripts are rather heavily coupled with the underlying operating system, which could be something of a negative thing.
A: No matter what, we all need a flexible tool to interact with the OS. It is human-readable interaction with an OS that we use; it's like using a screwdriver with screws. The command line will always be a tool we need, whether as an admin, a programmer, or a network engineer. Look at Windows: they even expanded on it with PowerShell.
A: Scripts are inappropriate for implementing certain mission-critical functions, since they must have both +r and +x permissions to function. Executables need only have +x.
The fact that a script has +r means users might be able to make a copy of the script, edit/subvert it, and execute their edited Cuckoo's-Egg version.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17231",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How to create an encrypted ZIP file? I am creating an ZIP file with ZipFile in Python 2.5, it works OK so far:
import zipfile, os
locfile = "test.txt"
loczip = os.path.splitext (locfile)[0] + ".zip"
zip = zipfile.ZipFile (loczip, "w")
zip.write (locfile)
zip.close()
But I couldn't find how to encrypt the files in the ZIP file.
I could use system and call PKZIP -s, but I suppose there must be a more "Pythonic" way. I'm looking for an open source solution.
A: The duplicate question: Code to create a password encrypted zip file? has an answer that recommends using 7z instead of zip. My experience bears this out.
Copy/pasting the answer by @jfs here too, for completeness:
To create encrypted zip archive (named 'myarchive.zip') using open-source 7-Zip utility:
import subprocess

rc = subprocess.call(['7z', 'a', '-mem=AES256', '-pP4$$W0rd', '-y', 'myarchive.zip'] +
    ['first_file.txt', 'second.file'])
To install 7-Zip, type:
$ sudo apt-get install p7zip-full
To unzip by hand (to demonstrate compatibility with zip utility), type:
$ unzip myarchive.zip
And enter P4$$W0rd at the prompt.
Or the same in Python 2.6+:
>>> zipfile.ZipFile('myarchive.zip').extractall(pwd='P4$$W0rd')
A: This thread is a little bit old, but for people looking for an answer to this question in 2020/2021.
Look at pyzipper
A 100% API compatible replacement for Python’s zipfile that can read and write AES encrypted zip files.
7-zip is also a good choice, but if you do not want to use subprocess, go with pyzipper...
A: pyminizip works great for creating a password-protected zip file. For unzipping, it fails in some situations. Tested on Python 3.7.3.
Here, I used pyminizip for encrypting the file:
import pyminizip
compression_level = 5 # 1-9
pyminizip.compress("src.txt",'src', "dst.zip", "password", compression_level)
For unzip, I used zip file module:
from zipfile import ZipFile
with ZipFile('/home/paulsteven/dst.zip') as zf:
zf.extractall(pwd=b'password')
A: I created a simple library to create a password encrypted zip file in python. - here
import pyminizip
compression_level = 5 # 1-9
pyminizip.compress("src.txt", "dst.zip", "password", compression_level)
The library requires zlib.
I have checked that the file can be extracted in WINDOWS/MAC.
A: You can use pyzipper for this task and it will work great when you want to encrypt a zip file or generate a protected zip file.
pip install pyzipper
import pyzipper
def encrypt_():
secret_password = b'your password'
with pyzipper.AESZipFile('new_test.zip',
'w',
compression=pyzipper.ZIP_LZMA,
encryption=pyzipper.WZ_AES) as zf:
zf.setpassword(secret_password)
zf.writestr('test.txt', "What ever you do, don't tell anyone!")
with pyzipper.AESZipFile('new_test.zip') as zf:
zf.setpassword(secret_password)
my_secrets = zf.read('test.txt')
The strength of the AES encryption can be configured to be 128, 192 or 256 bits. By default it is 256 bits. Use the setencryption() method to specify the encryption kwargs:
def encrypt_():
secret_password = b'your password'
with pyzipper.AESZipFile('new_test.zip',
'w',
compression=pyzipper.ZIP_LZMA) as zf:
zf.setpassword(secret_password)
zf.setencryption(pyzipper.WZ_AES, nbits=128)
zf.writestr('test.txt', "What ever you do, don't tell anyone!")
with pyzipper.AESZipFile('new_test.zip') as zf:
zf.setpassword(secret_password)
my_secrets = zf.read('test.txt')
Official Python ZipFile documentation is available here: https://docs.python.org/3/library/zipfile.html
A: @tripleee's answer helped me, see my test below.
This code works for me on Python 3.5.2 on Windows 8.1 (with the 7z path added to the system PATH).
rc = subprocess.call(['7z', 'a', output_filename + '.zip', '-mx9', '-pSecret^)'] + [src_folder + '/'])
With two parameters:
*
*-mx9 means max compression
*-pSecret^) means the password is Secret^). The ^ is an escape for ) on Windows, but when you unzip, you will need to type in the ^.
Without the ^, Windows will not apply the password when 7z.exe creates the zip file.
Also, if you want to use the -mhe switch, you'll need the file format to be 7z instead of zip.
I hope that may help.
A: 2022 answer:
I believe this is an utterly mundane task and therefore should be a one-liner. I abstracted away all the frivolous details in a library that is as powerful as a bash terminal.
from crocodile.toolbox import Path
file = Path(r'my_string_path')
result_file = file.zip(pwd="lol", use_7z=True)
*
*When the 7z flag is raised, it gets called behind the scenes.
*You don't need to learn 7z command line syntax.
*You don't need to worry about installing 7z; the library does that automatically if it's not installed (tested on Windows so far).
A: You can use the Chilkat library. It's commercial, but has a free evaluation and seems pretty nice.
Here's an example I got from here:
import chilkat
# Demonstrates how to create a WinZip-compatible 128-bit AES strong encrypted zip
zip = chilkat.CkZip()
zip.UnlockComponent("anything for 30-day trial")
zip.NewZip("strongEncrypted.zip")
# Set the Encryption property = 4, which indicates WinZip compatible AES encryption.
zip.put_Encryption(4)
# The key length can be 128, 192, or 256.
zip.put_EncryptKeyLength(128)
zip.SetPassword("secret")
zip.AppendFiles("exampleData/*",True)
zip.WriteZip()
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17250",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "44"
} |
Q: Hyperlinks displaced on IE7 Browse to a webpage with hyperlinks using IE (I am using IE7). Once on the page, enlarge the fonts using Ctrl + mouse wheel. Now when you try to hover over the hyperlinks, they are laterally displaced to the right. To click on a link, I have to move the mouse to the right until the cursor turns into a hand.
Does anyone have a comment on this?
I was browsing the following page.
It is the 2nd hyperlink in the body of the article. (the link text is "here")
A: IE7 doesn't handle zoom correctly. You can see this error on this page (I mean the page you're reading right now) if you zoom large enough: view the logout | about link at the top, hover over it, hover off to the right, then back over.
A: All of the links on that page are displaced to the right on my copy of IE7 (7.0.6001.18000) even before I enlarge or shrink the fonts. Whereas other pages act normally. (My test page was http://www.frito-lay.com/fl/flstore/cgi-bin/good_questions.htm).
It appears to be something specific to the page.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17269",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: REALLY Simple Website--How Basic Can You Go? Although I've done programming, I'm not a programmer. I've recently agreed to coordinate getting a Website up for a club. The resources are--me, who has done Web content maintenance (putting content into HTML and ColdFusion templates via a gatekeeper to the site itself; doing simple HTML and XML coding); a serious Web developer who does database programming, ColdFusion, etc., and talks way over the heads of the rest of us; two designers who use Dreamweaver; the guy who created the original (and now badly broken) site in Front Page and wants to use Expression Web; and assorted other club members who are even less technically inclined.
What we need up first is some text and graphics (a gorgeous design has been created in Dreamweaver), some links (including to existing PDF newsletters for download), and maybe hooking up an existing Blogspot blog. Later (or earlier if it's not hard), we may add mouseover menus to the links, a gallery, a calendar, a few Mapquest hotlinks, and so on.
My question--First, is there any real problem with sticking with HTML and jpegs for the initial site? Second, for the "later" part of the site development, what's the simplest we can go with? Third, are there costs in doing this the simple way that will make us regret it down the road? Also, is there a good site/resource where I can learn more about this from a newbie perspective?
A: Plain old HTML is fine, just as long as you don't use tags like blink and marquee.
A: I personally love tools like CityDesk.
And I'm not just plugging Joel. (There are others out there in this class I'm sure.) The point is they make making a static website very easy:
*
*The structure is just a filesystem structure
*pages have templates to consolidate formatting
*all resources are contained in one file
*easy and fast Preview and Publish functions
For a dynamic collaborative site, I would just install one of many open source CMSs available on shared hosting sites.
A: If you're familiar with html/javascript basics I'd look into a CMS - wordpress, drupal, joomla, nuke, etc. All of these are free. Very often your web hosting company will install one of these by default which takes all of the hard part out of your hands. Next is just learning to customize the system and there's tons of docs out there for any of those systems.
All that being said there is noting wrong with good old fashioned html.
A: In addition to some of the great content management systems already mentioned, consider cms made simple.
It makes it very easy to turn a static site into a content managed site (which sounds like exactly what you might need to do in the future), and the admin area is very easy to use. Our clients have found it much simpler to use than the likes of Joomla.
It's also free and open source.
Good luck!
A: If you don't require any dynamic content, heck, if you don't plan on editing the content more than once a week, I'd say stick to basic HTML.
Later, you'd probably want a basic, no-fuss and easily installable CMS. The brand really depends on the platform (most likely PHP/Rails/ASP), but most of them can be found by typing " CMS" into Google. Try prefixing it with "free" or "open source" if you want.
I'm pretty sure you can do all this for absolutely free. Most PHP and Ruby CMS's are free and web hosting is free/extremely cheap if you're not demanding.
And last/best tip: Find someone who has done this before, preferably more than once. He'll probably set you up so you never have to look at anything more complicated than a WYSIWYG editor.
A: There's no reason to not go with plain old HTML and JPGs if you don't know any server side scripting languages. Also, once you want to get more advanced, most cheap hosting services have tools that can be installed with one click, and provide things like blogs, photo galleries, bulletin boards (PHPBB), and even content management tools like Joomla.
A: I had the same problem myself, I was just looking for something really easy to smash together a website quickly. First I went with just plain old HTML, but then I realised a simple CMS would be better.
I went for Wordpress. Wordpress is mostly known as a blogging platform, but in my opinion it is really great as a deadly simple CMS as well.
A: why not simply use Google pages?
Here is an example of a website I did, takes about 2 hours, easy to maintain (not that I do (-: ) and FREE.
I think that suggesting you mess with HTML for what you need is crazy!
A: Plain HTML is great, gives you the most control. If you want to make updating a bit easier though, you could use SSI. Most servers have this enabled. It basically let's you attach one file to many pages.
For example, you could have your menu in navigation.html and every page would include this file. That way you wouldn't have to update this one file on every page each time you need to update.
<!--#include virtual="navigation.html" -->
A: I agree with the other commenters that a CMS might be useful to you, however as I see it, probably a solution like Webby might do it for you. It generates plain HTML pages based on Templates. Think about it as a "webpage preprocessor" which outputs plain HTML files. It has most of the advantages of using a server-based CMS, but without a lot of load on the server, and making it easy for you to change stuff on any of the templates you might use.
A: *
*It's fine
*Rails (or purchase / use a CMS)
*Not unless you start becoming crazy-popular
*It really depends on what you go with for 2. Rails has a plethora of tutorials on the net and any product you go with will have its own community etc.
To be perfectly honest though, if the dynamic part is someone elses blog and you move the gallery out into flikr you may find that you can actually live with large parts of it being static HTML for a very long time.
A: If you want to implement a website with user profiles/logins, extensions, galleries, etc. as a newbie, then a CMS like Joomla or similar is good. But if you presently have only static content, then it's fine to go with good old HTML. About JPEG: presently it's better to use PNG or GIF, as they are less bulky.
Also, about your query on shifting to server scripts: when you have database-driven material or other things that require an advanced programming language, just put the PHP scripts inside your pages and rename the file as .php. That's it; there's no loss to your HTML data.
Do go ahead and launch your site.
A: Dude, you're talking about HTML, obviously you'll be styling your content with CSS. Wait till you run into IE issues and god forbid your client wants ie6 compatibility.
Go with the HTML for now, I'm sure you guys will hack it through. Our prayers are with you.
A: Personally, I'd never use JPEG images on a website, mainly for three reasons:
*
*JPEGs often contain artifacts.
*Quality is often proportional to filesize.
*They do not support alpha transparency.
That said, I'd recommend using PNGs for images since they're lossless and have a 24-bit palette (meaning full colors + alpha transparency). The only quirk is that IE6 and below do not support native alpha for PNGs; however, this can be resolved by running a JavaScript fix.
As for designing a website, there are both pros and cons. I suggest you read through:
*
*37 Signal's Why We Skip Photoshop
*Jeff Croft's Why We Don't Skip Photoshop
As for newbie resources, I'd recommend you flip through the pages at W3 Schools.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17274",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: iFrame Best Practices I have a large, hi-def JavaScript-intensive image banner for a site I'm designing. What is everyone's opinion of using iframes so that you incur the load time only once? Is there a CSS alternative to the iframe?
Feel free to preview the site.
It is very much a work in progress.
A:
I should also have mentioned that I would like the banner rotation to keep moving. When the visitor clicks on a link, the banner rotation starts over. It would be nice if the "animation" kept rotating, regardless of the page the user visits.
Well, in that case I would strongly recommend not doing that. The only real way of achieving that is to have the actual website content in the iframe, which means that you suddenly have lots of negative sides to the site: not being able to bookmark urls easily due to the address bar not changing; accessibility concerns; etc
I think you'll find that most people won't care that it reloads again. Once a visitor lands on your website, they'll marvel at the wonderful banner immediately, and then will continue to ignore it while they browse your site - until an image they haven't seen appears and distracts them away from your content.
Keep the rotation random enough, and with enough images, and people will stop to look at it from whatever page they're on.
A: I find the main challenge with iFrame headers is resizing. Since the font in your header is of static size, I don't see a problem with using an iFrame. Although I'm not sure if it's really intensive enough to be worth it.
A: Well, the browser appears to cache all seven banner images upon the first load, and serves them from the cache (for each subsequent page) thereafter. I don't think you have a problem :D
Try it out with Firebug's Net monitoring tool in Firefox.
A: While using IFrames as a sort of master page/template for your pages might be a good thing, IFrames have a known negative impact on searchability/SEO.
It might also be unnecessary in the first place because once your images are loaded the first time (and with the large high-def images you have on your site, that would be slow no matter what you do) the images are cached by browsers and will not be reloaded until the user clears their cache or does a Ctrl+F5.
A: This may work without CSS also, but if you use CSS to load the background and your server is configured correctly, the image should already only be downloaded once.
Usually the browser will request a resource by asking for it only if it has not been modified since the last time it was downloaded. In this case, the only things sent back and forth are the HTTP headers, no content.
If you want to ensure the image is only downloaded once, add an .htaccess or an apache2.conf rule to make the image expire a few days into the future so that users will only request it again if their cache is cleared or the content expiration date passes. An .htaccess file is probably excessive to use in your case, though results may vary.
A: You could have it load the main page once, then asynchronously load the other elements when needed (ajax). If you did that, an iFrame would not be necessary. Here is an example of loading only the new material.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17289",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: How can I sort an array of double pointers based on the values they point to? I am trying to build a function in C/C++ to sort an array and replace each value with its "score" or rank. It takes in a double pointer array to an array of ints, and sorts the double pointers based on the dereferenced value of the integers. I have tried quite a few times to make it work, but can't get it down. Once again, it must sort the double pointers based on the values they point to. This is what I have:
void SortArray( int ** pArray, int ArrayLength )
{
int i, j, flag = 1; // set flag to 1 to begin initial pass
int * temp; // holding variable orig with no *
for(i = 1; (i <= ArrayLength) && flag; i++)
{
flag = 0;
for (j = 0; j < (ArrayLength -1); j++)
{
if (*pArray[j+1] > *pArray[j]) // ascending order simply changes to <
{
temp = &pArray[j]; // swap elements
pArray[j] = &pArray[j+1];
pArray[j+1] = &temp;
flag = 1; // indicates that a swap occurred.
}
}
}
}
A: You're close. You're referencing the address of the array items when you swap, which isn't necessary. The items in the array are pointers, and that's what needs to be swapped.
See below:
void SortArray( int ** pArray, int ArrayLength )
{
int i, j, flag = 1; // set flag to 1 to begin initial pass
int * temp; // holding variable orig with no *
for(i = ArrayLength - 1; i > 0 && flag; i--)
{
flag = 0;
for (j = 0; j < i; j++)
{
if (*pArray[j] > *pArray[j+1]) // ascending order simply changes to <
{
temp = pArray[j]; // swap elements
pArray[j] = pArray[j+1];
pArray[j+1] = temp;
flag = 1; // indicates that a swap occurred.
}
}
}
}
Also, check out this lovely blog post on Bubble Sorting in case you're interested (sorry, shameless plug :)). Hope that helps you with your homework ;)
Edit: Note the subtle "optimisation" where you count back from the array length and only increment up until 'i' in the inner loop. This saves you from needlessly reparsing items that have already been sorted.
A:
Heh, this isn't homework.
If that's the case then consider using the STL to manage arrays and sort. It's easier to develop and maintain, and the std::sort algorithm is asymptotically faster than bubble sort.
A: You should consider using std::swap() to do your swapping. If you do, call it as such:
swap( obj1, obj2 );
rather than:
std::swap( obj1, obj2 );
As the first calling semantic will allow the proper namespace lookup to find the correct overload if one exists. Be sure to have either:
using namespace std;
or:
using std::swap;
somewhere.
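For illustration, here is a minimal sketch of the bubble-sort inner loop from the accepted answer rewritten with swap; it assumes the same int** pArray / ArrayLength signature from the question, drops the early-exit flag for brevity, and the function name is just a placeholder:
#include <algorithm> // std::swap (found in <utility> from C++11 onwards)

void SortArrayWithSwap(int** pArray, int ArrayLength)
{
    using std::swap;
    for (int i = ArrayLength - 1; i > 0; i--)
    {
        for (int j = 0; j < i; j++)
        {
            if (*pArray[j] > *pArray[j + 1]) // compare the pointed-to values
            {
                swap(pArray[j], pArray[j + 1]); // swap the pointers themselves
            }
        }
    }
}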
A:
Hmm, I don't have much experience with the STL. Could you give an example?
This program creates a vector of ints, sorts it, and displays the results.
#include <vector>
#include <algorithm>
#include <iostream>
using namespace std;
int main()
{
vector<int> vec;
vec.push_back(7);
vec.push_back(5);
vec.push_back(13);
sort(vec.begin(), vec.end());
for (vector<int>::size_type i = 0; i < vec.size(); ++i)
{
cout << vec[i] << endl;
}
}
A: To complete Brian Ensink's post, you'll find the STL full of surprises. For example, the std::sort algorithm:
#include <iostream>
#include <vector>
#include <algorithm>
void printArray(const std::vector<int *> & p_aInt)
{
for(std::vector<int *>::size_type i = 0, iMax = p_aInt.size(); i < iMax; ++i)
{
std::cout << "i[" << static_cast<int>(i) << "] = " << reinterpret_cast<unsigned int>(p_aInt[i]) << std::endl ;
}
std::cout << std::endl ;
}
int main(int argc, char **argv)
{
int a = 1 ;
int b = 2 ;
int c = 3 ;
int d = 4 ;
int e = 5 ;
std::vector<int *> aInt ;
// We fill the vector with variables in an unordered way
aInt.push_back(&c) ;
aInt.push_back(&b) ;
aInt.push_back(&e) ;
aInt.push_back(&d) ;
aInt.push_back(&a) ;
printArray(aInt) ; // We see the addresses are NOT ordered
std::sort(aInt.begin(), aInt.end()) ; // DO THE SORTING
printArray(aInt) ; // We see the addresses are ORDERED
return EXIT_SUCCESS;
}
The first printing of the array will show unordered addresses. The second, after the sort, will show ordered addresses. On my compiler, we have:
i[0] = 3216087168
i[1] = 3216087172
i[2] = 3216087160
i[3] = 3216087164
i[4] = 3216087176
i[0] = 3216087160
i[1] = 3216087164
i[2] = 3216087168
i[3] = 3216087172
i[4] = 3216087176
Give STL's <algorithm> header a look http://www.cplusplus.com/reference/algorithm/
You'll find a lot of utilities. Note that there are other implementations of containers that could suit you better (std::list? std::map?).
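Note that sorting a vector<int *> with the default comparison, as above, orders the pointers by their addresses. If you want them ordered by the values they point to (what the original question asks for), you can pass std::sort a comparison function; a minimal sketch, with hypothetical helper names:
#include <algorithm>
#include <vector>

// Order two int pointers by the ints they point to.
bool lessByPointee(const int* lhs, const int* rhs)
{
    return *lhs < *rhs;
}

// After this call, dereferencing the pointers in order yields ascending values.
void sortByPointee(std::vector<int*>& aInt)
{
    std::sort(aInt.begin(), aInt.end(), lessByPointee);
}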
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17299",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What areas of specialization within programming would you recommend to a beginner I am a student studying software development, and I feel programming, in general, is too broad of a subject to try to know everything. To be proficient, you have to decide which areas to focus your learning and understanding. Certain skill sets synergize with each other, like data-driven web development and SQL experience. However, all the win32 API experience in the world may not directly apply to linux development. This leads me to believe, as a beginning programmer, I should start deciding where I want to specialize after I have general understanding of the basic principles of software development.
This is a multi-part question really:
*
*What are the common specializations within computer programming and software development?
*Which of these specializations have more long-term value, both as a foundation for other specializations and/or as marketable skills?
*Which skill sets complement each other?
*Are there any areas of specialization that hinder your ability of developing other areas of specialization.
A: Not to directly reject your premise but I actually think being a generalist is a good position in programming. You will certainly develop expertise in specific areas but it is likely to be a product of either personal interest or work necessity. Over time the stuff you are able to transfer across languages and problem domains is at the heart of what makes good programmers.
A: I think the more important question is: What areas of specialization are you most interested in?
Once you know, begin learning in that area!
A: I would think the greatest skill of all would be to adapt with the times, because if your employer can see this potential in you then they would be wise to hold on tightly.
That said, I would advise you dive into the area YOU would enjoy. Learning is driven by enthusiasm.
Since my current employ is with an internet provider, I've found networking knowledge particularly helpful. But someday I'd like to play with 3D graphics (not necessarily games).
A: Ben, almost all seasoned programmers are still students of programming. You never stop learning when you are a developer. But if you are really starting off in your career, then you should be the least worried about the specialization thing. No particular set of APIs, frameworks, and skills is going to give you a long-term existence in the field. Technology changes a lot, and you should be versatile and flexible enough to learn anything. The knowledge you acquire on one platform/API/framework doesn't die off; you can apply the skills to the next great platform/API/framework.
That being said, you should just stop worrying about the future and concentrate on the basics. Data structures, algorithm analysis and design, compiler design, and operating system design are the bare minimum you need. Further, you should be willing to go back and read the books in those fields at any time in your career. That's all that is required. Good luck.
Sorry if I sounded like a big-ass advisor, but that's what I think. :-)
A: Go as deep as you can starting off in one environment, win32, .net, Java, Objective C... whatever.
It is important to build the deep understanding of how X works... so that you can translate the same concepts into other languages or platforms/environments, if you so desire.
"Are there any areas of specialization that hinder your ability of developing other areas of specialization." Sort of, but nothing permanent i think.
Since I am relatively green myself (less than 4 years) I come from a really OOP mindset. I've rarely jumped out of .NET, so I had a hard time on one job when coming into contact with embedded code. With embedded programmers fearing object creation and the performance loss of inheritance. I had to learn the environment, seriously low memory and slow clock times, they were coming from. Those are times to grow, I had a better time at it because i understood my area pretty well.
I will say if you pick something to specialize in for marketability and money, you will probably burn out fast. If you do start to specialize pick something you enjoy. I love GUI programing and hate server side stuff, my buddy is the opposite, but we both love our jobs. If he had to do my job, and I his, we would both go insane out of boredom.
A: As a student I'd recommend forgetting about what you're programming and focusing on the software process itself. Understand how to analyse a problem and ask the right questions; learn every design pattern you can and actually apply them all to gain a real understanding and appreciation of object-oriented design; write tests and then code only as much as you need to in order to make the tests pass. I think the best way to really learn is to just code as much as you can - the language and the domain aren't important, browse sourceforge and freshmeat for any interesting-sounding projects and get involved. What's important is understanding the fundamentals of software engineering.
And yes, this includes C. Or Assembler. This is the easiest way to get a good understanding of how your computer works and what your high-level code is actually doing.
Finally, never stop learning - Service-oriented architecture, inversion of control, domain-specific languages, business process management are all showing huge benefits so they're important to be aware of - But by the time you finish studying and join the workforce who knows what the next big thing will be?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17320",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: What is the most effective way for float and double comparison? What would be the most efficient way to compare two double or two float values?
Simply doing this is not correct:
bool CompareDoubles1 (double A, double B)
{
return A == B;
}
But something like:
bool CompareDoubles2 (double A, double B)
{
double diff = A - B;
return (diff < EPSILON) && (-diff < EPSILON);
}
Seems to waste processing.
Does anyone know a smarter float comparer?
A: Be extremely careful using any of the other suggestions. It all depends on context.
I have spent a long time tracing bugs in a system that presumed a==b if |a-b|<epsilon. The underlying problems were:
*
*The implicit presumption in an algorithm that if a==b and b==c then a==c.
*Using the same epsilon for lines measured in inches and lines measured in mils (.001 inch). That is a==b but 1000a!=1000b. (This is why AlmostEqual2sComplement asks for the epsilon or max ULPS).
*The use of the same epsilon for both the cosine of angles and the length of lines!
*Using such a compare function to sort items in a collection. (In this case using the builtin C++ operator == for doubles produced correct results.)
Like I said: it all depends on context and the expected size of a and b.
By the way, std::numeric_limits<double>::epsilon() is the "machine epsilon". It is the difference between 1.0 and the next value representable by a double. I guess that it could be used in the compare function but only if the expected values are less than 1. (This is in response to @cdv's answer...)
Also, if you basically have int arithmetic in doubles (here we use doubles to hold int values in certain cases) your arithmetic will be correct. For example 4.0/2.0 will be the same as 1.0+1.0. This is as long as you do not do things that result in fractions (4.0/3.0) or do not go outside of the size of an int.
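To make the scaling point concrete, here is a small sketch (with made-up numbers) showing how a fixed epsilon that declares two lengths equal at one scale disagrees once both are multiplied by 1000:
#include <cmath>
#include <cstdio>

int main()
{
    const double eps = 1e-6;            // a fixed, unscaled tolerance
    double a = 1.0000003, b = 1.0;      // differ by about 3e-7, which is < eps

    bool equalSmallScale = std::fabs(a - b) < eps;            // true
    bool equalLargeScale = std::fabs(1000*a - 1000*b) < eps;  // false: the difference is now about 3e-4

    std::printf("a==b: %d, 1000a==1000b: %d\n", equalSmallScale, equalLargeScale);
    return 0;
}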
A: For a more in depth approach read Comparing floating point numbers. Here is the code snippet from that link:
// Usable AlmostEqual function
bool AlmostEqual2sComplement(float A, float B, int maxUlps)
{
// Make sure maxUlps is non-negative and small enough that the
// default NAN won't compare as equal to anything.
assert(maxUlps > 0 && maxUlps < 4 * 1024 * 1024);
int aInt = *(int*)&A;
// Make aInt lexicographically ordered as a twos-complement int
if (aInt < 0)
aInt = 0x80000000 - aInt;
// Make bInt lexicographically ordered as a twos-complement int
int bInt = *(int*)&B;
if (bInt < 0)
bInt = 0x80000000 - bInt;
int intDiff = abs(aInt - bInt);
if (intDiff <= maxUlps)
return true;
return false;
}
A: Here's proof that using std::numeric_limits::epsilon() is not the answer — it fails for values greater than one:
Proof of my comment above:
#include <stdio.h>
#include <limits>
double ItoD (__int64 x) {
// Return double from 64-bit hexadecimal representation.
return *(reinterpret_cast<double*>(&x));
}
void test (__int64 ai, __int64 bi) {
double a = ItoD(ai), b = ItoD(bi);
bool close = std::fabs(a-b) < std::numeric_limits<double>::epsilon();
printf ("%.16f and %.16f %s close.\n", a, b, close ? "are " : "are not");
}
int main()
{
test (0x3fe0000000000000L,
0x3fe0000000000001L);
test (0x3ff0000000000000L,
0x3ff0000000000001L);
}
Running yields this output:
0.5000000000000000 and 0.5000000000000001 are close.
1.0000000000000000 and 1.0000000000000002 are not close.
Note that in the second case (one and just larger than one), the two input values are as close as they can possibly be, and still compare as not close. Thus, for values greater than 1.0, you might as well just use an equality test. Fixed epsilons will not save you when comparing floating-point values.
A: Qt implements two functions, maybe you can learn from them:
static inline bool qFuzzyCompare(double p1, double p2)
{
return (qAbs(p1 - p2) <= 0.000000000001 * qMin(qAbs(p1), qAbs(p2)));
}
static inline bool qFuzzyCompare(float p1, float p2)
{
return (qAbs(p1 - p2) <= 0.00001f * qMin(qAbs(p1), qAbs(p2)));
}
And you may need the following functions, since
Note that comparing values where either p1 or p2 is 0.0 will not work,
nor does comparing values where one of the values is NaN or infinity.
If one of the values is always 0.0, use qFuzzyIsNull instead. If one
of the values is likely to be 0.0, one solution is to add 1.0 to both
values.
static inline bool qFuzzyIsNull(double d)
{
return qAbs(d) <= 0.000000000001;
}
static inline bool qFuzzyIsNull(float f)
{
return qAbs(f) <= 0.00001f;
}
A: Unfortunately, even your "wasteful" code is incorrect. EPSILON is the smallest value that could be added to 1.0 and change its value. The value 1.0 is very important — larger numbers do not change when added to EPSILON. Now, you can scale this value to the numbers you are comparing to tell whether they are different or not. The correct expression for comparing two doubles is:
if (fabs(a - b) <= DBL_EPSILON * fmax(fabs(a), fabs(b)))
{
// ...
}
This is at a minimum. In general, though, you would want to account for noise in your calculations and ignore a few of the least significant bits, so a more realistic comparison would look like:
if (fabs(a - b) <= 16 * DBL_EPSILON * fmax(fabs(a), fabs(b)))
{
// ...
}
If comparison performance is very important to you and you know the range of your values, then you should use fixed-point numbers instead.
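Wrapped up as a helper, the expression above might look like the following sketch (the function name and the factor of 16 are illustrative only):
#include <cmath>      // std::fabs
#include <cfloat>     // DBL_EPSILON
#include <algorithm>  // std::max

// Relative comparison: scale the machine epsilon by the larger magnitude,
// with an extra factor to absorb a few bits of accumulated rounding noise.
bool nearlyEqual(double a, double b, double noiseFactor = 16.0)
{
    double scale = std::max(std::fabs(a), std::fabs(b));
    return std::fabs(a - b) <= noiseFactor * DBL_EPSILON * scale;
}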
A: Realizing this is an old thread, but this article is one of the most straightforward ones I have found on comparing floating point numbers. If you want to explore more, it has more detailed references as well, and the main site covers a complete range of issues dealing with floating point numbers: The Floating-Point Guide: Comparison.
We can find a somewhat more practical article in Floating-point tolerances revisited; it notes there is an absolute tolerance test, which boils down to this in C++:
bool absoluteToleranceCompare(double x, double y)
{
return std::fabs(x - y) <= std::numeric_limits<double>::epsilon() ;
}
and relative tolerance test:
bool relativeToleranceCompare(double x, double y)
{
double maxXY = std::max( std::fabs(x) , std::fabs(y) ) ;
return std::fabs(x - y) <= std::numeric_limits<double>::epsilon()*maxXY ;
}
The article notes that the absolute test fails when x and y are large, and the relative test fails when they are small. Assuming the absolute and relative tolerances are the same, a combined test would look like this:
bool combinedToleranceCompare(double x, double y)
{
double maxXYOne = std::max( { 1.0, std::fabs(x) , std::fabs(y) } ) ;
return std::fabs(x - y) <= std::numeric_limits<double>::epsilon()*maxXYOne ;
}
A: General-purpose comparison of floating-point numbers is generally meaningless. How to compare really depends on a problem at hand. In many problems, numbers are sufficiently discretized to allow comparing them within a given tolerance. Unfortunately, there are just as many problems, where such trick doesn't really work. For one example, consider working with a Heaviside (step) function of a number in question (digital stock options come to mind) when your observations are very close to the barrier. Performing tolerance-based comparison wouldn't do much good, as it would effectively shift the issue from the original barrier to two new ones. Again, there is no general-purpose solution for such problems and the particular solution might require going as far as changing the numerical method in order to achieve stability.
A: You have to do this processing for floating point comparison, since floats can't be perfectly compared like integer types. Here are functions for the various comparison operators.
Floating Point Equal to (==)
I also prefer the subtraction technique rather than relying on fabs() or abs(), but I'd have to speed profile it on various architectures from 64-bit PC to ATMega328 microcontroller (Arduino) to really see if it makes much of a performance difference.
So, let's forget about all this absolute value stuff and just do some subtraction and comparison!
Modified from Microsoft's example here:
/// @brief See if two floating point numbers are approximately equal.
/// @param[in] a number 1
/// @param[in] b number 2
/// @param[in] epsilon A small value such that if the difference between the two numbers is
/// smaller than this they can safely be considered to be equal.
/// @return true if the two numbers are approximately equal, and false otherwise
bool is_float_eq(float a, float b, float epsilon) {
return ((a - b) < epsilon) && ((b - a) < epsilon);
}
bool is_double_eq(double a, double b, double epsilon) {
return ((a - b) < epsilon) && ((b - a) < epsilon);
}
Example usage:
constexpr float EPSILON = 0.0001; // 1e-4
is_float_eq(1.0001, 0.99998, EPSILON);
I'm not entirely sure, but it seems to me some of the criticisms of the epsilon-based approach, as described in the comments below this highly-upvoted answer, can be resolved by using a variable epsilon, scaled according to the floating point values being compared, like this:
float a = 1.0001;
float b = 0.99998;
float epsilon = std::max(std::fabs(a), std::fabs(b)) * 1e-4;
is_float_eq(a, b, epsilon);
This way, the epsilon value scales with the floating point values and is therefore never so small of a value that it becomes insignificant.
For completeness, let's add the rest:
Greater than (>), and less than (<):
/// @brief See if floating point number `a` is > `b`
/// @param[in] a number 1
/// @param[in] b number 2
/// @param[in] epsilon a small value such that if `a` is > `b` by this amount, `a` is considered
/// to be definitively > `b`
/// @return true if `a` is definitively > `b`, and false otherwise
bool is_float_gt(float a, float b, float epsilon) {
return a > b + epsilon;
}
bool is_double_gt(double a, double b, double epsilon) {
return a > b + epsilon;
}
/// @brief See if floating point number `a` is < `b`
/// @param[in] a number 1
/// @param[in] b number 2
/// @param[in] epsilon a small value such that if `a` is < `b` by this amount, `a` is considered
/// to be definitively < `b`
/// @return true if `a` is definitively < `b`, and false otherwise
bool is_float_lt(float a, float b, float epsilon) {
return a < b - epsilon;
}
bool is_double_lt(double a, double b, double epsilon) {
return a < b - epsilon;
}
Greater than or equal to (>=), and less than or equal to (<=)
/// @brief Returns true if `a` is definitively >= `b`, and false otherwise
bool is_float_ge(float a, float b, float epsilon) {
return a > b - epsilon;
}
bool is_double_ge(double a, double b, double epsilon) {
return a > b - epsilon;
}
/// @brief Returns true if `a` is definitively <= `b`, and false otherwise
bool is_float_le(float a, float b, float epsilon) {
return a < b + epsilon;
}
bool is_double_le(double a, double b, double epsilon) {
return a < b + epsilon;
}
Additional improvements:
*
*A good default value for epsilon in C++ is std::numeric_limits<T>::epsilon(), which evaluates to either 0 or FLT_EPSILON, DBL_EPSILON, or LDBL_EPSILON. See here: https://en.cppreference.com/w/cpp/types/numeric_limits/epsilon. You can also see the float.h header for FLT_EPSILON, DBL_EPSILON, and LDBL_EPSILON.
*
*See https://en.cppreference.com/w/cpp/header/cfloat and
*https://www.cplusplus.com/reference/cfloat/
*You could template the functions instead, to handle all floating point types: float, double, and long double, with type checks for these types via a static_assert() inside the template.
*Scaling the epsilon value is a good idea to ensure it works for really large and really small a and b values. This article recommends and explains it: http://realtimecollisiondetection.net/blog/?p=89. So, you should scale epsilon by a scaling value equal to max(1.0, abs(a), abs(b)), as that article explains. Otherwise, as a and/or b increase in magnitude, the epsilon would eventually become so small relative to those values that it becomes lost in the floating point error. So, we scale it to become larger in magnitude like they are. However, using 1.0 as the smallest allowed scaling factor for epsilon also ensures that for really small-magnitude a and b values, epsilon itself doesn't get scaled so small that it also becomes lost in the floating point error. So, we limit the minimum scaling factor to 1.0.
*If you want to "encapsulate" the above functions into a class, don't. Instead, wrap them up in a namespace if you like in order to namespace them. Ex: if you put all of the stand-alone functions into a namespace called float_comparison, then you could access the is_eq() function like this, for instance: float_comparison::is_eq(1.0, 1.5);.
*It might also be nice to add comparisons against zero, not just comparisons between two values.
*So, here is a better type of solution with the above improvements in place:
namespace float_comparison {
/// Scale the epsilon value to become large for large-magnitude a or b,
/// but no smaller than 1.0, per the explanation above, to ensure that
/// epsilon doesn't ever fall out in floating point error as a and/or b
/// increase in magnitude.
template<typename T>
static constexpr T scale_epsilon(T a, T b, T epsilon =
std::numeric_limits<T>::epsilon()) noexcept
{
static_assert(std::is_floating_point_v<T>, "Floating point comparisons "
"require type float, double, or long double.");
T scaling_factor;
// Special case for when a or b is infinity
if (std::isinf(a) || std::isinf(b))
{
scaling_factor = 0;
}
else
{
scaling_factor = std::max({(T)1.0, std::abs(a), std::abs(b)});
}
T epsilon_scaled = scaling_factor * std::abs(epsilon);
return epsilon_scaled;
}
// Compare two values
/// Equal: returns true if a is approximately == b, and false otherwise
template<typename T>
static constexpr bool is_eq(T a, T b, T epsilon =
std::numeric_limits<T>::epsilon()) noexcept
{
static_assert(std::is_floating_point_v<T>, "Floating point comparisons "
"require type float, double, or long double.");
// test `a == b` first to see if both a and b are either infinity
// or -infinity
return a == b || std::abs(a - b) <= scale_epsilon(a, b, epsilon);
}
/*
etc. etc.:
is_eq()
is_ne()
is_lt()
is_le()
is_gt()
is_ge()
*/
// Compare against zero
/// Equal: returns true if a is approximately == 0, and false otherwise
template<typename T>
static constexpr bool is_eq_zero(T a, T epsilon =
std::numeric_limits<T>::epsilon()) noexcept
{
static_assert(std::is_floating_point_v<T>, "Floating point comparisons "
"require type float, double, or long double.");
return is_eq(a, (T)0.0, epsilon);
}
/*
etc. etc.:
is_eq_zero()
is_ne_zero()
is_lt_zero()
is_le_zero()
is_gt_zero()
is_ge_zero()
*/
} // namespace float_comparison
See also:
*
*The macro forms of some of the functions above in my repo here: utilities.h.
*
*UPDATE 29 NOV 2020: it's a work-in-progress, and I'm going to make it a separate answer when ready, but I've produced a better, scaled-epsilon version of all of the functions in C in this file here: utilities.c. Take a look.
*ADDITIONAL READING I have now done: Floating-point tolerances revisited, by Christer Ericson. VERY USEFUL ARTICLE! It talks about scaling epsilon in order to ensure it never falls out in floating point error, even for really large-magnitude a and/or b values!
A: I ended up spending quite some time going through material in this great thread. I doubt everyone wants to spend so much time so I would highlight the summary of what I learned and the solution I implemented.
Quick Summary
*
*Is 1e-8 approximately the same as 1e-16? If you are looking at noisy sensor data then probably yes, but if you are doing molecular simulation then maybe not! Bottom line: you always need to think of the tolerance value in the context of the specific function call and not just make it a generic app-wide hard-coded constant.
*For general library functions, it's still nice to have a parameter with a default tolerance. A typical choice is numeric_limits::epsilon(), which is the same as FLT_EPSILON in float.h. This is however problematic because the epsilon for comparing values like 1.0 is not the same as the epsilon for values like 1E9. FLT_EPSILON is defined for 1.0.
*The obvious implementation to check if a number is within tolerance is fabs(a-b) <= epsilon; however, this doesn't work because the default epsilon is defined for 1.0. We need to scale epsilon up or down in terms of a and b.
*There are two solutions to this problem: either you set epsilon proportional to max(a,b), or you get the next representable numbers around a and then see if b falls into that range. The former is called the "relative" method and the latter is called the ULP method.
*Both methods actually fail anyway when comparing with 0. In this case, the application must supply the correct tolerance.
Utility Functions Implementation (C++11)
//implements relative method - do not use for comparing with zero
//use this most of the time, tolerance needs to be meaningful in your context
template<typename TReal>
static bool isApproximatelyEqual(TReal a, TReal b, TReal tolerance = std::numeric_limits<TReal>::epsilon())
{
TReal diff = std::fabs(a - b);
if (diff <= tolerance)
return true;
if (diff < std::fmax(std::fabs(a), std::fabs(b)) * tolerance)
return true;
return false;
}
//supply tolerance that is meaningful in your context
//for example, default tolerance may not work if you are comparing double with float
template<typename TReal>
static bool isApproximatelyZero(TReal a, TReal tolerance = std::numeric_limits<TReal>::epsilon())
{
if (std::fabs(a) <= tolerance)
return true;
return false;
}
//use this when you want to be on safe side
//for example, don't start rover unless signal is above 1
template<typename TReal>
static bool isDefinitelyLessThan(TReal a, TReal b, TReal tolerance = std::numeric_limits<TReal>::epsilon())
{
TReal diff = a - b;
if (diff < tolerance)
return true;
if (diff < std::fmax(std::fabs(a), std::fabs(b)) * tolerance)
return true;
return false;
}
template<typename TReal>
static bool isDefinitelyGreaterThan(TReal a, TReal b, TReal tolerance = std::numeric_limits<TReal>::epsilon())
{
TReal diff = a - b;
if (diff > tolerance)
return true;
if (diff > std::fmax(std::fabs(a), std::fabs(b)) * tolerance)
return true;
return false;
}
//implements ULP method
//use this when you are only concerned about floating point precision issue
//for example, if you want to see if a is 1.0 by checking if its within
//10 closest representable floating point numbers around 1.0.
template<typename TReal>
static bool isWithinPrecisionInterval(TReal a, TReal b, unsigned int interval_size = 1)
{
TReal min_a = a - (a - std::nextafter(a, std::numeric_limits<TReal>::lowest())) * interval_size;
TReal max_a = a + (std::nextafter(a, std::numeric_limits<TReal>::max()) - a) * interval_size;
return min_a <= b && max_a >= b;
}
A: The portable way to get epsilon in C++ is
#include <limits>
std::numeric_limits<double>::epsilon()
Then the comparison function becomes
#include <cmath>
#include <limits>
bool AreSame(double a, double b) {
return std::fabs(a - b) < std::numeric_limits<double>::epsilon();
}
A: The comparison with an epsilon value is what most people do (even in game programming).
You should change your implementation a little though:
bool AreSame(double a, double b)
{
return fabs(a - b) < EPSILON;
}
Edit: Christer has added a stack of great info on this topic on a recent blog post. Enjoy.
A: My class based on previously posted answers. Very similar to Google's code but I use a bias which pushes all NaN values above 0xFF000000. That allows a faster check for NaN.
This code is meant to demonstrate the concept, not be a general solution. Google's code already shows how to compute all the platform specific values and I didn't want to duplicate all that. I've done limited testing on this code.
typedef unsigned int U32;
// Float Memory Bias (unsigned)
// ----- ------ ---------------
// NaN 0xFFFFFFFF 0xFF800001
// NaN 0xFF800001 0xFFFFFFFF
// -Infinity 0xFF800000 0x00000000 ---
// -3.40282e+038 0xFF7FFFFF 0x00000001 |
// -1.40130e-045 0x80000001 0x7F7FFFFF |
// -0.0 0x80000000 0x7F800000 |--- Valid <= 0xFF000000.
// 0.0 0x00000000 0x7F800000 | NaN > 0xFF000000
// 1.40130e-045 0x00000001 0x7F800001 |
// 3.40282e+038 0x7F7FFFFF 0xFEFFFFFF |
// Infinity 0x7F800000 0xFF000000 ---
// NaN 0x7F800001 0xFF000001
// NaN 0x7FFFFFFF 0xFF7FFFFF
//
// Either value of NaN returns false.
// -Infinity and +Infinity are not "close".
// -0 and +0 are equal.
//
class CompareFloat{
public:
union{
float m_f32;
U32 m_u32;
};
static bool CompareFloat::IsClose( float A, float B, U32 unitsDelta = 4 )
{
U32 a = CompareFloat::GetBiased( A );
U32 b = CompareFloat::GetBiased( B );
if ( (a > 0xFF000000) || (b > 0xFF000000) )
{
return( false );
}
return( (static_cast<U32>(abs( a - b ))) < unitsDelta );
}
protected:
static U32 CompareFloat::GetBiased( float f )
{
U32 r = ((CompareFloat*)&f)->m_u32;
if ( r & 0x80000000 )
{
return( ~r - 0x007FFFFF );
}
return( r + 0x7F800000 );
}
};
A: The code you wrote is bugged :
return (diff < EPSILON) && (-diff > EPSILON);
The correct code would be :
return (diff < EPSILON) && (diff > -EPSILON);
(...and yes this is different)
I wonder if fabs wouldn't make you lose lazy evaluation in some case. I would say it depends on the compiler. You might want to try both. If they are equivalent in average, take the implementation with fabs.
If you have some info on which of the two float is more likely to be bigger than then other, you can play on the order of the comparison to take better advantage of the lazy evaluation.
Finally you might get better result by inlining this function. Not likely to improve much though...
Edit: OJ, thanks for correcting your code. I erased my comment accordingly
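For reference, the two variants being discussed look like this side by side (a sketch; on most modern compilers fabs compiles down to a single instruction, so in practice there is usually little difference):
#include <cmath>

// Variant 1: the short-circuit form from the question, without fabs.
bool closeNoFabs(double a, double b, double eps)
{
    double diff = a - b;
    return (diff < eps) && (diff > -eps);
}

// Variant 2: the fabs form.
bool closeWithFabs(double a, double b, double eps)
{
    return std::fabs(a - b) < eps;
}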
A: Comparing floating point numbers for equality depends on the context. Since even changing the order of operations can produce different results, it is important to know how "equal" you want the numbers to be.
Comparing floating point numbers by Bruce Dawson is a good place to start when looking at floating point comparison.
The following definitions are from The art of computer programming by Knuth:
bool approximatelyEqual(float a, float b, float epsilon)
{
return fabs(a - b) <= ( (fabs(a) < fabs(b) ? fabs(b) : fabs(a)) * epsilon);
}
bool essentiallyEqual(float a, float b, float epsilon)
{
return fabs(a - b) <= ( (fabs(a) > fabs(b) ? fabs(b) : fabs(a)) * epsilon);
}
bool definitelyGreaterThan(float a, float b, float epsilon)
{
return (a - b) > ( (fabs(a) < fabs(b) ? fabs(b) : fabs(a)) * epsilon);
}
bool definitelyLessThan(float a, float b, float epsilon)
{
return (b - a) > ( (fabs(a) < fabs(b) ? fabs(b) : fabs(a)) * epsilon);
}
Of course, choosing epsilon depends on the context, and determines how equal you want the numbers to be.
Another method of comparing floating point numbers is to look at the ULP (units in last place) of the numbers. While not dealing specifically with comparisons, the paper What every computer scientist should know about floating point numbers is a good resource for understanding how floating point works and what the pitfalls are, including what ULP is.
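As a rough illustration of the ULP idea (a sketch only, valid for positive, finite doubles; see the AlmostEqual2sComplement code elsewhere in this thread for a full treatment of signs, NaNs and infinities):
#include <cstdint>
#include <cstring>
#include <cstdlib>

// Reinterpret the bit patterns of two doubles as integers and compare
// how many representable values lie between them.
bool withinUlps(double a, double b, int64_t maxUlps)
{
    int64_t ai, bi;
    std::memcpy(&ai, &a, sizeof a);
    std::memcpy(&bi, &b, sizeof b);
    return std::llabs(ai - bi) <= maxUlps;
}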
A:
return fabs(a - b) < EPSILON;
This is fine if:
*
*the order of magnitude of your inputs don't change much
*very small numbers of opposite signs can be treated as equal
But otherwise it'll lead you into trouble. Double precision numbers have a resolution of about 16 decimal places. If the two numbers you are comparing are larger in magnitude than EPSILON*1.0E16, then you might as well be saying:
return a==b;
I'll examine a different approach that assumes you need to worry about the first issue and assume the second is fine your application. A solution would be something like:
#define VERYSMALL (1.0E-150)
#define EPSILON (1.0E-8)
bool AreSame(double a, double b)
{
double absDiff = fabs(a - b);
if (absDiff < VERYSMALL)
{
return true;
}
double maxAbs = max(fabs(a), fabs(b));
return (absDiff/maxAbs) < EPSILON;
}
This is expensive computationally, but it is sometimes what is called for. This is what we have to do at my company because we deal with an engineering library and inputs can vary by a few dozen orders of magnitude.
Anyway, the point is this (and applies to practically every programming problem): Evaluate what your needs are, then come up with a solution to address your needs -- don't assume the easy answer will address your needs. If after your evaluation you find that fabs(a-b) < EPSILON will suffice, perfect -- use it! But be aware of its shortcomings and other possible solutions too.
A: I found that the Google C++ Testing Framework contains a nice cross-platform template-based implementation of AlmostEqual2sComplement which works on both doubles and floats. Given that it is released under the BSD license, using it in your own code should be no problem, as long as you retain the license. I extracted the below code from http://code.google.com/p/googletest/source/browse/trunk/include/gtest/internal/gtest-internal.h (now at https://github.com/google/googletest/blob/master/googletest/include/gtest/internal/gtest-internal.h) and added the license on top.
Be sure to #define GTEST_OS_WINDOWS to some value (or to change the code where it's used to something that fits your codebase - it's BSD licensed after all).
Usage example:
double left = // something
double right = // something
const FloatingPoint<double> lhs(left), rhs(right);
if (lhs.AlmostEquals(rhs)) {
//they're equal!
}
Here's the code:
// Copyright 2005, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
// Authors: [email protected] (Zhanyong Wan), [email protected] (Sean Mcafee)
//
// The Google C++ Testing Framework (Google Test)
// This template class serves as a compile-time function from size to
// type. It maps a size in bytes to a primitive type with that
// size. e.g.
//
// TypeWithSize<4>::UInt
//
// is typedef-ed to be unsigned int (unsigned integer made up of 4
// bytes).
//
// Such functionality should belong to STL, but I cannot find it
// there.
//
// Google Test uses this class in the implementation of floating-point
// comparison.
//
// For now it only handles UInt (unsigned int) as that's all Google Test
// needs. Other types can be easily added in the future if need
// arises.
template <size_t size>
class TypeWithSize {
public:
// This prevents the user from using TypeWithSize<N> with incorrect
// values of N.
typedef void UInt;
};
// The specialization for size 4.
template <>
class TypeWithSize<4> {
public:
// unsigned int has size 4 in both gcc and MSVC.
//
// As base/basictypes.h doesn't compile on Windows, we cannot use
// uint32, uint64, and etc here.
typedef int Int;
typedef unsigned int UInt;
};
// The specialization for size 8.
template <>
class TypeWithSize<8> {
public:
#if GTEST_OS_WINDOWS
typedef __int64 Int;
typedef unsigned __int64 UInt;
#else
typedef long long Int; // NOLINT
typedef unsigned long long UInt; // NOLINT
#endif // GTEST_OS_WINDOWS
};
// This template class represents an IEEE floating-point number
// (either single-precision or double-precision, depending on the
// template parameters).
//
// The purpose of this class is to do more sophisticated number
// comparison. (Due to round-off error, etc, it's very unlikely that
// two floating-points will be equal exactly. Hence a naive
// comparison by the == operation often doesn't work.)
//
// Format of IEEE floating-point:
//
// The most-significant bit being the leftmost, an IEEE
// floating-point looks like
//
// sign_bit exponent_bits fraction_bits
//
// Here, sign_bit is a single bit that designates the sign of the
// number.
//
// For float, there are 8 exponent bits and 23 fraction bits.
//
// For double, there are 11 exponent bits and 52 fraction bits.
//
// More details can be found at
// http://en.wikipedia.org/wiki/IEEE_floating-point_standard.
//
// Template parameter:
//
// RawType: the raw floating-point type (either float or double)
template <typename RawType>
class FloatingPoint {
public:
// Defines the unsigned integer type that has the same size as the
// floating point number.
typedef typename TypeWithSize<sizeof(RawType)>::UInt Bits;
// Constants.
// # of bits in a number.
static const size_t kBitCount = 8*sizeof(RawType);
// # of fraction bits in a number.
static const size_t kFractionBitCount =
std::numeric_limits<RawType>::digits - 1;
// # of exponent bits in a number.
static const size_t kExponentBitCount = kBitCount - 1 - kFractionBitCount;
// The mask for the sign bit.
static const Bits kSignBitMask = static_cast<Bits>(1) << (kBitCount - 1);
// The mask for the fraction bits.
static const Bits kFractionBitMask =
~static_cast<Bits>(0) >> (kExponentBitCount + 1);
// The mask for the exponent bits.
static const Bits kExponentBitMask = ~(kSignBitMask | kFractionBitMask);
// How many ULP's (Units in the Last Place) we want to tolerate when
// comparing two numbers. The larger the value, the more error we
// allow. A 0 value means that two numbers must be exactly the same
// to be considered equal.
//
// The maximum error of a single floating-point operation is 0.5
// units in the last place. On Intel CPU's, all floating-point
// calculations are done with 80-bit precision, while double has 64
// bits. Therefore, 4 should be enough for ordinary use.
//
// See the following article for more details on ULP:
// http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm.
static const size_t kMaxUlps = 4;
// Constructs a FloatingPoint from a raw floating-point number.
//
// On an Intel CPU, passing a non-normalized NAN (Not a Number)
// around may change its bits, although the new value is guaranteed
// to be also a NAN. Therefore, don't expect this constructor to
// preserve the bits in x when x is a NAN.
explicit FloatingPoint(const RawType& x) { u_.value_ = x; }
// Static methods
// Reinterprets a bit pattern as a floating-point number.
//
// This function is needed to test the AlmostEquals() method.
static RawType ReinterpretBits(const Bits bits) {
FloatingPoint fp(0);
fp.u_.bits_ = bits;
return fp.u_.value_;
}
// Returns the floating-point number that represent positive infinity.
static RawType Infinity() {
return ReinterpretBits(kExponentBitMask);
}
// Non-static methods
// Returns the bits that represents this number.
const Bits &bits() const { return u_.bits_; }
// Returns the exponent bits of this number.
Bits exponent_bits() const { return kExponentBitMask & u_.bits_; }
// Returns the fraction bits of this number.
Bits fraction_bits() const { return kFractionBitMask & u_.bits_; }
// Returns the sign bit of this number.
Bits sign_bit() const { return kSignBitMask & u_.bits_; }
// Returns true iff this is NAN (not a number).
bool is_nan() const {
// It's a NAN if the exponent bits are all ones and the fraction
// bits are not entirely zeros.
return (exponent_bits() == kExponentBitMask) && (fraction_bits() != 0);
}
// Returns true iff this number is at most kMaxUlps ULP's away from
// rhs. In particular, this function:
//
// - returns false if either number is (or both are) NAN.
// - treats really large numbers as almost equal to infinity.
// - thinks +0.0 and -0.0 are 0 ULP's apart.
bool AlmostEquals(const FloatingPoint& rhs) const {
// The IEEE standard says that any comparison operation involving
// a NAN must return false.
if (is_nan() || rhs.is_nan()) return false;
return DistanceBetweenSignAndMagnitudeNumbers(u_.bits_, rhs.u_.bits_)
<= kMaxUlps;
}
private:
// The data type used to store the actual floating-point number.
union FloatingPointUnion {
RawType value_; // The raw floating-point number.
Bits bits_; // The bits that represent the number.
};
// Converts an integer from the sign-and-magnitude representation to
// the biased representation. More precisely, let N be 2 to the
// power of (kBitCount - 1), an integer x is represented by the
// unsigned number x + N.
//
// For instance,
//
// -N + 1 (the most negative number representable using
// sign-and-magnitude) is represented by 1;
// 0 is represented by N; and
// N - 1 (the biggest number representable using
// sign-and-magnitude) is represented by 2N - 1.
//
// Read http://en.wikipedia.org/wiki/Signed_number_representations
// for more details on signed number representations.
static Bits SignAndMagnitudeToBiased(const Bits &sam) {
if (kSignBitMask & sam) {
// sam represents a negative number.
return ~sam + 1;
} else {
// sam represents a positive number.
return kSignBitMask | sam;
}
}
// Given two numbers in the sign-and-magnitude representation,
// returns the distance between them as an unsigned number.
static Bits DistanceBetweenSignAndMagnitudeNumbers(const Bits &sam1,
const Bits &sam2) {
const Bits biased1 = SignAndMagnitudeToBiased(sam1);
const Bits biased2 = SignAndMagnitudeToBiased(sam2);
return (biased1 >= biased2) ? (biased1 - biased2) : (biased2 - biased1);
}
FloatingPointUnion u_;
};
EDIT: This post is 4 years old. It's probably still valid, and the code is nice, but some people found improvements. Best go get the latest version of AlmostEquals right from the Google Test source code, and not the one I pasted up here.
A: As others have pointed out, using a fixed-exponent epsilon (such as 0.0000001) will be useless for values away from the epsilon value. For example, if your two values are 10000.000977 and 10000, then there are NO 32-bit floating-point values between these two numbers -- 10000 and 10000.000977 are as close as you can possibly get without being bit-for-bit identical. Here, an epsilon of less than 0.0009 is meaningless; you might as well use the straight equality operator.
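As a quick illustration of that claim, here is a minimal sketch (assuming IEEE 754 single-precision floats):
#include <cmath>
#include <cstdio>

int main()
{
    // The next representable 32-bit float above 10000.0f -- nothing lies in between.
    float next = std::nextafterf(10000.0f, INFINITY);
    std::printf("%.6f\n", next);   // prints 10000.000977
}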
Likewise, as the two values approach epsilon in size, the relative error grows to 100%.
Thus, trying to mix a fixed point number such as 0.00001 with floating-point values (where the exponent is arbitrary) is a pointless exercise. This will only ever work if you can be assured that the operand values lie within a narrow domain (that is, close to some specific exponent), and if you properly select an epsilon value for that specific test. If you pull a number out of the air ("Hey! 0.00001 is small, so that must be good!"), you're doomed to numerical errors. I've spent plenty of time debugging bad numerical code where some poor schmuck tosses in random epsilon values to make yet another test case work.
If you do numerical programming of any kind and believe you need to reach for fixed-point epsilons, READ BRUCE'S ARTICLE ON COMPARING FLOATING-POINT NUMBERS.
Comparing Floating Point Numbers
A: I'd be very wary of any of these answers that involves floating point subtraction (e.g., fabs(a-b) < epsilon). First, the floating point numbers become more sparse at greater magnitudes and at high enough magnitudes where the spacing is greater than epsilon, you might as well just be doing a == b. Second, subtracting two very close floating point numbers (as these will tend to be, given that you're looking for near equality) is exactly how you get catastrophic cancellation.
While not portable, I think grom's answer does the best job of avoiding these issues.
A: There are actually cases in numerical software where you want to check whether two floating point numbers are exactly equal. I posted this on a similar question
https://stackoverflow.com/a/10973098/1447411
So you can not say that "CompareDoubles1" is wrong in general.
A: In terms of the scale of quantities:
If epsilon is a small fraction of the magnitude of the quantity (i.e. a relative value) in some physical sense, and the A and B types are comparable in the same sense, then I think the following is quite correct:
#include <limits>
#include <iomanip>
#include <iostream>
#include <cmath>
#include <cstdlib>
#include <cassert>
template< typename A, typename B >
inline
bool close_enough(A const & a, B const & b,
typename std::common_type< A, B >::type const & epsilon)
{
using std::isless;
assert(isless(0, epsilon)); // epsilon is a part of the whole quantity
assert(isless(epsilon, 1));
using std::abs;
auto const delta = abs(a - b);
auto const x = abs(a);
auto const y = abs(b);
// comparable generally and |a - b| < eps * (|a| + |b|) / 2
return isless(epsilon * y, x) && isless(epsilon * x, y) && isless((delta + delta) / (x + y), epsilon);
}
int main()
{
std::cout << std::boolalpha << close_enough(0.9, 1.0, 0.1) << std::endl;
std::cout << std::boolalpha << close_enough(1.0, 1.1, 0.1) << std::endl;
std::cout << std::boolalpha << close_enough(1.1, 1.2, 0.01) << std::endl;
std::cout << std::boolalpha << close_enough(1.0001, 1.0002, 0.01) << std::endl;
std::cout << std::boolalpha << close_enough(1.0, 0.01, 0.1) << std::endl;
return EXIT_SUCCESS;
}
A: In a more generic way:
template <typename T>
bool compareNumber(const T& a, const T& b) {
return std::abs(a - b) < std::numeric_limits<T>::epsilon();
}
Note:
As pointed out by @SirGuy, this approach is flawed.
I am leaving this answer here as an example not to follow.
A: I use this code:
bool AlmostEqual(double v1, double v2)
{
return (std::fabs(v1 - v2) < std::fabs(std::min(v1, v2)) * std::numeric_limits<double>::epsilon());
}
A: Found another interesting implementation on: https://en.cppreference.com/w/cpp/types/numeric_limits/epsilon
#include <cmath>
#include <limits>
#include <iomanip>
#include <iostream>
#include <type_traits>
#include <algorithm>
template<class T>
typename std::enable_if<!std::numeric_limits<T>::is_integer, bool>::type
almost_equal(T x, T y, int ulp)
{
// the machine epsilon has to be scaled to the magnitude of the values used
// and multiplied by the desired precision in ULPs (units in the last place)
return std::fabs(x-y) <= std::numeric_limits<T>::epsilon() * std::fabs(x+y) * ulp
// unless the result is subnormal
|| std::fabs(x-y) < std::numeric_limits<T>::min();
}
int main()
{
double d1 = 0.2;
double d2 = 1 / std::sqrt(5) / std::sqrt(5);
std::cout << std::fixed << std::setprecision(20)
<< "d1=" << d1 << "\nd2=" << d2 << '\n';
if(d1 == d2)
std::cout << "d1 == d2\n";
else
std::cout << "d1 != d2\n";
if(almost_equal(d1, d2, 2))
std::cout << "d1 almost equals d2\n";
else
std::cout << "d1 does not almost equal d2\n";
}
A: I use this code. Unlike the above answers, this allows one to give an abs_relative_error that is explained in the comments of the code.
The first version compares complex numbers, so that the error can be explained in terms of the angle between two "vectors" of the same length in the complex plane (which gives a little insight). From there the correct formula for two real numbers follows.
https://github.com/CarloWood/ai-utils/blob/master/almost_equal.h
The latter then is
template<class T>
typename std::enable_if<std::is_floating_point<T>::value, bool>::type
almost_equal(T x, T y, T const abs_relative_error)
{
return 2 * std::abs(x - y) <= abs_relative_error * std::abs(x + y);
}
where abs_relative_error is basically (twice) the absolute value of what comes closest to what the literature defines as a relative error. But that is just the choice of name.
What it really means is seen most clearly in the complex plane, I think. If |x| = 1, and y lies in a circle around x with diameter abs_relative_error, then the two are considered equal.
A: I use the following function for floating-point numbers comparison:
bool approximatelyEqual(double a, double b)
{
return fabs(a - b) <= ((fabs(a) < fabs(b) ? fabs(b) : fabs(a)) * std::numeric_limits<double>::epsilon());
}
A: Why not perform a bitwise XOR? Two floating point numbers are equal if their corresponding bits are equal. I think the decision to place the exponent bits before the mantissa was made to speed up comparison of two floats.
I think many answers here are missing the point of epsilon comparison. The epsilon value depends only on the precision to which the floating point numbers are compared. For example, after doing some arithmetic with floats you get two numbers: 2.5642943554342 and 2.5642943554345. They are not equal, but for the solution only 3 decimal digits matter, so then they are equal: 2.564 and 2.564. In this case you choose an epsilon equal to 0.001. Epsilon comparison is also possible with a bitwise XOR. Correct me if I am wrong.
A: This is another solution with lambda:
#include <cmath>
#include <limits>
auto Compare = [](float a, float b, float epsilon = std::numeric_limits<float>::epsilon()){ return (std::fabs(a - b) <= epsilon); };
A: How about this?
template<typename T>
bool FloatingPointEqual( T a, T b ) { return !(a < b) && !(b < a); }
I've seen various approaches - but never seen this, so I'm curious to hear of any comments too!
A: In this version you check that the numbers differ from one another by no more than some fraction (say, 0.0001%):
bool floatApproximatelyEquals(const float a, const float b) {
if (b == 0.) return a == 0.; // preventing division by zero
return abs(1. - a / b) < 1e-6;
}
Please note Sneftel's comment about possible fraction limits for float.
Also note that it differs from the approach with absolute epsilons - here you don't have to worry about the order of magnitude: the numbers might be, say, 1e100 or 1e-100, and they will always be compared consistently, and you don't have to update the epsilon for every case.
A: My way may not be correct, but it may be useful:
Convert both floats to strings and then do a string compare.
bool IsFlaotEqual(float a, float b, int decimal)
{
TCHAR form[50] = _T("");
_stprintf(form, _T("%%.%df"), decimal);
TCHAR a1[30] = _T(""), a2[30] = _T("");
_stprintf(a1, form, a);
_stprintf(a2, form, b);
if( _tcscmp(a1, a2) == 0 )
return true;
return false;
}
Operator overloading can also be done.
A: I wrote this for Java, but maybe you'll find it useful. It uses longs instead of doubles, but takes care of NaNs, subnormals, etc.
public static boolean equal(double a, double b) {
final long fm = 0xFFFFFFFFFFFFFL; // fraction mask
final long sm = 0x8000000000000000L; // sign mask
final long cm = 0x8000000000000L; // most significant decimal bit mask
long c = Double.doubleToLongBits(a), d = Double.doubleToLongBits(b);
int ea = (int) (c >> 52 & 2047), eb = (int) (d >> 52 & 2047);
if (ea == 2047 && (c & fm) != 0 || eb == 2047 && (d & fm) != 0) return false; // NaN
if (c == d) return true; // identical - fast check
if (ea == 0 && eb == 0) return true; // ±0 or subnormals
if ((c & sm) != (d & sm)) return false; // different signs
if (abs(ea - eb) > 1) return false; // b > 2*a or a > 2*b
d <<= 12; c <<= 12;
if (ea < eb) c = c >> 1 | sm;
else if (ea > eb) d = d >> 1 | sm;
c -= d;
return c < 65536 && c > -65536; // don't use abs(), because:
// There is a possibility c=0x8000000000000000 which cannot be converted to positive
}
public static boolean zero(double a) { return (Double.doubleToLongBits(a) >> 52 & 2047) < 3; }
Keep in mind that after a number of floating-point operations, the number can be very different from what we expect. There is no code to fix that.
A: It depends on how precise you want the comparison to be. If you want to compare for exactly the same number, then just go with ==. (You almost never want to do this unless you actually want exactly the same number.) On any decent platform you can also do the following:
diff = a - b; return fabs(diff) < EPSILON;
as fabs tends to be pretty fast. By pretty fast I mean it is basically a bitwise AND, so it better be fast.
And integer tricks for comparing doubles and floats are nice but tend to make it more difficult for the various CPU pipelines to handle effectively. And it's definitely not faster on certain in-order architectures these days due to using the stack as a temporary storage area for values that are being used frequently. (Load-hit-store for those who care.)
A: /// testing whether two doubles are almost equal. We consider two doubles
/// equal if the difference is within the range [0, epsilon).
///
/// epsilon: a positive number (supposed to be small)
///
/// if either x or y is 0, then we are comparing the absolute difference to
/// epsilon.
/// if both x and y are non-zero, then we are comparing the relative difference
/// to epsilon.
bool almost_equal(double x, double y, double epsilon)
{
double diff = x - y;
if (x != 0 && y != 0){
diff = diff/y;
}
if (diff < epsilon && -1.0*diff < epsilon){
return true;
}
return false;
}
I used this function for my small project and it works, but note the following:
Double precision error can create a surprise for you. Let's say epsilon = 1.0e-6; then 1.0 and 1.000001 should NOT be considered equal according to the above code, but on my machine the function considers them to be equal. This is because 1.000001 cannot be represented precisely in binary format; it is probably stored as 1.0000009xxx. I tested it with 1.0 and 1.0000011, and this time I got the expected result.
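One quick way to see what is actually stored is to print the literals with extra precision (a small sketch, independent of the function above):
#include <iostream>
#include <iomanip>

int main()
{
    // 1.000001 is not exactly representable as a binary double; the printed value
    // is what the comparison above actually sees.
    std::cout << std::setprecision(17)
              << 1.000001  << '\n'
              << 1.0000011 << '\n';
}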
A: You cannot compare two doubles with a fixed EPSILON. Depending on the value of the doubles, the appropriate EPSILON varies.
A better double comparison would be:
bool same(double a, double b)
{
return std::nextafter(a, std::numeric_limits<double>::lowest()) <= b
&& std::nextafter(a, std::numeric_limits<double>::max()) >= b;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17333",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "640"
} |
Q: What are OpenGL extensions, and what are the benefits/tradeoffs of using them? In relation to this question on Using OpenGL extensions, what's the purpose of these extension functions? Why would I want to use them? Further, are there any tradeoffs or gotchas associated with using them?
A: Extensions are, in general, a way for graphics card vendors to add new functionality to OpenGL without having to wait until the next revision of the OpenGL spec. There are different types of extensions:
* Vendor extension - only one vendor provides a certain type of functionality. Example: NV_vertex_program
* Multivendor extension - multiple vendors have gotten together and agreed on the functionality. Example: EXT_vertex_program
* ARB extension - the OpenGL Architecture Review Board has blessed the extension. You have a reasonable expectation that this type of extension will be around for a while. Example: ARB_vertex_program
Extensions don't have to go through all of these steps. Sometimes an extension is only ever implemented by one vendor, before hardware designs go a different way and the extension is abandoned. Other times, an extension might make it as far as ARB status before everyone decides there's a better way. (The ARB_vertex_program approach, for instance, was set aside in favor of the high-level shading language approach of ARB_vertex_shader when it came time to roll shaders into the core OpenGL spec.) Even ARB extensions don't last forever; I wouldn't write something today requiring ARB_matrix_palette, for instance.
All of that having been said, it's a very good idea to keep up to date on extensions, in particular the latest ARB and EXT extensions. In the past it has been true that some of the 'fast paths' through the hardware were only accessible via extensions. Likewise, if you want to know what all functionality a piece of hardware can do, there's no better place to look than in a vendor-specific extension.
If you're just getting started in OpenGL, I'd recommend investigating:
* ARB_vertex_buffer_object (vertices)
* ARB_vertex_shader / ARB_fragment_shader / ARB_shader_objects / GLSL spec (shaders)
More advanced:
* ARB/EXT_framebuffer_object (off-screen rendering)
This is all functionality that's been rolled into core, but it can be good to see it in isolation so you can get a better feel for where its boundaries lie. (The core OpenGL spec seamlessly mixes the old with the new, so this can be pretty important if you want to stay on the fast path, and avoid the legacy and sometimes implemented in software paths.)
Whatever you do, make sure you have appropriate checks for the extensions you decide to use, and fallbacks where necessary. Even though your card may have a given extension, there's no guarantee that the extension will be present on another vendor's card, or even on another operating system with the same card.
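As a sketch of such a check (assuming a current legacy/compatibility context where the GL_EXTENSIONS string is still queryable; core profiles use glGetStringi instead):
#include <GL/gl.h>    // on Windows, include <windows.h> before this
#include <cstring>

// Returns true if `name` appears as a whole token in the space-separated
// extension string, e.g. HasExtension("GL_ARB_vertex_buffer_object").
bool HasExtension(const char* name)
{
    const char* ext = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    if (ext == nullptr)
        return false;
    const std::size_t len = std::strlen(name);
    for (const char* p = ext; (p = std::strstr(p, name)) != nullptr; p += len)
    {
        const bool starts_token = (p == ext) || (p[-1] == ' ');
        const bool ends_token = (p[len] == ' ') || (p[len] == '\0');
        if (starts_token && ends_token)
            return true;
    }
    return false;
}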
A: OpenGL Extensions are new features added to the OpenGL specification, they are added by the OpenGL standards body and by the various graphics card vendors. These are exposed to the programmer as new function calls or variables. Every new version of the OpenGL specification ships with newer functionality and (typically) includes all the previous functionality and extensions.
The real problem with OpenGL extensions exists only on Windows. Microsoft hasn't supported any extensions that have been released after OpenGL v1.1. The graphics card vendors overcome this by shipping their own version of this functionality through header files and libraries. However, using this can be a bit painful as the question you linked to shows. But this problem has mostly gone away with the popularity of GLEW, which takes care of wrapping all this into an easy-to-use package.
If you do use a very recent OpenGL extension, be aware that it may not be supported on older graphics hardware. Other than this, there's no other disadvantage to using these extensions. Most of the extensions which become standard are pretty darn useful and there's very little logic to not use them.
A: The OpenGL standard allows individual vendors to provide additional functionality through extensions as new technology is created. Extensions may introduce new functions and new constants, and may relax or remove restrictions on existing OpenGL functions.
Each vendor has an alphabetic abbreviation that is used in naming their new functions and constants. For example, NVIDIA's abbreviation (NV) is used in defining their proprietary function glCombinerParameterfvNV() and their constant GL_NORMAL_MAP_NV.
It may happen that more than one vendor agrees to implement the same extended functionality. In that case, the abbreviation EXT is used. It may further happen that the Architecture Review Board "blesses" the extension. It then becomes known as a standard extension, and the abbreviation ARB is used. The first ARB extension was GL_ARB_multitexture, introduced in version 1.2.1. Following the official extension promotion path, multitexturing is no longer an optionally implemented ARB extension, but has been a part of the OpenGL core API since version 1.3.
Before using an extension a program must first determine its availability, and then obtain pointers to any new functions the extension defines. The mechanism for doing this is platform-specific and libraries such as GLEW and GLEE exist to simplify the process.
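On Windows, for example, the raw mechanism looks roughly like this (a sketch using glActiveTextureARB from ARB_multitexture; the function-pointer typedef is written out by hand here rather than taken from glext.h, and the code assumes a rendering context is already current):
#include <windows.h>   // wglGetProcAddress; link against opengl32.lib
#include <GL/gl.h>

// Hand-written typedef for the entry point added by GL_ARB_multitexture.
typedef void (APIENTRY *ActiveTextureARBFn)(GLenum texture);

ActiveTextureARBFn glActiveTextureARB = nullptr;

// Returns false if the driver does not expose the function.
bool LoadMultitexture()
{
    glActiveTextureARB = reinterpret_cast<ActiveTextureARBFn>(
        wglGetProcAddress("glActiveTextureARB"));
    return glActiveTextureARB != nullptr;
}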
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17352",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: How do I send a file as an email attachment using Linux command line? I've created a script that runs every night on my Linux server that uses mysqldump to back up each of my MySQL databases to .sql files and packages them together as a compressed .tar file. The next step I want to accomplish is to send that tar file through email to a remote email server for safekeeping. I've been able to send the raw script in the body of an email by piping the backup text file to mailx like so:
$ cat mysqldbbackup.sql | mailx [email protected]
cat echoes the backup file's text which is piped into the mailx program with the recipient's email address passed as an argument.
While this accomplishes what I need, I think it could be one step better. Is there any way, using shell scripts or otherwise, to send the compressed .tar file to an outgoing email message as an attachment? This would beat having to deal with very long email messages which contain header data and often have word-wrapping issues, etc.
A: Or, failing mutt:
gzip -c mysqldbbackup.sql | uuencode mysqldbbackup.sql.gz | mail -s "MySQL DB" [email protected]
A: Another alternative - Swaks (Swiss Army Knife for SMTP).
swaks -tls \
--to ${MAIL_TO} \
--from ${MAIL_FROM} \
--server ${MAIL_SERVER} \
--auth LOGIN \
--auth-user ${MAIL_USER} \
--auth-password ${MAIL_PASSWORD} \
--header "Subject: $MAIL_SUBJECT" \
--header "Content-Type: text/html; charset=UTF-8" \
--body "$MESSAGE" \
--attach mysqldbbackup.sql
A: Depending on your version of Linux it may be called mail. To quote @David above:
mail -s "Backup" -a mysqldbbackup.sql [email protected] < message.txt
or also:
cat message.txt | mail -s "Backup" -a mysqldbbackup.sql [email protected]
A: From looking at man mailx, the mailx program does not have an option for attaching a file. You could use another program such as mutt.
echo "This is the message body" | mutt -a file.to.attach -s "subject of message" [email protected]
Command line options for mutt can be shown with mutt -h.
A: I use SendEmail, which was created for this scenario. It's packaged for Ubuntu, so I assume it's available:
sendemail -f [email protected] -t [email protected] -m "Here are your files!" -a file1.jpg file2.zip
http://caspian.dotconf.net/menu/Software/SendEmail/
A: None of the mutt ones worked for me. It was thinking the email address was part of the attachment. Had to do:
echo "This is the message body" | mutt -a "/path/to/file.to.attach" -s "subject of message" -- [email protected]
A: Mailutils makes this a piece of cake
echo "Body" | mail.mailutils -M -s "My Subject" -A attachment.pdf [email protected]
* -A file attaches a file
* -M enables MIME, so that you can have an attachment and plaintext body.
If not yet installed, run
sudo apt install mailutils
A: I use mpack.
mpack -s subject file [email protected]
Unfortunately mpack does not recognize '-' as an alias for stdin. But the following works, and can easily be wrapped in a (shell) alias or a script:
mpack -s subject /dev/stdin [email protected] < file
A: echo -e 'Hi, \n These are contents of my mail. \n Thanks' | mailx -s 'This is my email subject' -a /path/to/attachment_file.log -b [email protected] -c [email protected] -r [email protected] [email protected] [email protected] [email protected]
A: metamail has the tool metasend
metasend -f mysqlbackup.sql.gz -t [email protected] -s Backup -m application/x-gzip -b
A: I once wrote this function for ksh on Solaris (uses Perl for base64 encoding):
# usage: email_attachment to cc subject body attachment_filename
email_attachment() {
to="$1"
cc="$2"
subject="$3"
body="$4"
filename="${5:-''}"
boundary="_====_blah_====_$(date +%Y%m%d%H%M%S)_====_"
{
print -- "To: $to"
print -- "Cc: $cc"
print -- "Subject: $subject"
print -- "Content-Type: multipart/mixed; boundary=\"$boundary\""
print -- "Mime-Version: 1.0"
print -- ""
print -- "This is a multi-part message in MIME format."
print -- ""
print -- "--$boundary"
print -- "Content-Type: text/plain; charset=ISO-8859-1"
print -- ""
print -- "$body"
print -- ""
if [[ -n "$filename" && -f "$filename" && -r "$filename" ]]; then
print -- "--$boundary"
print -- "Content-Transfer-Encoding: base64"
print -- "Content-Type: application/octet-stream; name=$filename"
print -- "Content-Disposition: attachment; filename=$filename"
print -- ""
print -- "$(perl -MMIME::Base64 -e 'open F, shift; @lines=<F>; close F; print MIME::Base64::encode(join(q{}, @lines))' $filename)"
print -- ""
fi
print -- "--${boundary}--"
} | /usr/lib/sendmail -oi -t
}
A: You can use mutt to send the email with attachment
mutt -s "Backup" -a mysqldbbackup.sql [email protected] < message.txt
A: Send a Plaintext body email with one plaintext attachment with mailx:
(
/usr/bin/uuencode attachfile.txt myattachedfilename.txt;
/usr/bin/echo "Body of text"
) | mailx -s 'Subject' [email protected]
Below is the same command as above, without the newlines
( /usr/bin/uuencode /home/el/attachfile.txt myattachedfilename.txt; /usr/bin/echo "Body of text" ) | mailx -s 'Subject' [email protected]
Make sure you have a file /home/el/attachfile.txt defined with this contents:
<html><body>
Government discriminates against programmers with cruel/unusual 35 year prison
sentences for making the world's information free, while bankers that pilfer
trillions in citizens assets through systematic inflation get the nod and
walk free among us.
</body></html>
If you don't have uuencode read this: https://unix.stackexchange.com/questions/16277/how-do-i-get-uuencode-to-work
On Linux, Send HTML body email with a PDF attachment with sendmail:
Make sure you have ksh installed: yum info ksh
Make sure you have sendmail installed and configured.
Make sure you have uuencode installed and available: https://unix.stackexchange.com/questions/16277/how-do-i-get-uuencode-to-work
Make a new file called test.sh and put it in your home directory: /home/el
Put the following code in test.sh:
#!/usr/bin/ksh
export MAILFROM="[email protected]"
export MAILTO="[email protected]"
export SUBJECT="Test PDF for Email"
export BODY="/home/el/email_body.htm"
export ATTACH="/home/el/pdf-test.pdf"
export MAILPART=`uuidgen` ## Generates Unique ID
export MAILPART_BODY=`uuidgen` ## Generates Unique ID
(
echo "From: $MAILFROM"
echo "To: $MAILTO"
echo "Subject: $SUBJECT"
echo "MIME-Version: 1.0"
echo "Content-Type: multipart/mixed; boundary=\"$MAILPART\""
echo ""
echo "--$MAILPART"
echo "Content-Type: multipart/alternative; boundary=\"$MAILPART_BODY\""
echo ""
echo "--$MAILPART_BODY"
echo "Content-Type: text/plain; charset=ISO-8859-1"
echo "You need to enable HTML option for email"
echo "--$MAILPART_BODY"
echo "Content-Type: text/html; charset=ISO-8859-1"
echo "Content-Disposition: inline"
cat $BODY
echo "--$MAILPART_BODY--"
echo "--$MAILPART"
echo 'Content-Type: application/pdf; name="'$(basename $ATTACH)'"'
echo "Content-Transfer-Encoding: uuencode"
echo 'Content-Disposition: attachment; filename="'$(basename $ATTACH)'"'
echo ""
uuencode $ATTACH $(basename $ATTACH)
echo "--$MAILPART--"
) | /usr/sbin/sendmail $MAILTO
Change the export variables on the top of test.sh to reflect your address and filenames.
Download a test pdf document and put it in /home/el called pdf-test.pdf
Make a file called /home/el/email_body.htm and put this line in it:
<html><body><b>this is some bold text</b></body></html>
Make sure the pdf file has sufficient 755 permissions.
Run the script ./test.sh
Check your email inbox; the text should be in HTML format and the pdf file automatically interpreted as a binary file. Take care not to use this function more than, say, 15 times in a day, even if you send the emails to yourself: spam filters in Gmail can blacklist a domain spewing emails without giving you an option to let them through. You'll find this no longer works, or it only lets through the attachment, or the email doesn't come through at all. If you have to do a lot of testing on this, spread it out over days or you'll be labelled a spammer and this function won't work any more.
A: There are several answers here suggesting mail or mailx so this is more of a background to help you interpret these in context. But there are some practical suggestions near the end.
Historical Notes
The origins of Unix mail go back into the mists of the early history of Bell Labs Unix™ (1969?), and we probably cannot hope to go into its full genealogy here. Suffice it to say that there are many programs which inherit code from or reimplement (or inherit code from a reimplementation of) mail and that there is no single code base which can be unambiguously identified as "the" mail.
However, one of the contenders to that position is certainly "Berkeley Mail" which was originally called Mail with an uppercase M in 2BSD (1978); but in 3BSD (1979), it replaced the lowercase mail command as well, leading to some new confusion. SVR3 (1986) included a derivative which was called mailx. The x was presumably added to make it unique and distinct; but this, too, has now been copied, reimplemented, and mutilated so that there is no single individual version which is definitive.
Back in the day, the de facto standard for sending binaries across electronic mail was uuencode. It still exists, but has numerous usability problems; if at all possible, you should send MIME attachments instead, unless you specifically strive to be able to communicate with the late 1980s.
MIME was introduced in the early 1990s to solve several problems with email, including support for various types of content other than plain text in a single character set which only really is suitable for a subset of English (and, we are told, Hawai'ian). This introduced support for multipart messages, internationalization, rich content types, etc, and quickly gained traction throughout the 1990s.
(The Heirloom mail/mailx history notes were most helpful when composing this, and are certainly worth a read if you're into that sort of thing.)
Current Offerings
As of 2018, Debian has three packages which include a mail or mailx command. (You can search for Provides: mailx.)
debian$ aptitude search ~Pmailx
i bsd-mailx - simple mail user agent
p heirloom-mailx - feature-rich BSD mail(1)
p mailutils - GNU mailutils utilities for handling mail
(I'm not singling out Debian as a recommendation; it's what I use, so I am familiar with it; and it provides a means of distinguishing the various alternatives unambiguously by referring to their respective package names. It is obviously also the distro from which Ubuntu gets these packages.)
* bsd-mailx is a relatively simple mailx which does not appear to support sending MIME attachments. See its manual page and note that this is the one you would expect to find on a *BSD system, including MacOS, by default.
* heirloom-mailx is now being called s-nail and does support sending MIME attachments with -a. See its manual page and more generally the Heirloom project.
* mailutils aka GNU Mailutils includes a mail/mailx compatibility wrapper which does support sending MIME attachments with -A.
With these concerns, if you need your code to be portable and can depend on a somewhat complex package, the simple way to portably send MIME attachments is to use mutt.
If you know what you are doing, you can assemble an arbitrary MIME structure with the help of echo and base64 and e.g. qprint (or homegrown replacements; both base64 and qprint can easily be implemented as Perl one-liners) and pipe it to sendmail; but as several other answers on this page vividly illustrate, you probably don't.
( printf '%s\n' \
"From: myself <[email protected]>" \
"To: backup address <[email protected]>" \
"Subject: Backup of $(date)" \
"MIME-Version: 1.0" \
"Content-type: application/octet-stream; filename=\"mysqldbbackup.sql\"" \
"Content-transfer-encoding: base64" \
""
base64 < mysqldbbackup.sql ) |
sendmail -oi -t
This assumes that sendmail is on your PATH; sometimes it's not (and of course, sometimes it's simply not installed at all). Look in /usr/lib, /usr/sbin, /usr/libexec or etc; or query your package manager. Once you find it, you might want to augment your PATH in the script, or hardcode the full pathname to sendmail (and ditto for any other nonstandard binaries which may or may not be installed on your system).
This still does not attempt to provide any solution for situations where you need to send non-ASCII Unicode text or lines longer than what SMTP allows, etc etc etc. For a robust solution, I would turn to an existing tool like mutt, or a modern scripting language like Python; https://docs.python.org/3/library/email.examples.html has examples for many common use cases.
A: I usually only use the mail command on RHEL. I have tried mailx and it is pretty efficient.
mailx -s "Sending Files" -a First_LocalConfig.conf -a
Second_LocalConfig.conf [email protected]
This is the content of my msg.
.
A: I used
echo "Start of Body" && uuencode log.cfg readme.txt | mail -s "subject" "[email protected]"
and this worked well for me....
A: From source machine
mysqldump --defaults-extra-file=sql.cnf database | gzip | base64 | mail [email protected]
On the destination machine, save the received mail body as db.sql.gz.b64; then:
base64 -D -i db.sql.gz.b64 | gzip -d | mysql --defaults-extra-file=sql.cnf
A: using mailx command
echo "Message Body Here" | mailx -s "Subject Here" -a file_name [email protected]
using sendmail
#!/bin/ksh
fileToAttach=data.txt
`(echo "To: [email protected]"
echo "Cc: [email protected]"
echo "From: Application"
echo "Subject: your subject"
echo your body
uuencode $fileToAttach $fileToAttach
)| eval /usr/sbin/sendmail -t `;
A: Just to add my 2 cents, I'd write my own PHP Script:
http://php.net/manual/en/function.mail.php
There are lots of ways to do the attachment in the examples on that page.
A: mailx does have a -a option now for attachments.
A: Not a method for sending email, but you can use an online Git server (e.g. Bitbucket or a similar service) for that.
This way, you can use git push commands, and all versions will be stored in a compressed and organized way.
A: The shortest way for me is:
file=filename_or_filepath;uuencode $file $file|mail -s "optional subject" email_address
So for your example it'll be:
file=your_sql.log; gzip -c $file > ${file}.gz; uuencode ${file}.gz ${file}.gz | mail -s "file with magnets" [email protected]
The good part is that I can recall it with Ctrl+r to send another file...
A: This is how I am doing with one large log file in CentOS:
#!/bin/sh
MAIL_CMD="$(which mail)"
WHOAMI="$(whoami)"
HOSTNAME="$(hostname)"
EMAIL="[email protected]"
LOGDIR="/var/log/aide"
LOGNAME="$(basename "$0")_$(date "+%Y%m%d_%H%M")"
if cd ${LOGDIR}; then
/bin/tar -zcvf "${LOGDIR}/${LOGNAME}".tgz "${LOGDIR}/${LOGNAME}.log" > /dev/null 2>&1
if [ -n "${MAIL_CMD}" ]; then
# This works too. The message content will be taken from text file below
# echo 'Hello!' >/root/scripts/audit_check.sh.txt
# echo "Attachment" | ${MAIL_CMD} -s "${HOSTNAME} Aide report" -q /root/scripts/audit_check.sh.txt -a ${LOGNAME}.tgz -S from=${WHOAMI}@${HOSTNAME} ${EMAIL}
echo "Attachment" | ${MAIL_CMD} -s "${HOSTNAME} Aide report" -a "${LOGNAME}.tgz" -S from="${WHOAMI}@${HOSTNAME}" "${EMAIL}"
/bin/rm "${LOGDIR}/${LOGNAME}.log"
fi
fi
A: If the file is text, the easiest way is to send it in the body, as in:
sendmail [email protected] < message.txt
A: If mutt is not working or not installed, try this:
#!/bin/sh
FilePath=$1
FileName=$2
Message=$3
MailList=$4
cd $FilePath
Rec_count=$(wc -l < $FileName)
if [ $Rec_count -gt 0 ]
then
(echo "The attachment contains $Message" ; uuencode $FileName $FileName.csv ) | mailx -s "$Message" $MailList
fi
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "306"
} |
Q: Using GLEW to use OpenGL extensions under Windows I've been using OpenGL extensions on Windows the painful way. Is GLEW the easier way to go? How do I get started with it?
A: Personally I wouldn't use an exit command.
I would throw an exception so you can clear any other initialisation up at the end of the function.
ie:
try
{
// init opengl/directx
// init directaudio
// init directinput
if (GLEW_OK != glewInit())
{
throw std::runtime_error("glewInit failed");
}
}
catch ( const std::exception& ex )
{
// message to screen using ex.what()
// clear up
}
And I agree with OJ - if you want to write tutorials for others, then this is really the wrong place for it. There are already a load of good places for opengl tutorials. Try this one for instance.
A: I lost some time, but finally I managed to get GLEW working.
I'm using Windows 7 (x64), Eclipse CDT, and MinGW, and this is the way I did it:
Download MSYS (for MinGW) and remember to have MinGW installed correctly (PATH environment variable set correctly):
http://sourceforge.net/projects/mingw/files/MSYS/Base/msys-core/msys-1.0.10/MSYS-1.0.10.exe/download?use_mirror=freefr&download=
Once MSYS is installed, go to:
http://glew.sourceforge.net/
and download the TGZ package, which is intended for use with UNIX systems.
Then open the package (you can use 7zip as well) and find the "Makefile".
Open it with a text editor (Notepad should work fine), find the row which contains "GLEW_DEST", and replace it with something like "GLEW_DEST ?= C:/MinGW".
Now you are ready to go: open MSYS (C:\MinGW\msys\1.0\msys.bat in my case) and, in the shell that opens, go to the folder where the "Makefile" is.
Then run a simple "make install" and the work is done (at least it worked for me).
PS: I also copy-pasted the glew-1.10.0-win32\glew-1.10.0\bin\Release\Win32 files into my System32 folder, and in Eclipse CDT I added the library "glew32" in the linker options and added a #include <GL/glew.h> before #include <GL/glut.h>.
A: Yes, the OpenGL Extension Wrangler Library (GLEW) is a painless way to use OpenGL extensions on Windows. Here's how to get started on it:
Identify the OpenGL extension and the extension APIs you wish to use. OpenGL extensions are listed in the OpenGL Extension Registry.
Check if your graphic card supports the extensions you wish to use. Download and install the latest drivers and SDKs for your graphics card.
Recent versions of NVIDIA OpenGL SDK ship with GLEW. If you're using this, then you don't need to do some of the following steps.
Download GLEW and unzip it.
Add the GLEW bin path to your Windows PATH environment variable. Alternatively, you can also place the glew32.dll in a directory where Windows picks up its DLLs.
Add the GLEW include path to your compiler's include directory list.
Add the GLEW lib path to your compiler's library directory list.
Instruct your compiler to use glew32.lib during linking. If you're using Visual C++ compilers then one way to do this is by adding the following line to your code:
#pragma comment(lib, "glew32.lib")
Add a #include <GL/glew.h> line to your code. Ensure that this is placed above the includes of other GL header files. (You may actually not need the GL header files includes if you include glew.h.)
Initialize GLEW using glewInit() after you've initialized GLUT or GL. If it fails, then something is wrong with your setup.
if (GLEW_OK != glewInit())
{
// GLEW failed!
exit(1);
}
Check if the extension(s) you wish to use are now available through GLEW. You do this by checking a boolean variable named GLEW_your_extension_name which is exposed by GLEW.
Example:
if (!GLEW_EXT_framebuffer_object)
{
exit(1);
}
That's it! You can now use the OpenGL extension calls in your code just as if they existed naturally for Windows.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17370",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
} |
Q: How do I open the default mail program with a Subject and Body in a cross-platform way? How do I open the default mail program with a Subject and Body in a cross-platform way?
Unfortunately, this is for a client app written in Java, not a website.
I would like this to work in a cross-platform way (which means Windows and Mac, sorry Linux). I am happy to execute a VBScript in Windows, or AppleScript in OS X. But I have no idea what those scripts should contain. I would love to execute the user's default program vs. just searching for Outlook or whatever.
In OS X, I have tried executing the command:
open mailto:?subject=MySubject&body=TheBody
URL escaping is needed to replace spaces with %20.
Updated: On Windows, you have to play all sorts of games to get start to run correctly. Here is the proper Java incantation:
class Win32 extends OS {
public void email(String subject, String body) throws Exception {
String cmd = "cmd.exe /c start \"\" \"" + formatMailto(subject, body) + "\"";
Runtime.getRuntime().exec(cmd);
}
}
A: 1. Add a Subject Line
You can prefill the subject line in the email by adding the subject preceded by '?subject=' after the email address.
So the link now becomes:
<a href="mailto:[email protected]?subject=Mail from Our Site">Email Us</a>
2. Send to Multiple Recipients
Mail can be sent to additional recipients either as carbon copies (cc) or blind carbon copies (bcc).
This is done in a similar way, by placing '[email protected]' after the initial address.
So the link looks like this:
<a href="mailto:[email protected][email protected]">Email Us</a>
cc can simply be replaced by bcc if you wish to send blind carbon copies.
This can be very useful if you have links on pages with different subjects. You might have the email on each page go to the appropriate person in a company but with a copy of all mails sent to a central address also.
You can of course specify more than one additional recipient, just separate your list of recipients with a comma.
<a href="mailto:[email protected][email protected], [email protected], [email protected]">Email Us</a>
Sourced from Getting More From 'mailto' which now 404s. I retrieved the content from waybackmachine.
3. Combining Code
You can combine the various bits of code above by the addition of an '&' between each.
Thus adding
[email protected]?subject=Hello&[email protected]&[email protected]
would send an email with the subject 'Hello' to me, you and her.
4. Write the Email
You can also prefill the body of the email with the start of a message, or write the whole message if you like! Adding something to the body of the email is again as simple as above - '?body=' after the email address. However, formatting that email can be a little tricky. To create spaces between words you will have to use hex code - for example '%20' between each word - and creating new lines will mean adding '%0D'. Similarly, symbols such as $ signs will need to be written in hex code.
If you also wish to add a subject line and send copies to multiple recipients, this can make for a very long and difficult to write bit of code.
It will send a message to three people, with the subject and the message filled in, all you need to do is add your name.
Just look at the code!
<a href="mailto:[email protected][email protected]
&[email protected]&Subject=Please%2C%20I%20insist
%21&Body=Hi%0DI%20would%20like%20to%20send%20you%20
%241000000%20to%20divide%20as%20you%20see%20fit%20among
%20yourselves%20and%20all%20the%20moderators.%0DPlease%
20let%20me%20know%20to%20whom%20I%20should%20send
%20the%20check.">this link</a>
Note: Original source URL where I found this is now 404ing so I grabbed to content from waybackmachine and posted it here so it doesn't get lost. Also, the OP stated it was not for a website, which is what these examples are, but some of these techniques may still be useful.
A: start works fine in Windows (see below). I would use Java's built-in URL escaping and then just run a second replacement for '+' characters.
start mailto:"?subject=My%20Subject&body=The%20Body"
A: Never use Runtime.exec(String) on Mac OS X or any other operating system. If you do that, you'll have to figure out how to properly quote all argument strings and so on; it's a pain and very error-prone.
Instead, use Runtime.exec(String[]) which takes an array of already-separated arguments. This is much more appropriate for virtually all uses.
A:
I had to re-implement URLencode because Java's would use + for space and Mail took those literally.
I don't know if Java has some built-in method for urlencoding the string, but this link http://www.permadi.com/tutorial/urlEncoding/ shows some of the most common chars to encode:
;     %3B
?     %3F
/     %2F
:     %3A
#     %23
&     %26
=     %3D
+     %2B
$     %24
,     %2C
space %20 or +
%     %25
<     %3C
>     %3E
~     %7E
A:
I don't know if Java has some built-in method for urlencoding the string, but this link http://www.permadi.com/tutorial/urlEncoding/ shows some of the most common chars to encode:
For percent-encoding mailto URI hnames and hvalues, I use the rules at http://shadow2531.com/opera/testcases/mailto/modern_mailto_uri_scheme.html#encoding. Under http://shadow2531.com/opera/testcases/mailto/modern_mailto_uri_scheme.html#implementations, there's a Java example that may help.
Basically, I use:
private String encodex(final String s) {
try {
return java.net.URLEncoder.encode(s, "utf-8").replaceAll("\\+", "%20").replaceAll("\\%0A", "%0D%0A");
} catch (Throwable x) {
return s;
}
}
The string that's passed in should be a string with \r\n, and stray \r already normalized to \n.
Also note that just returning the original string on an exception like above is only safe if the mailto URI argument you're passing on the command-line is properly escaped and quoted.
On windows that means:
* Quote the argument.
* Escape any " inside the quotes with \.
* Escape any \ that precede a " or the end of the string with \.
Also, on Windows, if you're dealing with UTF-16 strings like in Java, you might want to use ShellExecuteW to "open" the mailto URI. If you don't, and you return s on an exception (where some hvalue isn't completely percent-encoded), you could end up narrowing some wide characters and losing information. But not all mail clients accept Unicode arguments, so ideally you want to pass a properly percent-encoded UTF-8 ASCII argument with ShellExecute.
Like 'start', ShellExecute with "open" should open the mailto URI in the default client.
Not sure about other OS's.
A: Mailto isn't a bad route to go. But as you mentioned, you'll need to make sure it is encoded correctly.
The main problem with using mailto is with breaking lines. Use %0A for carriage returns, %20 for spaces.
Also, keep in mind that the mailto is considered the same as a URL of sorts and therefore will have the same limitations for length. See
http://support.microsoft.com/kb/208427, note the maximum URL length of 2083 characters. This is confirmed for mailto as well
in this article: http://support.microsoft.com/kb/279460/en-us. Also, some mail clients can have a limit of their own (I believe older versions of Outlook Express had a limit of something much smaller, like 483 characters or so). If you expect to have a longer string than that, then you'll need to look at alternatives.
BTW, you shouldn't have to resort to kicking out a script to do that as long as you can shell out a command from Java (I don't know if you can since I don't do Java).
A: In Java 1.6 you have a standard way to open the default mailer of the platform:
the Desktop.mail(URI) method. The URI can be used to set all the fields of the mail (sender, recipients, body, subject).
You can check a full example of desktop integration in Java 1.6 on Using the Desktop API in Java SE 6
A: You may use this...
public static void main(String[] args) throws Exception {
    String sub = "My Subject";     // example values; any strings will do
    String mailBody = "The body";
    String forUri = String.format("mailto:?subject=%s&body=%s", urlEncode(sub), urlEncode(mailBody));
    Desktop.getDesktop().mail(new URI(forUri));
}
private static final String urlEncode(String str) {
try {
return URLEncoder.encode(str, "UTF-8").replace("+", "%20");
} catch (UnsupportedEncodingException e) {
throw new RuntimeException(e);
}
}
Also for formatting read A simple way of sending emails in Java: mail-to links
A: I have implemented this, and it works well on OS X. (Ryan's mention of the max URL length has not been codified.)
public void email(String subject, String body) throws Exception {
String cmd = "open mailto:";
cmd += "?subject=" + urlEncode(subject);
cmd += "&body=" + urlEncode(body);
Runtime.getRuntime().exec(cmd);
}
private static String urlEncode(String s) {
StringBuilder sb = new StringBuilder();
for (int i = 0; i < s.length(); i++) {
char ch = s.charAt(i);
if (Character.isLetterOrDigit(ch)) {
sb.append(ch);
}
else {
sb.append(String.format("%%%02X", (int)ch));
}
}
return sb.toString();
}
I had to re-implement URLencode because Java's would use + for space and Mail took those literally. Haven't tested on Windows yet.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17373",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: How would you implement FORM based authentication without a backing database? I have a PHP script that runs as a CGI program and the HTTP Authenticate header gets eaten and spit out. So I would like to implement some kind of FORM based authentication. As an added constraint, there is no database so no session data can be stored.
I am very open to having a master username and password. I just need to protect the application from an intruder who doesn't know these credentials.
So how would you implement this?
Cookies?
I could present the form and if it validates, I can send back a cookie that is a hash of the IP address come secret code. Then I can prevent pages from rendering unless the thing decrypts correctly. But I have no idea how to implement that in PHP.
A: A few ways you could do this.
*
* htaccess -- have your webserver handle securing the pages in question (not exactly cgi form based though).
* Use cookies and some sort of hashing algorithm (md5 is good enough) to store the passwords in a flat file where each line in the file is username:passwordhash. Make sure to salt your hashes for extra security vs rainbow tables. (This method is a bit naive... be very careful with security if you go this route)
* use something like a sqlite database just to handle authentication. Sqlite is compact and simple enough that it may still meet your needs even if you don't want a big db backend.
A: If you're currently using Authenticate, then you may already have an htpasswd file. If you would like to continue using that file, but switch to using FORM based authentication rather than via the Authenticate header, you can use a PHP script to use the same htpasswd file and use sessions to maintain the authentication status.
A quick Google search for php htpasswd reveals this page with a PHP function to check credentials against an htpasswd. You could integrate it (assuming you have sessions set to autostart) with some code like this:
// At the top of your 'private' page(s):
if($_SESSION['authenticated'] !== TRUE) {
header('Location: /login.php');
die();
}
// the target of the POST form from login.php
if(http_authenticate($_POST['username'], $_POST['password']))
$_SESSION['authenticated'] = TRUE;
A: Do you really need a form? No matter what you do, you're limited by the username and password being known. If they know those, they get the magic cookie that lets them in. You want to prevent them from seeing the pages if they don't know the secret, and basic authorization does that, is easy to set up, and doesn't require a lot of work on your part.
Do you really need to see the Authorization header if the web server takes care of the access control for you?
Also, if you're providing the application to a known list of people (rather than the public), you can provide web-server-based access on other factors, such as incoming IP address, client certificates, and many other things that are a matter of configuration rather than programming. If you explained your security constraints, we might be able to offer a better solution.
Good luck, :)
A: ... About salt: adding the username to your hash salt will prevent someone who knows your salt and has access to your password file from building a rainbow table and cracking a number of your users' passwords.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17376",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Privatizing a BlogEngine.Net Installation I have a blogengine.net install that requires privatization.
I'm doing research work at the moment, but I have to keep my blog/journal private until certain conditions are met.
How can I privatize my blogEngine.net install so that readers must log in to read my posts?
A: From: BlogEngine.NET 2.5 - Private Blogs
If you go into the control panel, Users tab, Roles sub-tab (right side), for "Anonymous" on the right-side Tools area, hover over that and select "Rights".
You are now on the Rights page for the Anonymous role. Uncheck everything, in particular "View Public Posts". HOWEVER, you do need to keep at least one item checked, otherwise everything reverts back to the default. For example, you could keep "View Ratings on Posts" checked. Then Save.
Then anyone who is not logged in should automatically be redirected to the Login page no matter where what page they try to enter the site at.
A: lomaxx's answer didn't work, so I decided to avoid making BlogEngine.NET perform auth for readers.
On IIS, I disabled anonymous access and added a guest user to the Win2k3 user list.
A: We created a simple tool that gives certain users access to certain posts according to their ASP.NET Membership Roles to acheive a somewhat similar result.
http://blog.lavablast.com/post/2008/08/BlogEnginenet-Post-Security.aspx
A: I use this extension. Just save the file as RequireLogin.cs in your App_Code\Extensions folder and make sure the extension is activated.
using System;
using System.Data;
using System.Configuration;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using BlogEngine.Core;
using BlogEngine.Core.Web.Controls;
using System.Collections.Generic;
/// <summary>
/// Summary description for PostSecurity
/// </summary>
[Extension("Checks to see if a user can see this blog post.",
"1.0", "<a href=\"http://www.lavablast.com\">LavaBlast.com</a>")]
public class RequireLogin
{
static protected ExtensionSettings settings = null;
public RequireLogin()
{
Post.Serving += new EventHandler<ServingEventArgs>(Post_Serving);
ExtensionSettings s = new ExtensionSettings("RequireLogin");
// describe specific rules for entering parameters
s.Help = "Checks to see if the user has any of those roles before displaying the post. ";
s.Help += "You can associate a role with a specific category. ";
s.Help += "All posts having this category will require that the user have the role. ";
s.Help += "A parameter with only a role without a category will enable to filter all posts to this role. ";
ExtensionManager.ImportSettings(s);
settings = ExtensionManager.GetSettings("RequireLogin");
}
protected void Post_Serving(object sender, ServingEventArgs e)
{
MembershipUser user = Membership.GetUser();
if(HttpContext.Current.Request.RawUrl.Contains("syndication.axd"))
{
return;
}
if (user == null)
{
HttpContext.Current.Response.Redirect("~/Login.aspx");
}
}
}
A: I would think it's possible to do this in the web config file by doing something like the following:
<system.web>
<authorization>
<allow roles="Admin" />
<deny users="*" />
</authorization>
</system.web>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17387",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How do you separate game logic from display? How can you make the display frames per second be independent from the game logic? That is, so the game logic runs at the same speed no matter how fast the video card can render.
A: Koen Witters has a very detailed article about different game loop setups.
He covers:
*
*FPS dependent on Constant Game Speed
*Game Speed dependent on Variable FPS
*Constant Game Speed with Maximum FPS
*Constant Game Speed independent of Variable FPS
(These are the headings pulled from the article, in order of desirability.)
A: You could make your game loop look like:
int lastTime = GetCurrentTime();
while(1) {
// how long is it since we last updated?
int currentTime = GetCurrentTime();
int dt = currentTime - lastTime;
lastTime = currentTime;
// now do the game logic
Update(dt);
// and you can render
Draw();
}
Then you just have to write your Update() function to take into account the time differential; e.g., if you've got an object moving at some speed v, then update its position by v * dt every frame.
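To make that concrete, here is a minimal sketch of such an Update() function (the Entity fields, units, and values are made up; it assumes GetCurrentTime() in the loop above returns milliseconds):
struct Entity {
    float x = 0.0f;     // position in world units
    float vx = 50.0f;   // velocity in world units per second
};
void Update(Entity& e, int dtMillis) {
    float dtSeconds = dtMillis / 1000.0f;
    e.x += e.vx * dtSeconds;   // covers the same distance per second at any frame rate
}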
A: I think the question reveals a bit of misunderstanding of how game engines should be designed. Which is perfectly ok, because they are damn complex things that are difficult to get right ;)
You are under the correct impression that you want what is called Frame Rate Independence. But this does not only refer to Rendering Frames.
A Frame in single threaded game engines is commonly referred to as a Tick. Every Tick you process input, process game logic, and render a frame based off of the results of the processing.
What you want to do is be able to process your game logic at any FPS (Frames Per Second) and have a deterministic result.
This becomes a problem in the following case:
Check input:
- Input is key: 'W' which means we move the player character forward 10 units:
playerPosition += 10;
Now since you are doing this every frame, if you are running at 30 FPS you will move 300 units per second.
But if you are instead running at 10 FPS, you will only move 100 units per second. And thus your game logic is not Frame Rate Independent.
Happily, to solve this problem and make your game play logic Frame Rate Independent is a rather simple task.
First, you need a timer which will count the time each frame takes to render. This number in terms of seconds (so 0.001 seconds to complete a Tick) is then multiplied by what ever it is that you want to be Frame Rate Independent. So in this case:
When holding 'W'
playerPosition += 10 * frameTimeDelta;
(Delta is a fancy word for "Change In Something")
So your player will move some fraction of 10 in a single Tick, and after a full second of Ticks, you will have moved the full 10 units.
However, this will fall down when it comes to properties where the rate of change also changes over time, for example an accelerating vehicle. This can be resolved by using a more advanced integrator, such as "Verlet".
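For instance, a velocity Verlet step for a single value looks roughly like this (a sketch only; the Body fields are invented and acceleration() is a placeholder returning constant gravity):
struct Body {
    float x;  // position
    float v;  // velocity
    float a;  // acceleration from the previous step
};
float acceleration(const Body&) { return -9.81f; }  // placeholder: constant gravity
void verletStep(Body& b, float dt) {
    b.x += b.v * dt + 0.5f * b.a * dt * dt;   // advance position with the old acceleration
    float newA = acceleration(b);             // acceleration at the new state
    b.v += 0.5f * (b.a + newA) * dt;          // average old and new accelerations
    b.a = newA;
}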
Multithreaded Approach
If you are still interested in an answer to your question (since I didn't answer it but presented an alternative), here it is: separating Game Logic and Rendering into different threads. It has its drawbacks, though. Enough so that the vast majority of Game Engines remain single threaded.
That's not to say there is only ever one thread running in so called single threaded engines. But all significant tasks are usually in one central thread. Some things like Collision Detection may be multithreaded, but generally the Collision phase of a Tick blocks until all the threads have returned, and the engine is back to a single thread of execution.
Multithreading presents a whole, very large class of issues, even some performance ones since everything, even containers, must be thread safe. And Game Engines are very complex programs to begin with, so it is rarely worth the added complication of multithreading them.
Fixed Time Step Approach
Lastly, as another commenter noted, having a Fixed size time step, and controlling how often you "step" the game logic can also be a very effective way of handling this with many benefits.
Linked here for completeness, but the other commenter also links to it:
Fix Your Time Step
A: There was an excellent article on flipcode about this back in the day. I would like to dig it up and present it for you.
http://www.flipcode.com/archives/Main_Loop_with_Fixed_Time_Steps.shtml
It's a nicely thought out loop for running a game:
*
*Single threaded
*At a fixed game clock
*With graphics as fast as possible using an interpolated clock
Well, at least that's what I think it is. :-) Too bad the discussion that ensued after this posting is harder to find. Perhaps the wayback machine can help there.
time0 = getTickCount();
do
{
time1 = getTickCount();
frameTime = 0;
int numLoops = 0;
while ((time1 - time0) > TICK_TIME && numLoops < MAX_LOOPS)
{
GameTickRun();
time0 += TICK_TIME;
frameTime += TICK_TIME;
numLoops++;
// Could this be a good idea? We're not doing it, anyway.
// time1 = getTickCount();
}
IndependentTickRun(frameTime);
// If playing solo and game logic takes way too long, discard pending time.
if (!bNetworkGame && (time1 - time0) > TICK_TIME)
time0 = time1 - TICK_TIME;
if (canRender)
{
// Account for numLoops overflow causing percent > 1.
float percentWithinTick = Min(1.f, float(time1 - time0)/TICK_TIME);
GameDrawWithInterpolation(percentWithinTick);
}
}
while (!bGameDone);
A: Enginuity has a slightly different, but interesting approach: the Task Pool.
A: Single-threaded solutions with time delays before displaying graphics are fine, but I think the progressive way is to run game logic in one thread and display in another thread.
But you should synchronize the threads the right way ;) It'll take a long time to implement, so if your game is not too big, a single-threaded solution will be fine.
Also, extracting the GUI into a separate thread seems to be a great approach. Have you ever seen a "Mission complete" pop-up message while units are moving around in RTS games? That's what I'm talking about :)
A: This doesn't cover the higher program abstraction stuff, i.e. state machines etc.
It's fine to control movement and acceleration by adjusting those with your frame time lapse. But how about stuff like triggering a sound 2.55 seconds after this or that, or changing the game level 18.25 secs later, etc.?
That can be tied to an elapsed frame time accumulator (counter), BUT these timings can get screwed up if your frame rate falls below your state script resolution, i.e. if your higher logic needs 0.05 sec granularity and you fall below 20 fps.
Determinism can be kept if the game logic is run on a separate "thread" (at the software level, which I would prefer for this, or OS level) with a fixed time-slice, independent of fps.
The penalty might be that you waste CPU time in-between frames if not much is happening, but I think it's probably worth it. A sketch of that fixed time-slice idea is below.
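Here is what that fixed time-slice might look like in practice, using the 2.55 second sound trigger from above (a sketch only; all names are made up, and LOGIC_DT of 0.05 s matches the 20 fps granularity example):
const float LOGIC_DT = 0.05f;   // fixed logic step: 20 steps per second
float accumulator = 0.0f;
float gameTime = 0.0f;
bool soundPlayed = false;
void logicStep() {
    gameTime += LOGIC_DT;
    if (!soundPlayed && gameTime >= 2.55f) {
        // playSound();  // hypothetical call
        soundPlayed = true;
    }
}
void tick(float frameDt) {              // called once per rendered frame
    accumulator += frameDt;
    while (accumulator >= LOGIC_DT) {   // catches up even when a frame was slow
        logicStep();
        accumulator -= LOGIC_DT;
    }
}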
A: From my experience (not much), Jesse and Adam's answers should put you on the right track.
If you are after further information and insight into how this works, I found that the sample applications for TrueVision 3D were very useful.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17411",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: Language Conversion Testing We created a tool which converts a language called P2, a language similar to assembly which I think exists only in Japan, to C#. There are at least a hundred modules written in P2 and we want to verify that the conversion to C# is right. How can we test this?
A: You don't test the converter, you test the final code.
If the code doesn't compile, clearly your converter is failing. If the code compiles and your functionality tests fail, then you can tweak the code so that it passes the test. If you are fairly successful you should see that you only need to fix the modules that actually fail.
Good luck!
A: Short of a formal mathematical proof (which I imagine would be difficult), the proof of the pudding is in the unit tests. You have to find a way to wrap the converted C# snippets, compile them and run them under a similar environment, then compare the output against the original. Unless you're rigorous in your testing, there's no way you can be confident of the result.
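As a rough illustration of that idea (a sketch only; the program names and file names are made up, and it assumes each converted module can be driven from the command line with reference output captured from the original P2 module):
// Runs the converted module over a shared test input and diffs its output
// against the output recorded from the original P2 module.
#include <cstdlib>
#include <fstream>
#include <iostream>
#include <string>
bool filesMatch(const std::string& a, const std::string& b) {
    std::ifstream fa(a), fb(b);
    std::string la, lb;
    while (true) {
        bool ga = static_cast<bool>(std::getline(fa, la));
        bool gb = static_cast<bool>(std::getline(fb, lb));
        if (ga != gb || la != lb) return false;
        if (!ga) return true;    // both streams ended with identical lines
    }
}
int main() {
    std::system("converted_module.exe < input.txt > converted_out.txt");
    if (filesMatch("p2_out.txt", "converted_out.txt"))
        std::cout << "module OK\n";
    else
        std::cout << "module differs\n";
    return 0;
}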
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17430",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: When should you use 'friend' in C++? I have been reading through the C++ FAQ and was curious about the friend declaration. I personally have never used it, however I am interested in exploring the language.
What is a good example of using friend?
Reading the FAQ a bit longer I like the idea of the << >> operator overloading and adding as a friend of those classes. However I am not sure how this doesn't break encapsulation. When can these exceptions stay within the strictness that is OOP?
A: You control the access rights for members and functions using Private/Protected/Public right?
so assuming the idea of each and every one of those 3 levels is clear, then it should be clear that we are missing something...
The declaration of a member/function as protected, for example, is pretty generic. You are saying that this function is out of reach for everyone (except for an inherited child, of course). But what about exceptions? Every security system lets you have some type of "white list", right?
So friend lets you have the flexibility of having rock solid object isolation, but allows for a "loophole" to be created for things that you feel are justified.
I guess people say it is not needed because there is always a design that will do without it. I think it is similar to the discussion of global variables: you should never use them, there is always a way to do without them... but in reality, you see cases where that ends up being the (almost) most elegant way... I think this is the same case with friends.
It doesn't really do any good, other than let you access a member variable without using a setting function
well that is not exactly the way to look at it.
The idea is to control WHO can access what, having or not a setting function has little to do with it.
A: I found a handy place to use friend access: unit testing of private functions.
A: The creator of C++ says that it isn't breaking any encapsulation principle, and I will quote him:
Does "friend" violate encapsulation?
No. It does not. "Friend" is an explicit mechanism for granting access, just like membership. You cannot (in a standard conforming program) grant yourself access to a class without modifying its source.
It's more than clear...
A: Friend comes in handy when you are building a container and you want to implement an iterator for that class.
A: @roo: Encapsulation is not broken here because the class itself dictates who can access its private members. Encapsulation would only be broken if this could be caused from outside the class, e.g. if your operator << would proclaim “I'm a friend of class foo.”
friend replaces use of public, not use of private!
Actually, the C++ FAQ answers this already.
A: We had an interesting issue come up at a company I previously worked at where we used friend to decent effect. I worked in our framework department; we created a basic engine-level system over our custom OS. Internally we had a class structure:
Game
/ \
TwoPlayer SinglePlayer
All of these classes were part of the framework and maintained by our team. The games produced by the company were built on top of this framework, deriving from one of Game's children. The issue was that Game had interfaces to various things that SinglePlayer and TwoPlayer needed access to but that we did not want to expose outside of the framework classes. The solution was to make those interfaces private and allow TwoPlayer and SinglePlayer access to them via friendship.
Truthfully this whole issue could have been resolved by a better implementation of our system but we were locked into what we had.
A: The short answer would be: use friend when it actually improves encapsulation. Improving readability and usability (operators << and >> are the canonical example) is also a good reason.
As for examples of improving encapsulation, classes specifically designed to work with the internals of other classes (test classes come to mind) are good candidates.
A: Firstly (IMO) don't listen to people who say friend is not useful. It IS useful. In many situations you will have objects with data or functionality that are not intended to be publicly available. This is particularly true of large codebases with many authors who may only be superficially familiar with different areas.
There ARE alternatives to the friend specifier, but often they are cumbersome (cpp-level concrete classes/masked typedefs) or not foolproof (comments or function name conventions).
Onto the answer;
The friend specifier allows the designated class access to protected data or functionality within the class making the friend statement. For example in the below code anyone may ask a child for their name, but only the mother and the child may change the name.
You can take this simple example further by considering a more complex class such as a Window. Quite likely a Window will have many function/data elements that should not be publicly accessible, but ARE needed by a related class such as a WindowManager.
class Child
{
//Mother class members can access the private parts of class Child.
friend class Mother;
public:
string name( void );
protected:
void setName( string newName );
};
A: The canonical example is to overload operator<<. Another common use is to allow a helper or admin class access to your internals.
Here are a couple of guidelines I heard about C++ friends. The last one is particularly memorable.
*
*Your friends are not your child's friends.
*Your child's friends are not your friends.
*Only friends can touch your private parts.
A: To do TDD, I've used the 'friend' keyword in C++ many times.
Can a friend know everything about me?
Updated: I found this valuable answer about the "friend" keyword on Bjarne Stroustrup's site.
"Friend" is an explicit mechanism for granting access, just like membership.
A: Another use: friend (+ virtual inheritance) can be used to avoid deriving from a class (aka: "make a class underivable") => 1, 2
From 2:
class Fred;
class FredBase {
private:
friend class Fred;
FredBase() { }
};
class Fred : private virtual FredBase {
public:
...
};
A: You have to be very careful about when/where you use the friend keyword, and, like you, I have used it very rarely. Below are some notes on using friend and the alternatives.
Let's say you want to compare two objects to see if they're equal. You could either:
*
*Use accessor methods to do the comparison (check every ivar and determine equality).
*Or, you could access all the members directly by making them public.
The problem with the first option is that there could be a LOT of accessors, which are (slightly) slower than direct variable access, harder to read, and cumbersome. The problem with the second approach is that you completely break encapsulation.
What would be nice is if we could define an external function which could still get access to the private members of a class. We can do this with the friend keyword:
class Beer {
public:
friend bool equal(Beer a, Beer b);
private:
// ...
};
The method equal(Beer, Beer) now has direct access to a and b's private members (which may be char *brand, float percentAlcohol, etc.). This is a rather contrived example; you would sooner apply friend to an overloaded == operator, but we'll get to that.
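A sketch of what that friend function's definition might look like, assuming the private members really are char *brand and float percentAlcohol (and that <cstring> is included for strcmp):
bool equal(Beer a, Beer b) {
    // Legal only because equal() was declared a friend inside Beer.
    return std::strcmp(a.brand, b.brand) == 0
        && a.percentAlcohol == b.percentAlcohol;
}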
A few things to note:
*
*A friend is NOT a member function of the class
*It is an ordinary function with special access to the private members of the class
*Don't replace all accessors and mutators with friends (you may as well make everything public!)
*Friendship isn't reciprocal
*Friendship isn't transitive
*Friendship isn't inherited
*Or, as the C++ FAQ explains: "Just because I grant you friendship access to me doesn't automatically grant your kids access to me, doesn't automatically grant your friends access to me, and doesn't automatically grant me access to you."
I only really use friends when it's much harder to do it the other way. As another example, many vector maths functions are often created as friends due to the interoperability of Mat2x2, Mat3x3, Mat4x4, Vec2, Vec3, Vec4, etc. And it's just so much easier to be friends, rather than have to use accessors everywhere. As pointed out, friend is often useful when applied to the << (really handy for debugging), >> and maybe the == operator, but can also be used for something like this:
class Birds {
public:
friend Birds operator +(Birds, Birds);
private:
int numberInFlock;
};
Birds operator +(Birds b1, Birds b2) {
Birds temp;
temp.numberInFlock = b1.numberInFlock + b2.numberInFlock;
return temp;
}
As I say, I don't use friend very often at all, but every now and then it's just what you need. Hope this helps!
A: As the reference for friend declaration says:
The friend declaration appears in a class body and grants a function or another class access to private and protected members of the class where the friend declaration appears.
So just as a reminder, there are technical errors in some of the answers which say that friend can only visit protected members.
A:
edit: Reading the faq a bit longer I like the idea of the << >> operator overloading and adding as a friend of those classes, however I am not sure how this doesn't break encapsulation
How would it break encapsulation?
You break encapsulation when you allow unrestricted access to a data member. Consider the following classes:
class c1 {
public:
int x;
};
class c2 {
public:
int foo();
private:
int x;
};
class c3 {
friend int foo();
private:
int x;
};
c1 is obviously not encapsulated. Anyone can read and modify x in it. We have no way to enforce any kind of access control.
c2 is obviously encapsulated. There is no public access to x. All you can do is call the foo function, which performs some meaningful operation on the class.
c3? Is that less encapsulated? Does it allow unrestricted access to x? Does it allow unknown functions access?
No. It allows precisely one function to access the private members of the class. Just like c2 did. And just like c2, the one function which has access is not "some random, unknown function", but "the function listed in the class definition". Just like c2, we can see, just by looking at the class definitions, a complete list of who has access.
So how exactly is this less encapsulated? The same amount of code has access to the private members of the class. And everyone who has access is listed in the class definition.
friend does not break encapsulation. It makes some Java programmers feel uncomfortable, because when they say "OOP", they actually mean "Java". When they say "Encapsulation", they don't mean "private members must be protected from arbitrary accesses", but "a Java class where the only functions able to access private members are class members", even though this is complete nonsense for several reasons.
First, as already shown, it is too restricting. There's no reason why friend methods shouldn't be allowed to do the same.
Second, it is not restrictive enough. Consider a fourth class:
class c4 {
public:
int getx();
void setx(int x);
private:
int x;
};
This, according to the aforesaid Java mentality, is perfectly encapsulated.
And yet, it allows absolutely anyone to read and modify x. How does that even make sense? (hint: It doesn't)
Bottom line:
Encapsulation is about being able to control which functions can access private members. It is not about precisely where the definitions of these functions are located.
A: With regards to operator<< and operator>> there is no good reason to make these operators friends. It is true that they should not be member functions, but they don't need to be friends, either.
The best thing to do is create public print(ostream&) and read(istream&) functions. Then, write the operator<< and operator>> in terms of those functions. This gives the added benefit of allowing you to make those functions virtual, which provides virtual serialization.
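A small sketch of that approach (the class and member names are invented; the operators are plain non-member, non-friend functions written in terms of the public virtual hooks):
#include <iostream>
class Shape {
public:
    virtual ~Shape() {}
    // Derived classes override these; no friendship is required.
    virtual void print(std::ostream& os) const { os << "Shape"; }
    virtual void read(std::istream&) { /* parse fields here */ }
};
std::ostream& operator<<(std::ostream& os, const Shape& s) {
    s.print(os);   // dispatches virtually, so derived types serialize themselves
    return os;
}
std::istream& operator>>(std::istream& is, Shape& s) {
    s.read(is);
    return is;
}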
A: I'm only using the friend keyword to unit-test protected functions. Some will say that you shouldn't test protected functionality. I, however, find this a very useful tool when adding new functionality.
However, I don't use the keyword directly in the class declarations; instead, I use a nifty template hack to achieve this:
template<typename T>
class FriendIdentity {
public:
typedef T me;
};
/**
* A class to get access to protected stuff in unittests. Don't use
* directly, use friendMe() instead.
*/
template<class ToFriend, typename ParentClass>
class Friender: public ParentClass
{
public:
Friender() {}
virtual ~Friender() {}
private:
// MSVC != GCC
#ifdef _MSC_VER
friend ToFriend;
#else
friend class FriendIdentity<ToFriend>::me;
#endif
};
/**
* Gives access to protected variables/functions in unittests.
* Usage: <code>friendMe(this, someprotectedobject).someProtectedMethod();</code>
*/
template<typename Tester, typename ParentClass>
Friender<Tester, ParentClass> &
friendMe(Tester * me, ParentClass & instance)
{
return (Friender<Tester, ParentClass> &)(instance);
}
This enables me to do the following:
friendMe(this, someClassInstance).someProtectedFunction();
Works on GCC and MSVC, at least.
A: In C++ "friend" keyword is useful in Operator overloading and Making Bridge.
1.) Friend keyword in operator overloading :Example for operator overloading is: Let say we have a class "Point" that has two float variable"x"(for x-coordinate) and "y"(for y-coordinate). Now we have to overload "<<"(extraction operator) such that if we call "cout << pointobj" then it will print x and y coordinate (where pointobj is an object of class Point). To do this we have two option:
1.Overload "operator <<()" function in "ostream" class.
2.Overload "operator<<()" function in "Point" class.
Now First option is not good because if we need to overload again this operator for some different class then we have to again make change in "ostream" class.
That's why second is best option. Now compiler can call
"operator <<()" function:
1.Using ostream object cout.As: cout.operator<<(Pointobj) (form ostream class). 2.Call without an object.As: operator<<(cout, Pointobj) (from Point class).
Beacause we have implemented overloading in Point class. So to call this function without an object we have to add"friend" keyword because we can call a friend function without an object.
Now function declaration will be As:
"friend ostream &operator<<(ostream &cout, Point &pointobj);"
2.) Friend keyword in making bridge :
Suppose we have to make a function in which we have to access private member of two or more classes ( generally termed as "bridge" ) .
How to do this:
To access private member of a class it should be member of that class. Now to access private member of other class every class should declare that function as a friend function. For example :
Suppose there are two class A and B. A function "funcBridge()" want to access private member of both classes. Then both class should declare "funcBridge()" as:
friend return_type funcBridge(A &a_obj, B & b_obj);I think this would help to understand friend keyword.
A: At work we use friends for testing code, extensively. It means we can provide proper encapsulation and information hiding for the main application code. But also we can have separate test code that uses friends to inspect internal state and data for testing.
Suffice to say I wouldn't use the friend keyword as an essential component of your design.
A: The friend keyword has a number of good uses. Here are the two uses immediately visible to me:
Friend Definition
A friend definition allows you to define a function in class scope; the function will not be a member function, but a free function of the enclosing namespace, and won't normally be visible except through argument-dependent lookup. That makes it especially useful for operator overloading:
namespace utils {
class f {
private:
typedef int int_type;
int_type value;
public:
// let's assume it doesn't only need .value, but some
// internal stuff.
friend f operator+(f const& a, f const& b) {
// name resolution finds names in class-scope.
// int_type is visible here.
return f(a.value + b.value);
}
int getValue() const { return value; }
};
}
int main() {
utils::f a, b;
std::cout << (a + b).getValue(); // valid
}
Private CRTP Base Class
Sometimes, you find the need that a policy needs access to the derived class:
// possible policy used for flexible-class.
template<typename Derived>
struct Policy {
void doSomething() {
// casting this to Derived* requires us to see that we are a
// base-class of Derived.
some_type const& t = static_cast<Derived*>(this)->getSomething();
}
};
// note, derived privately
template<template<typename> class SomePolicy>
struct FlexibleClass : private SomePolicy<FlexibleClass> {
// we derive privately, so the base-class wouldn't notice that,
// (even though it's the base itself!), so we need a friend declaration
// to make the base a friend of us.
friend class SomePolicy<FlexibleClass>;
void doStuff() {
// calls doSomething of the policy
this->doSomething();
}
// will return useful information
some_type getSomething();
};
You will find a non-contrived example for that in this answer. Another code using that is in this answer. The CRTP base casts its this pointer, to be able to access data-fields of the derived class using data-member-pointers.
A: Another common version of Andrew's example, the dreaded code-couplet
parent.addChild(child);
child.setParent(parent);
Instead of worrying if both lines are always done together and in consistent order you could make the methods private and have a friend function to enforce consistency:
class Parent;
class Object {
private:
void setParent(Parent&);
friend void addChild(Parent& parent, Object& child);
};
class Parent : public Object {
private:
void addChild(Object& child);
friend void addChild(Parent& parent, Object& child);
};
void addChild(Parent& parent, Object& child) {
if( &parent == &child ){
wetPants();
}
parent.addChild(child);
child.setParent(parent);
}
In other words you can keep the public interfaces smaller and enforce invariants that cut across classes and objects in friend functions.
A: The tree example is a pretty good example: having an object implemented in a few different classes without having an inheritance relationship.
Maybe you also need it to keep a constructor protected and force people to use your "friend" factory.
... OK, well, frankly you can live without it.
A:
To do TDD many times I've used 'friend' keyword in C++.Can a friend know everything about me?
No, it's only a one-way friendship :`(
A: One specific instance where I use friend is when creating Singleton classes. The friend keyword lets me create an accessor function, which is more concise than always having a "GetInstance()" method on the class.
/////////////////////////
// Header file
class MySingleton
{
private:
// Private c-tor for Singleton pattern
MySingleton() {}
friend MySingleton& GetMySingleton();
};
// Accessor function - less verbose than having a "GetInstance()"
// static function on the class
MySingleton& GetMySingleton();
/////////////////////////
// Implementation file
MySingleton& GetMySingleton()
{
static MySingleton theInstance;
return theInstance;
}
A: Friend functions and classes provide direct access to private and protected members of class to avoid breaking encapsulation in the general case. Most usage is with ostream: we would like to be able to type:
Point p;
cout << p;
However, this may require access to the private data of Point, so we define the overloaded operator
friend ostream& operator<<(ostream& output, const Point& p);
There are obvious encapsulation implications, however. First, now the friend class or function has full access to ALL members of the class, even ones that do not pertain to its needs. Second, the implementations of the class and the friend are now enmeshed to the point where an internal change in the class can break the friend.
If you view the friend as an extension of the class, then this is not an issue, logically speaking. But, in that case, why was it necessary to separate out the friend in the first place?
To achieve the same thing that 'friends' purport to achieve, but without breaking encapsulation, one can do this:
class A
{
public:
void need_your_data(B & myBuddy)
{
myBuddy.take_this_name(name_);
}
private:
string name_;
};
class B
{
public:
void print_buddy_name(A & myBuddy)
{
myBuddy.need_your_data(*this);
}
void take_this_name(const string & name)
{
cout << name;
}
};
Encapsulation is not broken, class B has no access to the internal implementation in A, yet the result is the same as if we had declared B a friend of A.
The compiler will optimize away the function calls, so this will result in the same instructions as direct access.
I think using 'friend' is simply a shortcut with arguable benefit, but definite cost.
A: When implementing tree algorithms for a class, the framework code the prof gave us had the tree class as a friend of the node class.
It doesn't really do any good, other than let you access a member variable without using a setting function.
A: You could adhere to the strictest and purest OOP principles and ensure that no data members of any class even have accessors, so that objects are the only ones that know about their data and the only way to act on them is through indirect messages, i.e., methods.
But even C# has an internal visibility keyword, and Java has its default package-level accessibility for some things. C++ actually comes closer to the OOP ideal by minimizing the compromise of visibility into a class, specifying exactly which other classes, and only those classes, can see into it.
I don't really use C++, but if C# had friends I would use them instead of the assembly-global internal modifier, which I actually use a lot. It doesn't really break encapsulation, because the unit of deployment in .NET is an assembly.
But then there's the InternalsVisibleToAttribute(otherAssembly) which acts like a cross-assembly friend mechanism. Microsoft uses this for visual designer assemblies.
A: You may use friendship when different classes (not inheriting one from the other) are using private or protected members of the other class.
Typical use cases of friend functions are operations that are
conducted between two different classes accessing private or protected
members of both.
from http://www.cplusplus.com/doc/tutorial/inheritance/ .
You can see this in the example below, where a non-member method accesses the private members of a class. The method has to be declared in that very class as a friend of the class.
// friend functions
#include <iostream>
using namespace std;
class Rectangle {
int width, height;
public:
Rectangle() {}
Rectangle (int x, int y) : width(x), height(y) {}
int area() {return width * height;}
friend Rectangle duplicate (const Rectangle&);
};
Rectangle duplicate (const Rectangle& param)
{
Rectangle res;
res.width = param.width*2;
res.height = param.height*2;
return res;
}
int main () {
Rectangle foo;
Rectangle bar (2,3);
foo = duplicate (bar);
cout << foo.area() << '\n';
return 0;
}
A: Probably I missed something from the answers above but another important concept in encapsulation is hiding of implementation. Reducing access to private data members (the implementation details of a class) allows much easier modification of the code later. If a friend directly accesses the private data, any changes to the implementation data fields (private data), break the code accessing that data. Using access methods mostly eliminates this. Fairly important I would think.
A: This may not be an actual use case situation but may help to illustrate the use of friend between classes.
The ClubHouse
class ClubHouse {
public:
friend class VIPMember; // VIP Members Have Full Access To Class
private:
unsigned nonMembers_;
unsigned paidMembers_;
unsigned vipMembers;
std::vector<Member> members_;
public:
ClubHouse() : nonMembers_(0), paidMembers_(0), vipMembers(0) {}
void addMember( const Member& member ) { // ...code }
void updateMembership( unsigned memberID, Member::MembershipType type ) { // ...code }
Amenity getAmenity( unsigned memberID ) { // ...code }
protected:
void joinVIPEvent( unsigned memberID ) { // ...code }
}; // ClubHouse
The Member Classes
class Member {
public:
enum MemberShipType {
NON_MEMBER_PAID_EVENT, // Single Event Paid (At Door)
PAID_MEMBERSHIP, // Monthly - Yearly Subscription
VIP_MEMBERSHIP, // Highest Possible Membership
}; // MemberShipType
protected:
MemberShipType type_;
unsigned id_;
Amenity amenity_;
public:
Member( unsigned id, MemberShipType type ) : id_(id), type_(type) {}
virtual ~Member(){}
unsigned getId() const { return id_; }
MemberShipType getType() const { return type_; }
virtual void getAmenityFromClubHouse() = 0;
};
class NonMember : public Member {
public:
explicit NonMember( unsigned id ) : Member( id, MemberShipType::NON_MEMBER_PAID_EVENT ) {}
void getAmenityFromClubHouse() override {
amenity_ = ClubHouse::getAmenity( this->id_ );
}
};
class PaidMember : public Member {
public:
explicit PaidMember( unsigned id ) : Member( id, MemberShipType::PAID_MEMBERSHIP ) {}
void getAmenityFromClubHouse() override {
amenity_ = ClubHouse::getAmenity( this->id_ );
}
};
class VIPMember : public Member {
public:
friend class ClubHouse;
public:
explicit VIPMember( unsigned id ) : Member( id, MemberShipType::VIP_MEMBERSHIP ) {}
void getAmenityFromClubHouse() override {
amenity_ = ClubHouse::getAmenity( this->id_ );
}
void attendVIPEvent() {
ClubHouse::joinVIPEvent( this->id_ );
}
};
Amenities
class Amenity{};
If you look at the relationship of these classes here; the ClubHouse holds a variety of different types of memberships and membership access. The Members are all derived from a super or base class since they all share an ID and an enumerated type that are common and outside classes can access their IDs and Types through access functions that are found in the base class.
However through this kind of hierarchy of the Members and its Derived classes and their relationship with the ClubHouse class the only one of the derived class's that has "special privileges" is the VIPMember class. The base class and the other 2 derived classes can not access the ClubHouse's joinVIPEvent() method, yet the VIP Member class has that privilege as if it has complete access to that event.
So with the VIPMember and the ClubHouse it is a two way street of access where the other Member Classes are limited.
A: Seems I'm about 14 years late to the party. But here goes.
TLDR TLDR
Friend classes are there so that you can extend encapsulation to the group of classes which comprise your data structure.
TLDR
Your data structure in general consists of multiple classes. Similarly to a traditional class (supported by your programming language), your data structure is a generalized class which also has data and invariants on that data which spans across objects of multiple classes. Encapsulation protects those invariants against accidental modification of the data from the outside, so that the data-structure's operations ("member functions") work correctly. Friend classes extend encapsulation from classes to your generalized class.
The too long
A class is a datatype together with invariants which specify a subset of the values of the datatype, called the valid states. An object is a valid state of a class. A member function of a class moves a given object from a valid state to another.
It is essential that object data is not modified from outside of the class member functions, because this could break the class invariants (i.e. move the object to an invalid state). Encapsulation prohibits access to object data from outside of the class. This is an important safety feature of programming languages, because it makes it hard to inadvertedly break class invariants.
A class is often a natural choice for implementing a data structure, because the properties (e.g. performance) of a data structure are dependent on invariants on its data (e.g. red-black tree invariants). However, sometimes a single class is not enough to describe a data structure.
A data structure is any set of data, invariants, and functions which move that data from a valid state to another. This is a generalization of a class. The subtle difference is that the data may be scattered over datatypes rather than be concentrated on a single datatype.
Data structure example
A prototypical example of a data structure is a graph which is stored using separate objects for vertices (class Vertex), edges (class Edge), and the graph (class Graph). These classes do not make sense independently. The Graph class creates Vertexs and Edges by its member functions (e.g. graph.addVertex() and graph.addEdge(aVertex, bVertex)) and returns pointers (or similar) to them. Vertexs and Edges are similarly destroyed by their owning Graph (e.g. graph.removeVertex(vertex) and graph.removeEdge(edge)). The collection of Vertex objects, Edge objects and the Graph object together encode a mathematical graph. In this example the intention is that Vertex/Edge objects are not shared between Graph objects (other design choices are also possible).
A Graph object could store a list of all its vertices and edges, while each Vertex could store a pointer to its owning Graph. Hence, the Graph object represents the whole mathematical graph, and you would pass that around whenever the mathematical graph is needed.
Invariant example
An invariant for the graph data structure then would be that a Vertex is listed in its owner Graph's list. This invariant spans both the Vertex object and the Graph object. Multiple objects of multiple types can take part in a given invariant.
Encapsulation example
Similarly to a class, a data structure benefits from encapsulation which protects against accidental modification of its data. This is because the data structure needs to preserve invariants to be able to function in promised manner, exactly like a class.
In the graph data structure example, you would state that Vertex is a friend of Graph, and also make the constructors and data-members of Vertex private so that a Vertex can only be created and modified by Graph. In particular, Vertex would have a private constructor which accepts a pointer to its owning graph. This constructor is called in graph.addVertex(), which is possible because Vertex is a friend of Graph. (But note that Graph is not a friend of Vertex: there is no need for Vertex to be able to access Graph's vertex-list, say.)
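A stripped-down sketch of that arrangement, keeping only the friendship-related parts (the details are invented):
#include <vector>
class Graph;
class Vertex {
    friend class Graph;                 // only Graph may create or modify vertices
private:
    explicit Vertex(Graph* owner) : owner_(owner) {}
    Graph* owner_;
};
class Graph {
public:
    Vertex* addVertex() {
        vertices_.push_back(new Vertex(this));   // allowed because Vertex befriends Graph
        return vertices_.back();
    }
    ~Graph() { for (Vertex* v : vertices_) delete v; }
private:
    std::vector<Vertex*> vertices_;
};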
Terminology
The definition of a data structure acts itself like a class. I propose that we start using the term 'generalized class' for any set of data, invariants, and functions which move that data from a valid state to another. A C++ class is then a specific kind of a generalized class. It is then self-evident that friend classes are the precise mechanism for extending encapsulation from C++ classes to generalized classes.
(In fact, I'd like the term 'class' to be replaced with the concept of 'generalized class', and use 'native class' for the special case of a class supported by the programming language. Then when teaching classes you would learn of both native classes and these generalized classes. But perhaps that would be confusing.)
A: Friends are also useful for callbacks. You could implement callbacks as static methods
class MyFoo
{
private:
static void callback(void * data, void * clientData);
void localCallback();
...
};
where callback calls localCallback internally, and the clientData has your instance in it. Or, better in my opinion...
class MyFoo
{
friend void callback(void * data, void * callData);
void localCallback();
}
What this allows is for the friend to be defined purely in the cpp as a C-style function, and not clutter up the class.
Similarly, a pattern I've seen very often is to put all the really private members of a class into another class, which is declared in the header, defined in the cpp, and friended. This allows the coder to hide a lot of the complexity and internal working of the class from the user of the header.
In the header:
class MyFooPrivate;
class MyFoo
{
friend class MyFooPrivate;
public:
MyFoo();
// Public stuff
private:
MyFooPrivate *_private;
// Other private members as needed
};
In the cpp,
class MyFooPrivate
{
public:
MyFoo *owner;
// Your complexity here
};
MyFoo::MyFoo()
{
this->_private->owner = this;
}
It becomes easier to hide things that the downstream needn't see this way.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17434",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "385"
} |
Q: Will server-side JavaScript take off? Which implementation is most stable? Does anyone see server-side JavaScript taking off? There are a couple of implementations out there, but it all seems to be a bit of a stretch (as in, "doing it BECAUSE WE CAN" type of attitude).
I'm curious to know if anyone actually writes JavaScript for the server-side and what their experiences with it have been to date.
Also, which implementation is generally seen as the most stable?
A:
Why would you want to process
something in Javascript when you can
process it in PHP or ASP.NET which are
designed specifically for this task?
Perhaps because JavaScript is a more powerful programming language than those two? For example, it has functions as first-class data types and support for closures.
Steve Yegge has blogged about porting Ruby on Rails to server-side JavaScript as an internal project within Google ("Rhino on Rails"). He did it because he likes Rails but using Ruby isn't allowed within Google.
A: Before it was acquired by Google, JotSpot used server-side JavaScript to let you query their database and display your pages. They used Rhino to do it. CouchDB uses server-side JavaScript to create views of their database.
As you can see from these examples, a great way to use JavaScript on the server is for plugins. One of the reasons it's used is that you can create a very isolated sandbox for people to run their code in. Also, because of the way that JavaScript as a language works, you can provide a user tooling specifically honed to the tasks your users need to complete. If you do this right, users don't need to learn a new language to complete their tasks, a quick glance at your API and examples is enough to get them on their way. Compare this to many of the other languages and you can see why using server-side JavaScript to provide a plugin architecture is so enticing.
A secondary popular solution, one which can be seen through a project like Jaxer, is that a common problem of web applications that do client-side validation is that, since JavaScript is easily bypassed in the browser, validation has to be run once again on the server. A system like Jaxer allows you to write some validation functionality that is reusable between both server and client.
A: Support for JS on the server has been getting stronger and the number of frameworks is getting bigger even faster.
Just recently the serversideJS group was founded. They have a lot of smart people that have been working on serverside JS for years (some of them more then 10).
The goal for this project is to create
a standard library that will
ultimately allow web developers to
choose among any number of web
frameworks and tools and run that code
on the platform that makes the most
sense for their application.
To the people who say "why would you choose JS over Java or any other language?" - you should read this Re-Introduction by Crockford and forget about the DOM - the DOM is super ugly, but that's not JS's fault and JS is not the DOM.
A: I like to read Googler Steve Yegge's blog, and recently I came across this article of his where he argues that Mozilla Rhino is a good solution for server-side JS. It's a somewhat sloppy transcript, you might prefer to watch the video of the talk. It also offers a little bit of insight on why he thinks server-side JS is a good idea in the first place (or rather, why he thinks that it's a good idea to use a dynamic language to script Java). I thought the points he makes were convincing, so you might want to check it out.
A while earlier, he also posted something about dynamic languages in general (he's a big fan of them), just in case you were wondering why to use JS at all.
A: I've never even heard of this, but it strikes me as using the wrong tool for the job. Since programming languages are just tools designed to help us solve some problem.
Why would you want to process something in Javascript when you can process it in PHP or ASP.NET which are designed specifically for this task?
Sure you can pound a nail in with a screw driver, but a hammer works much better because it was actually designed for it...
So no, I don't see it taking off.
A: Well, plain ol' ASP supported JavaScript server-side years ago and everyone and their dog used VBScript instead. But I have to agree with the others: JS does not seem to be the right tool here - and I love to do client-side JS :)
A: I personally did a whole site in server side JavaScript using ASP. I found it quite enjoyable because I was able to have some good code reuse. This included:
*
*validation of parameters
*object modeling
*object transport
Coupled with a higher-level modeling tool and code gen, I had fun with that project.
I have no numbers on perf unfortunately, since it is used only on an intranet. However, I have to assume performance is on par with VBScript backed ASP sites.
A: It seems like most of you are put off by this idea because of how unpleasant the various client-side implementations of Javascript have been. I would check out existing solutions before passing judgment, though, because remember that no particular SS/JS solution is tied to the JS implementations currently being used in browsers. Javascript is based on ECMAScript, remember, a spec that is currently in a fairly mature state. I suspect that a SS/JS solution that supports more recent ECMA specs would be no more cumbersome than using other scripting languages for the task. Remember, Ruby wasn't written to be a "web language" originally, either.
A:
Does anyone see Server-side Javascript
taking off?
Try looking at http://www.appjet.com a startup doing hosted JavaScript applications to get a feel for what you can do. I especially like the learning process which gently nudges the user to build things with a minimal overhead ~ http://appjet.com/learn-to-program/lessons/intro
Now it might seem a weird idea at the moment to use JavaScript, but think back to when PCs started coming out. Every nerd I knew was typing away at their new Trash-80s, Commodore 64s, and Apple ][s, typing in games or simple apps in BASIC.
Where is today's BASIC for the younger hacker?
It is just possible that JavaScript could do for Web-based server-side apps what BASIC did for the PC.
A: *
*XChat can run Javascript plugins.
*I've some accounting software completely written in Javascript.
*There's this interesting IO library for V8: http://tinyclouds.org/node/
*CouchDB is a document database with 'queries' written in Javascript (TraceMonkey).
Considering this, I believe server-side JavaScript did take off.
A: Server-side programming has been around for a lot longer than client side, and has lots of good solutions already.
JavaScript has survived and become popular purely because developers have very little choice in the matter - it's the only language that can interact with a DOM. Its only competition on the client side is from things like Flash and Silverlight which have a very different model.
This is also why JavaScript has received so much effort to smart it up and add modern features. If it were possible for the whole browser market to drop JavaScript and replace it with something designed properly for the task, I'm sure they would. As it stands Javascript has strange prototype-based objects, a few neat functional programming features, limited and quirky collections and very few libraries.
For small scripts it's fine, but it's a horrible language for writing large complicated systems. That things like Firefox and Gmail are (partly) written in it is a heroic accomplishment on their part, not a sign that the language is ready for real application development.
A: Flash Media Server is scripted by using Server Side Action Script, which is really just javascript (ECMAScript). So, I do it a lot. In fact, most of my day was dealing with SSAS.
And I hate it. Though to be fair, a bunch of that is more related to the (not so great) codebase I inherited than the actual language.
A: I think server-side JavaScript is guaranteed to take off. It's only a matter of time.
Mozilla, Google, and Adobe have so much vested interest for Javascript that it would take a miracle to dislodge it from the browser world. The next logical step is to move this into the server-side.
This is a step towards moving away from the hodge podge of Internet technology that usually includes all of these
*
*HTML
*CSS
*Javascript
*Serverside Language J2EE/ASP/Ruby/Python/PHP
*SQL
I haven't heard much about the current state of Javascript Server frameworks, except that they are mostly incomplete.
A: I see server-side js will offer considerable advantages in future applications. Why? Web apps that can go offline, client-side db store, google gears, etc...
Following this trend, more and more logic are moving into the client-side. Use an ORM that works for client-side, and use another on server-side (be it PHP / Ruby / whatever), write your synchronization logic twice in two different languages, write your business logic twice in two different languages?
How about use js on the client AND the server side and write the code once?
Convincing?
A: Personally, I've been developing and using my own JavaScript framework for about 4 years now.
The good thing about JS on the server side is that, implemented in ASP Classic, you don't need any other plugin or software installed. Besides, I'm also using my JavaScript (client) framework on my server, which allows me to enjoy the same functionality and proven performance of my functions in both environments, client side and server side.
Not only for data validation; let's say HTML or CSS dynamic construction can also be done client- or server-side, at least with my framework.
So far it works fast, and I have nothing to complain about or regret, only the great usability and scalability that I have been enjoying over these past 4 years, to the point that I'm changing my ASP Classic code to JavaScript code.
You can see it in practice at http://www.laferia.com.do
A: Node.js has taken off and proven that server-side JavaScript is here to stay =)
A: I can't see most developers getting over their distaste for client-side JavaScript programming. I'd rather go to Java for server-side stuff before choosing JavaScript.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17435",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: IE6 - can't load a normal JPG Try loading this normal .jpg file in Internet Explorer 6.0. I get an error saying the picture won't load. Try it in any other browser and it works fine. What's wrong? The .jpg file is just a normal picture sitting on the web server. I can even create a simple web page:
<a href="http://www.zodiacwheels.com/images/wheels/blackout_thumb.jpg">blah</a>
and use right click + save target as with IE6 to save it to my desktop, and it's a valid JPG file. However, it won't load in the browser!
Why?!
I even tried checking the header response and MIME type and it looks fine:
andy@debian:~$ telnet www.zodiacwheels.com 80
Trying 72.167.174.247...
Connected to zodiacwheels.com.
Escape character is '^]'.
HEAD /images/wheels/blackout_thumb.jpg HTTP/1.1
Host: www.zodiacwheels.com
HTTP/1.1 200 OK
Date: Wed, 20 Aug 2008 06:19:04 GMT
Server: Apache
Last-Modified: Wed, 20 Aug 2008 00:29:36 GMT
ETag: "1387402-914ac-48ab6570"
Accept-Ranges: bytes
Content-Length: 595116
Content-Type: image/jpeg
The site needs to be able to work with IE6, how come it won't load a simple .jpg file?
A: It won't load in IE7 on my Vista x64 box. Also Paint.net won't save the file, saying "There was an unspecified error while saving the file."
EDIT:
In paint.net I did a Select All, New File, Paste, Save, and now it works fine. I'm guessing that file has some weird corruption.
A: The JPG you uploaded is in CMYK; IE and Firefox versions before 3 can't read these. Open it using Photoshop (or anything similar, I'm sure GIMP would work too) and resave it in RGB.
edit: Further Googling makes me suspect that CMYK isn't really a part of the jpeg standard, but can be shoehorned in there. That's why some software does not consider the file valid. It does however open just fine in Photoshop CS3, and shows a cmyk colorspace.
A: You can use jpeginfo to find out if a jpeg file is OK or not.
$jpeginfo -c blackout_thumb.jpg
blackout_thumb.jpg 240 x 240 32bit
Exif N 595116 Unsupported color
conversion request [ERROR]
In your case the file is corrupted which explain why some browsers cannot display it.
A: Maybe it is related to this: http://photo.net/bboard/q-and-a-fetch-msg?msg_id=003j8d
A: The file is probably not a fully valid JPG, and IE6/7/8 reject it (I tested on IE8 and it won't load). Other browsers are a bit more forgiving and can load it, but perhaps the IE team chose not to load it as it could be invalid in a way that causes a security hole.
As Ryan Fox says, open it in an editor and re-save it ... where did the image come from? If it came from an editor, don't use that editor again.
Edit: I opened it in Paint Shop Pro and it had an unknown color palette, so I had to convert it ... perhaps that is the problem. You could report it as a bug to the IE team and see what they say.
A: It is possible for other applications to register themselves as a handler for files with a particular extension. Quicktime has (or at least had) a tendency to do this with .png files, so a .png file would display fine inline in an HTML page, but with an URL referring directly to the .png file, IE would immediately delegate all responsibility for handling the file to Quicktime.
Might this be what is happening to your .jpg files? Is it only this .jpg file that you're having a problem with?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17469",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Is there a way to prevent a method from being overridden in subclasses? Is anyone aware of a language feature or technique in C++ to prevent a child class from overriding a particular method in the parent class?
class Base {
public:
bool someGuaranteedResult() { return true; }
};
class Child : public Base {
public:
bool someGuaranteedResult() { return false; /* Haha I broke things! */ }
};
Even though it's not virtual, this is still allowed (at least in the Metrowerks compiler I'm using), all you get is a compile time warning about hiding non-virtual inherited function X.
A: Sounds like what you're looking for is the equivalent of the Java language final keyword that prevents a method from being overridden by a subclass.
As others here have suggested, you really can't prevent this. Also, it seems that this is a rather frequently asked question.
A: When you can use the final specifier for virtual methods (introduced with C++11), you can do it. Let me quote my favorite doc site:
When used in a virtual function declaration, final specifies that the function may not be overridden by derived classes.
Adapted to your example that'd be like:
class Base {
public:
virtual bool someGuaranteedResult() final { return true; }
};
class Child : public Base {
public:
bool someGuaranteedResult() { return false; /* Haha I broke things! */ }
};
When compiled:
$ g++ test.cc -std=c++11
test.cc:8:10: error: virtual function ‘virtual bool Child::someGuaranteedResult()’
test.cc:3:18: error: overriding final function ‘virtual bool Base::someGuaranteedResult()’
When you are working with a Microsoft compiler, also have a look at the sealed keyword.
A: (a) I don't think making the function private is the solution, because that will just hide the base class function from the derived class. The derived class can always define a new function with the same signature.
(b) Making the function non-virtual is also not a complete solution because, if the derived class redefines the same function, one can always call the derived class function via compile-time binding, i.e. obj.someFunction() where obj is an instance of the derived class.
I don't think there is a way of doing this. Also, I would like to know the reason for your decision to prohibit derived classes from overriding base class functions.
A:
a compile time warning about hiding non-virtual inherited function X.
Change your compiler settings to make it an error instead of a warning.
A: I guess what the compiler warns you about is hiding! Is it actually being overridden?
The compiler might give you a warning, but at runtime the parent class's method will be called if the pointer is of the parent class type, regardless of the actual type of the object it points to.
This is interesting. Try making a small standalone test program for your compiler.
A: A couple of ideas:
*
*Make your function private.
*Do not make your function virtual. This doesn't actually prevent the function from being shadowed by another definition though.
Other than that, I'm not aware of a language feature that will lock away your function in such a way which prevents it from being overloaded and still able to be invoked through a pointer/reference to the child class.
Good luck!
A: For clarification, most of you misunderstood his question. He is not asking about "overriding" a method, he is asking whether there is a way to prevent "hiding" or not. And the simple answer is that "there is none!".
Here's his example once again
Parent class defines a function:
int foo() { return 1; }
Child class, inheriting the Parent defines the same function AGAIN (not overriding):
int foo() { return 2; }
You can do this in pretty much any programming language. There is nothing to prevent this code from compiling (except perhaps a compiler setting). The best you'll get is a warning that you are hiding the parent's method. If you call the foo method on an instance of the child class, you'll get 2. You have practically broken the code.
This is what he is asking.
A: If you address the child class as a type of its parent, then a non-virtual function will call the parent class's version.
ie:
Parent* obj = new Child();
A: Unless you make the method virtual, the child class cannot override it. If you want to keep child classes from calling it, make it private.
So by default C++ does what you want.
A: Trying to prevent someone from using the same name as your function in a subclass isn't much different than trying to prevent someone from using the same global function name as you have declared in a linked library.
You can only hope that users that mean to use your code, and not others', will be careful with how they reference your code and that they use the right pointer type or use a fully qualified scope.
A: In your example, no function is overridden. It is instead hidden (it is a kind of degenerate case of overloading).
The error is in the Child class code. As csmba suggested, all you can do is change your compiler settings (if possible); it should be fine as long as you don't use a third-party library that hides its own functions.
A: Technically you can prevent virtual functions from being overridden, but then you will never be able to change or add more. That is not helpful. Better to put a comment in front of the function, as the C++ FAQ Lite suggests.
A: C++ methods are private and un-overridable by default.
*
*You cannot override a private method
*You cannot override a non-virtual method
Are you perhaps referring to overloading?
A: I was searching for the same thing and yesterday came across this [rather old] question.
Today I found a neat C++11 keyword: final. I thought it might be useful for future readers.
http://en.cppreference.com/w/cpp/language/final
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17483",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "39"
} |
Q: Why are DispatcherObject.CheckAccess() and VerifyAccess() hidden from Intellisense? The System.Windows.Threading.DispatcherObject class (which DependencyObject is based on) contains a useful function, called CheckAccess(), that determines whether or not the code is running on the UI thread.
When I wanted to use it yesterday, I was puzzled to find out that Intellisense didn't show the function (nor VerifyAccess(), which throws an exception when not on the UI thread), even though the MSDN library lists it. I decided to investigate the class using Reflector. It seems that the function in question has an EditorBrowsable(EditorBrowsableState.Never) attribute attached to it. The Dispatcher class, which is used by DispatcherObject, has the same attribute attached to CheckAccess() and VerifyAccess():
public abstract class DispatcherObject
{
// ...
[EditorBrowsable(EditorBrowsableState.Never)]
public bool CheckAccess();
[EditorBrowsable(EditorBrowsableState.Never)]
public void VerifyAccess();
// ...
[EditorBrowsable(EditorBrowsableState.Advanced)]
public Dispatcher Dispatcher { get; }
}
public sealed class Dispatcher
{
// ...
[EditorBrowsable(EditorBrowsableState.Never)]
public bool CheckAccess();
[EditorBrowsable(EditorBrowsableState.Never)]
public void VerifyAccess();
// ...
}
I don't believe that the application of that attribute is random (or a joke), so my question is: why is it there? Should those methods not be called directly? Then why aren't they protected (or internal, like some of the most useful methods in the WPF)?
A: A Microsoft employee recently stated CheckAccess is used only for "advanced scenarios", so they hid it from Intellisense.
"CheckAccess and VerifyAccess have always been marked to be not visible, maybe IntelliSense wasn't respecting it. You can use Reflector to confirm. The idea here is that CheckAccess and VerifyAccess are advanced scenarios, that normal developers don't need. However, I do think that EditorBrowsableState.Advanced would have been a more appropriate level."
There's a Microsoft Connect case for this shortcoming. Vote for it if it's important to you.
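To make the "advanced scenarios" remark concrete, the pattern these methods support looks roughly like this; a minimal sketch only (the UpdateStatus method and TextBlock parameter are invented for the example, and the usual System and WPF usings are assumed):

public void UpdateStatus(System.Windows.Controls.TextBlock target, string text)
{
    if (target.Dispatcher.CheckAccess())
    {
        // Already on the thread that owns the control.
        target.Text = text;
    }
    else
    {
        // Marshal the update onto the control's dispatcher thread.
        target.Dispatcher.Invoke(new Action(() => target.Text = text));
    }
}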
A: I can't find any documentation that says you shouldn't use those methods directly, but I haven't looked very long.
Also you refer to the EditorVisibleAttribute, which doesn't exist. According to Reflector it's the EditorBrowsableAttribute.
Reflector disassembly:
[EditorBrowsable(EditorBrowsableState.Never)]
public bool CheckAccess()
{
//CODE
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17500",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: How to modify the style property of a font on Windows?
Note that this question continues from Is it possible to coax Visual Studio 2008 into using italics for comments?
If the long question title got you, here's the problem:
How to convert the style property of the Consolas Italic font to Bold without actually modifying any of its actual glyphs? That is, we want the font to be still the same (i.e., Italic) we merely want the OS to believe that it's now a Bold font.
Please just don't mention the name of a tool (Ex: fontforge), but describe the steps to achieve this or point to such a description.
A: I did the italics-as-bold trick on Consolas back in July 2007 and posted a screenshot of it on my blog.
I used FontLab, which does a great job, but a custom tool to copy and set the header would be the best bet, since you can't modify and redistribute Consolas and FontLab costs $699.
If you want to go down the FontLab route, then open up the regular and italic versions, go into the File > Font Info... menu option, and use the Names and Copyright section.
In there, set both fonts' Family Name to a new name, then flip the checkboxes on the italic version to indicate bold instead of italic, and select Normal from the Weight list box and Italic in the Style Name list box.
Save and install :)
A: Alright, I've successfully used FontForge to create a copy of Consolas (although this should work with any font) with the bold style actually being italics.
These are the steps that I followed:
*
*Install FontForge. It's a lot easier to do this on linux than on windows/cygwin. I used a Ubuntu VM ("sudo apt-get install fontforge").
*Open Consola.ttf (the "normal" style font) in FontForge.
*Select Element -> Font Info.
*Change the Fontname, Family Name, and Name for Humans, all to the same thing. I used 'ConsolasVS'.
*Click Ok. Click 'Yes' to let FontForge generate a new GUID for the font.
*Select File -> Generate Fonts. Make sure you've got "TrueType" selected. Uncheck "Validate before saving". Click Save.
*Now open Consolai.ttf (the italic style font) in FontForge.
*Go back to Element -> Font Info.
*Change the Font names as before, and where it currently says "Italic", change that to "Bold".
*Go to the OS/2 tab, change the font weight to "700 Bold".
*Go to the Mac tab, change the style set to Bold.
*Click Ok. Allow a new GUID to be generated again.
*File -> Generate Fonts, as before.
Copy your two new ttf files into your \Windows\FONTS\ folder.
You can now have nice italic comments with Consolas in VS2008. Hooray!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17508",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: ASP.NET Custom Controls - Composites Summary
Hi All,
OK, further into my adventures with custom controls...
In summary, here is what I have learned: there are three main "classes" of custom controls. Please feel free to correct me if any of this is wrong!
*
*UserControls - Which inherit from UserControl and are contained within an ASCX file. These are pretty limited in what they can do, but are a quick and light way to get some UI commonality with designer support.
*Custom Composite Controls - These are controls that inherit from WebControl, where you add pre-existing controls to the control within the CreateChildControls method. This provides great flexibility, but no designer support without additional coding. They are highly portable though, since they can be compiled into a DLL.
*Custom Rendered Controls - Similar to Custom Composite Controls, these are added to a Web Control Library project. The rendering of the control is completely controlled by the programmer by overriding the Render method.
My Thoughts..
OK, so while playing with custom composites, I found the following:
*
*You have little/no control over the HTML output making it difficult to "debug".
*The CreateChildControls (and subsequent methods) can get real busy with Controls.Add(myControl) everywhere.
*I found rendering tables (be it for layout or content) to be considerably awkward.
The Question(s)..
So, I admit, I am new to this so I could be way off-base with some of my points noted above..
*
*Do you use Composites?
*Do you have any neat tricks to control the HTML output?
*Do you just say "to hell with it" and go ahead and create a custom rendered control?
It's something I am keen to get really firm in my mind, since I know how much good control development can cut overall development time.
I look forward to your answers ^_^
A: I say go ahead with the custom rendered control. I find that in most cases a composite is easier to build and use as a UserControl, but anything beyond that calls for a finer degree of control (pun unintended), which merits your own rendering strategy.
There may be controls that are simple enough to merit a composite (e.g., a textbox combined with a JavaScript/DHTML-based datepicker), but beyond that one example, it looks like custom rendered controls are the way to go.
A: Here's another extension method that I use for custom rendering:
public static void WriteControls
(this HtmlTextWriter o, string format, params object[] args)
{
const string delimiter = "<2E01A260-BD39-47d0-8C5E-0DF814FDF9DC>";
var controls = new Dictionary<string,Control>();
for(int i =0; i < args.Length; ++i)
{
var c = args[i] as Control;
if (c==null) continue;
var guid = Guid.NewGuid().ToString();
controls[guid] = c;
args[i] = delimiter+guid+delimiter;
}
var _strings = string.Format(format, args)
.Split(new string[]{delimiter},
StringSplitOptions.None);
foreach(var s in _strings)
{
if (controls.ContainsKey(s))
controls[s].RenderControl(o);
else
o.Write(s);
}
}
Then, to render a custom composite in the RenderContents() method I write this:
protected override void RenderContents(HtmlTextWriter o)
{
o.WriteControls
(@"<table>
<tr>
<td>{0}</td>
<td>{1}</td>
</tr>
</table>"
,Text
,control1);
}
A: Rob, you are right. The approach I mentioned is kind of a hybrid. The advantage of having ascx files around is that on every project I've seen, designers would feel most comfortable with editing actual markup and with the ascx you and a designer can work separately. If you don't plan on actual CSS/markup/design changes on the controls themselves later, you can go with a custom rendered control. As I said, my approach is only relevant for more complicated scenarios (and these are probably where you need a designer :))
A: I often use composite controls. Instead of overriding Render or RenderContents, just assign each Control a CssClass and use stylesheets. For multiple Controls.Add, I use an extension method:
//Controls.Add(c1, c2, c3)
static void Add(this ControlCollection coll, params Control[] controls)
{ foreach(Control control in controls) coll.Add(control);
}
For quick and dirty rendering, I use something like this:
writer.Write(@"<table>
<tr><td>{0}</td></tr>
<tr>
<td>", Text);
control1.RenderControl(writer);
writer.Write("</td></tr></table>");
For initializing control properties, I use property initializer syntax:
childControl = new Control { ID="Foo"
, CssClass="class1"
, CausesValidation=true
};
A: Using custom composite controls has a point in a situation where you have a large web application and want to reuse large chunks in many places. Then you would only add child controls of the ones you are developing instead of repeating yourself.
On a large project I've worked recently what we did is the following:
*
*Every composite control has a container, used as a wrapper for everything inside the control.
*Every composite control has a template: an .ascx file (without the <%@ Control %> directive) which contains only the markup for the template.
*The container (being a control in itself) is initialized from the template.
*The container exposes properties for all other controls in the template.
*You only use this.Controls.Add([the_container]) in your composite control.
In fact, you need a base class that takes care of initializing a container with the specified template and also throws exceptions when a control is not found in the template. Of course, this is likely to be overkill in a small application. If you don't have reused code and markup and only want to write simple controls, you're better off using User Controls.
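To make the container/template idea a bit more concrete, here is a minimal sketch of my own (the ProductCard name, the template path and the TitleLabel control are all invented for the example, and error handling is kept to the bare minimum):

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public class ProductCard : CompositeControl
{
    private Control container;

    protected override void CreateChildControls()
    {
        Controls.Clear();

        // The template is plain markup; the container wraps everything inside the control.
        container = Page.LoadControl("~/Templates/ProductCard.ascx");
        Controls.Add(container);
    }

    // The container exposes the controls found in the template.
    public Label TitleLabel
    {
        get
        {
            EnsureChildControls();
            var label = container.FindControl("TitleLabel") as Label;
            if (label == null)
                throw new InvalidOperationException("Template is missing a 'TitleLabel' control.");
            return label;
        }
    }
}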
A: You might be able to make use of this technique to make design-time easier:
http://aspadvice.com/blogs/ssmith/archive/2007/10/19/Render-User-Control-as-String-Template.aspx
Basically you create an instance of a user control at runtime using the LoadControl method, then hand it a statebag of some kind, then attach it to the control tree. So your composite control would actually function like more of a controller, and the .ascx file would be like a view.
This would save you the trouble of having to instantiate the entire control tree and style the control in C#!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17532",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: Request Windows Vista UAC elevation if path is protected? For my C# app, I don't want to always prompt for elevation on application start, but if they choose an output path that is UAC protected then I need to request elevation.
So, how do I check if a path is UAC protected and then how do I request elevation mid-execution?
A: I'm not sure if it is of any help for you but you can take a look at this blog post:
http://haishibai.blogspot.com/2010/01/tiy-try-out-windows-7-uac-using-c-part_26.html
A: The best way to detect if they are unable to perform an action is to attempt it and catch the UnauthorizedAccessException.
However as @DannySmurf correctly points out you can only elevate a COM object or separate process.
There is a demonstration application within the Windows SDK Cross Technology Samples called UAC Demo. This demonstration application shows a method of executing actions with an elevated process. It also demonstrates how to find out if a user is currently an administrator.
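For what it's worth, here is a rough sketch of the "attempt it, catch the exception, then elevate" idea; the helper names are my own invention, and relaunching the executable with the runas verb is just one way to obtain an elevated process:

using System;
using System.Diagnostics;
using System.IO;
using System.Reflection;

static class ElevationHelper
{
    // Probe the folder with a throw-away file; UnauthorizedAccessException means we need elevation.
    public static bool CanWriteTo(string folder)
    {
        try
        {
            string probe = Path.Combine(folder, Path.GetRandomFileName());
            using (File.Create(probe)) { }
            File.Delete(probe);
            return true;
        }
        catch (UnauthorizedAccessException)
        {
            return false;
        }
    }

    // Start a second, elevated copy of the current executable (shows the UAC prompt).
    public static void RelaunchElevated(string arguments)
    {
        var startInfo = new ProcessStartInfo
        {
            FileName = Assembly.GetEntryAssembly().Location,
            Arguments = arguments,
            UseShellExecute = true,
            Verb = "runas"
        };
        Process.Start(startInfo); // throws if the user cancels the prompt
    }
}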
A: Requesting elevation mid-execution requires that you either:
*
*Use a COM control that's elevated, which will put up a prompt
*Start a second process that is elevated from the start.
In .NET, there is currently no way to elevate a running process; you have to do one of the hackery things above, but all that does is give the user the appearance that the current process is being elevated.
The only way I can think of to check if a path is UAC elevated is to try to do some trivial write to it while you're in an un-elevated state, catch the exception, elevate and try again.
A: You may want to notify the user that the path is protected and ask them to output the file to a "safer" area. This way your app will not need elevation. I'm sure it depends on your users and what you are trying to do, however I don't think it's too much to kindly let the user know you don't feel ok dumping xyz into the Windows/System32 folder.
A: If your secondary drive has its own file permissions (say you have another copy of Windows installed on it), it will prompt.
It will also prompt if files are in use, which sometimes occurs if you have Windows Explorer open to the same directory and the file selected with a file previewer displaying the contents... there are some other oddities, but generally you get asked for file permission if the file is in use or it's a sensitive directory.
If you do loop the FolderBrowserDialog, make sure to notify the user why, so they don't get mad at your app.
Note: it does stink that there is no .NET way of asking for permission; maybe P/Invoke the Win32 API...?
A: UAC can elevate objects based on their GUID. This would (in theory) mean that any class with a GUID can be elevated; the UAC Demo should also show how to do this.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17533",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
} |
Q: Recommendations for browser add-on tools to help with development Can anyone suggest some good browser add-on tools/extensions to help with development?
I have firebug installed in Firefox which is great for dealing with CSS, HTML and javascript; any other suggestions?
Firebug
A: For Firefox:
Firebug is awesome for investigation and development.
Web Developer Toolbar is good also. Really helps with CSS and page layout stuff as well as much more.
I also use Live HTTP Headers (I think it is called, but it is on my work machine, so can't find the link now). Which has helped us out with caching issues and the like.
I do a lot of mobile phone development, so I also use UserAgent Switcher. Very helpful for pretending to be different mobile phones.
I tend to only use Firefox for development, and just test in other browsers as most do not have the extensive range of plugins to aid development that Firefox does.
A: Firefox:
*
*Inspect This if you use the DOM Inspector at all
*Measure It for telling you pixel distances (if you need that)
*IE View or Safari View for ease of testing in other browsers
*HTML Validator if you care about validation
*Console2 to improve your js error console
*The Javascript Shell bookmarklet is also handy (and look at the others there as well)
Edit: This is in addition to the Web Development Toolbar mentioned by others
A: The other must-have for Firefox is Chris Pederick's Web Developer Toolbar.
A: You should definitely install Safari. It has a number of tools built-in. I use it in combination with other browsers all the time.
*
*Network Timeline
*Error Console
*Web Inspector
*Snippet Editor
Plus it lets you set the user agent for your requests.
Consider this, it has a separate top-level menu called Develop.
A: In case of IE, next tools can be useful
*
*Microsoft Developer Toolbar - dom|styles viewer
*Fiddler HTTP Debugger - http monitor
*Instant source - dom|styles viewer
*Companion.JS - dom|styles viewer, extended error console
The "uber" extension for IE - "Developer Tools", provided as a part of IE8
A: Opera has:
Dragonfly (tools -> advanced -> developer tools)
Debug Menu
UserJS methods for intercepting things
opera:config#CompatMode%20Override for forcing quirks or standards mode
Web developer widgets
You can view source of files, edit them, apply changes and reload from cache.
A: Developer Console and DOM Snapshot for Opera:
http://dev.opera.com/tools/
Awesomeness is that these are bookmarklets implemented with JS. Suckiness is that they require the Internetz.
A:
Firefox:
Inspect This if you use the DOM Inspector at all
Measure It for telling you pixel distances (if you need that)
IE View or Safari View for ease of testing in other browsers
HTML Validator if you care about validation
Console2 to improve your js error console
The Javascript Shell bookmarklet is also handy (and look at the others there as well)
This is in addition to the Web Development Toolbar mentioned by others
This list by Cebjyre is nearly complete (since FireBug was already mentioned in the question). I would only add Tamperdata. From time to time it is very useful.
A: Here's my development oriented add-ons for Firefox 3:
*
*Web Developer
*Firebug
*
*Firecookie
*FirePHP
*Rainbow
*TamperData
*Poster
*FireFTP
*ReloadEvery
*Selenium IDE
A: YSlow is a sweet Firebug addon for troubleshooting a page's load time.
A: Other than the excellent tools already mentioned, I find Charles to be extremely useful, especially since I do a lot of work with Flash Remoting, which it handles excellently.
Charles is an HTTP proxy / HTTP monitor / Reverse Proxy that enables a developer to view all of the HTTP traffic between their machine and the Internet. This includes requests, responses and the HTTP headers (which contain the cookies and caching information).
Charles can act as a man-in-the-middle for HTTP/SSL communication, enabling you to debug the content of your HTTPS sessions.
It's cross-platform and costs $50, but there's a "30 minutes per session" evaluation you can download.
A: Here's what I use:
Firefox:
*
*DOM Inspector: I use this more than anything else for web development
*Launchy: for opening sites in other browsers/apps
*Tamper Data: this can be helpful for debugging GET/POST requests
*Web Developer Toolbar: this has so many handy features for debugging: the W3C validation tools, built-in ruler, resizing tools, source manipulation, easy cache/css/script tools
IE:
*
*Internet Explorer Developer Toolbar: nowhere near as handy as the Firefox one, but at least it gives you a decent DOM Inspector
Misc:
*
*Jesse's handy bookmarklets: the shell bookmarklet is especially handy
*I also install Safari and Opera, but mostly just use them for testing and benchmarking since their dev tools aren't as robust as Firefox, and they aren't as buggy as IE.
*Lynx: I use this to make sure that any JS-heavy sites still work so that I'm sure they'll look OK to google, screen readers, and any other bot-like app.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17561",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Non Public Members for C# Interfaces In C#, when you implement an interface, all members are implicitly public. Wouldn't it be better if we could specify the accessibility modifier (protected, internal, except private of course), or should we just use an abstract class instead?
A: Would not make sense. An Interface is a contract with the public that you support those methods and properties. Stick with abstract classes.
A: All the answers here more or less say that's how interfaces are meant to be: they are universal public specifications.
This being the most discussed thread, let me post two excellent answers I found on SO when this question surfaced in my mind.
This answer gives an example of how it can be nonsensical to have non-uniform access specifiers for interface members in derived classes. Code is always better than technical descriptions.
To me, the most damning thing about forced public interface members is that the interface itself can be internal to an assembly, but the members it exposes have to be public. Jon Skeet explains here that it's by design, sadly.
That raises the question of why interfaces weren't designed to have non-public definitions for members. That would make the contract more flexible, which is pretty useful when writing assemblies where you don't want specific members of classes to be exposed outside the assembly. I do not know why.
A: If an interface is internal, all its members will be internal to the assembly. If a nested interface is protected, only the subclasses of the outer class could access that interface.
Internal members for an interface outside of its declaring assembly would be pointless, as would protected members for an interface outside of its declaring outer class.
The point of an interface is to describe a contract between an implementing type and users of the interface. Outside callers aren't going to care and shouldn't have to care about implementation, which is what internal and protected members are for.
For protected members that are called by a base class, abstract classes are the way to go for specifying a contract between base classes and classes that inherit from them. But in this case, implementation details are usually very relevant, unless it's a degenerate pure abstract class (where all members are abstract) in which case protected members are useless. In that case, go with an interface and save the single base class for implementing types to choose.
A: You can hide the implementation of an interface by explicitly stating the interface name before the method name:
public interface IInterface {
void Method();
}
public class A : IInterface {
void IInterface.Method() {
// Do something
}
}
public class Program {
public static void Main() {
A o = new A();
o.Method(); // Will not compile
((IInterface)o).Method(); // Will compile
}
}
A: An interface is a contract that all implementing classes adhere to. This means that they must adhere to all of it or none of it.
If the interface is public then every part of that contract has to be public; otherwise it would mean one thing to friend/internal classes and a different thing to everything else.
Either use an abstract base class or (if possible and practical) an internal extension method on the interface.
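A tiny sketch of that last suggestion (the IReportSource interface and ToDebugString method are made up for the example): the interface stays a purely public contract, while the helper is visible only inside the declaring assembly.

public interface IReportSource
{
    string Name { get; }
}

internal static class ReportSourceExtensions
{
    // Callers outside this assembly never see this member.
    internal static string ToDebugString(this IReportSource source)
    {
        return "[ReportSource] " + source.Name;
    }
}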
A: You can hide almost all of the code implemented by interfaces from external assemblies.
interface IVehicle
{
void Drive();
void Steer();
void UseHook();
}
abstract class Vehicle // :IVehicle // Try it and see!
{
/// <summary>
/// Consuming classes are not required to implement this method.
/// </summary>
protected virtual void Hook()
{
return;
}
}
class Car : Vehicle, IVehicle
{
protected override void Hook() // you must use keyword "override"
{
Console.WriteLine(" Car.Hook(): Uses abstracted method.");
}
#region IVehicle Members
public void Drive()
{
Console.WriteLine(" Car.Drive(): Uses a tires and a motor.");
}
public void Steer()
{
Console.WriteLine(" Car.Steer(): Uses a steering wheel.");
}
/// <summary>
/// This code is duplicated in implementing classes. Hmm.
/// </summary>
void IVehicle.UseHook()
{
this.Hook();
}
#endregion
}
class Airplane : Vehicle, IVehicle
{
protected override void Hook() // you must use keyword "override"
{
Console.WriteLine(" Airplane.Hook(): Uses abstracted method.");
}
#region IVehicle Members
public void Drive()
{
Console.WriteLine(" Airplane.Drive(): Uses wings and a motor.");
}
public void Steer()
{
Console.WriteLine(" Airplane.Steer(): Uses a control stick.");
}
/// <summary>
/// This code is duplicated in implementing classes. Hmm.
/// </summary>
void IVehicle.UseHook()
{
this.Hook();
}
#endregion
}
This will test the code.
class Program
{
static void Main(string[] args)
{
Car car = new Car();
IVehicle contract = (IVehicle)car;
UseContract(contract); // This line is identical...
Airplane airplane = new Airplane();
contract = (IVehicle)airplane;
UseContract(contract); // ...to the line above!
}
private static void UseContract(IVehicle contract)
{
// Try typing these 3 lines yourself, watch IDE behavior.
contract.Drive();
contract.Steer();
contract.UseHook();
Console.WriteLine("Press any key to continue...");
Console.ReadLine();
}
}
A: Interfaces do not have access modifiers in their methods, leaving them open to whichever access modifier is appropriate. This has a purpose: it allows other types to infer what methods and properties are available for an object following an interface. Giving them protected/internal accessors defeats the purpose of an interface.
If you are adamant that you need to provide an access modifier for a method, either leave it out of the interface, or as you said, use an abstract class.
A: I'm familiar with Java rather than C#, but why on earth would you want a private member within an interface? It couldn't have any implementation and would be invisible to implementing classes, so it would be useless. Interfaces exist to specify behaviour. If you need default behaviour then use an abstract class.
A: In my opinion this violates encapsulation. I have to implement a method as public when I implement an interface. I see no reason to force it to be public in a class that implements the interface. (C#)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17576",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34"
} |
Q: Best word wrap algorithm? Word wrap is one of the must-have features in a modern text editor.
How should word wrap be handled? What is the best algorithm for word wrap?
If text is several million lines, how can I make word-wrap very fast?
Why do I need the solution? Because my project must draw text at various zoom levels while still looking good.
The running environment is Windows Mobile devices: at most 600 MHz CPU speed, with a very small memory size.
How should I handle line information? Let's assume original data has three lines.
THIS IS LINE 1.
THIS IS LINE 2.
THIS IS LINE 3.
Afterwards, the wrapped text will be shown like this:
THIS IS
LINE 1.
THIS IS
LINE 2.
THIS IS
LINE 3.
Should I allocate three more lines? Or do you have any other suggestions?
A: I don't know of any specific algorithms, but the following could be a rough outline of how it should work:
*
*For the current text size, font, display size, window size, margins, etc., determine how many characters can fit on a line (if fixed-type), or how many pixels can fit on a line (if not fixed-type).
*Go through the line character by character, calculating how many characters or pixels have been recorded since the beginning of the line.
*When you go over the maximum characters/pixels for the line, move back to the last space/punctuation mark, and move all text to the next line.
*Repeat until you go through all text in the document.
In .NET, word wrapping functionality is built into controls like TextBox. I am sure that a similar built-in functionality exists for other languages as well.
A: With or without hyphenation?
Without it, it's easy. Just encapsulate your text as one word object per word and give them a getWidth() method. Then start at the first word, adding up the row length until it is greater than the available space; when that happens, wrap the last word and start counting again for the next row, starting with that word, etc.
With hyphenation you need hyphenation rules in a common format like: hy-phen-a-tion
Then it's the same as the above except you need to split the last word which has caused the overflow.
A good example and tutorial of how to structure your code for an excellent text editor is given in the Gang of Four Design Patterns book. It's one of the main samples on which they show the patterns.
A: Here is a word-wrap algorithm I've written in C#. It should be fairly easy to translate into other languages (except perhaps for IndexOfAny).
static char[] splitChars = new char[] { ' ', '-', '\t' };
private static string WordWrap(string str, int width)
{
string[] words = Explode(str, splitChars);
int curLineLength = 0;
StringBuilder strBuilder = new StringBuilder();
for(int i = 0; i < words.Length; i += 1)
{
string word = words[i];
// If adding the new word to the current line would be too long,
// then put it on a new line (and split it up if it's too long).
if (curLineLength + word.Length > width)
{
// Only move down to a new line if we have text on the current line.
// Avoids situation where wrapped whitespace causes emptylines in text.
if (curLineLength > 0)
{
strBuilder.Append(Environment.NewLine);
curLineLength = 0;
}
// If the current word is too long to fit on a line even on it's own then
// split the word up.
while (word.Length > width)
{
strBuilder.Append(word.Substring(0, width - 1) + "-");
word = word.Substring(width - 1);
strBuilder.Append(Environment.NewLine);
}
// Remove leading whitespace from the word so the new line starts flush to the left.
word = word.TrimStart();
}
strBuilder.Append(word);
curLineLength += word.Length;
}
return strBuilder.ToString();
}
private static string[] Explode(string str, char[] splitChars)
{
List<string> parts = new List<string>();
int startIndex = 0;
while (true)
{
int index = str.IndexOfAny(splitChars, startIndex);
if (index == -1)
{
parts.Add(str.Substring(startIndex));
return parts.ToArray();
}
string word = str.Substring(startIndex, index - startIndex);
char nextChar = str.Substring(index, 1)[0];
// Dashes and the likes should stick to the word occuring before it. Whitespace doesn't have to.
if (char.IsWhiteSpace(nextChar))
{
parts.Add(word);
parts.Add(nextChar.ToString());
}
else
{
parts.Add(word + nextChar);
}
startIndex = index + 1;
}
}
It's fairly primitive - it splits on spaces, tabs and dashes. It does make sure that dashes stick to the word before it (so you don't end up with stack\n-overflow) though it doesn't favour moving small hyphenated words to a newline rather than splitting them. It does split up words if they are too long for a line.
It's also fairly culturally specific, as I don't know much about the word-wrapping rules of other cultures.
A: I wondered about the same thing for my own editor project. My solution was a two-step process:
*
*Find the line ends and store them in an array.
*For very long lines, find suitable break points at roughly 1K intervals and save them in the line array, too. This is to catch the "4 MB text without a single line break".
When you need to display the text, find the lines in question and wrap them on the fly. Remember this information in a cache for quick redraw. When the user scrolls a whole page, flush the cache and repeat.
If you can, do loading/analyzing of the whole text in a background thread. This way, you can already display the first page of text while the rest of the document is still being examined. The most simple solution here is to cut the first 16 KB of text away and run the algorithm on the substring. This is very fast and allows you to render the first page instantly, even if your editor is still loading the text.
You can use a similar approach when the cursor is initially at the end of the text; just read the last 16 KB of text and analyze that. In this case, use two edit buffers and load all but the last 16 KB into the first while the user is locked into the second buffer. And you'll probably want to remember how many lines the text has when you close the editor, so the scroll bar doesn't look weird.
It gets hairy when the user can start the editor with the cursor somewhere in the middle, but ultimately it's only an extension of the end-problem. Only you need to remember the byte position, the current line number, and the total number of lines from the last session, plus you need three edit buffers or you need an edit buffer where you can cut away 16 KB in the middle.
Alternatively, lock the scrollbar and other interface elements while the text is loading; that allows the user to look at the text while it loads completely.
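As a rough sketch of the first two steps (names invented for the example), the initial scan can simply record the start offset of every line and insert artificial break points inside very long lines:

using System.Collections.Generic;

static class LineIndex
{
    const int MaxChunk = 1024;

    public static List<int> BuildLineStarts(string text)
    {
        var starts = new List<int> { 0 };
        int lastStart = 0;
        for (int i = 0; i < text.Length; i++)
        {
            if (text[i] == '\n')
            {
                lastStart = i + 1;          // real line break
                starts.Add(lastStart);
            }
            else if (i - lastStart >= MaxChunk)
            {
                lastStart = i;              // artificial break inside a huge line
                starts.Add(lastStart);
            }
        }
        return starts;
    }
}

Wrapping is then only ever done for the handful of entries that are currently visible, and the results can be cached per entry.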
A: Donald E. Knuth did a lot of work on the line breaking algorithm in his TeX typesetting system. This is arguably one of the best algorithms for line breaking - "best" in terms of visual appearance of result.
His algorithm avoids the problems of greedy line filling where you can end up with a very dense line followed by a very loose line.
An efficient algorithm can be implemented using dynamic programming.
A paper on TeX's line breaking.
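For illustration, here is a compact sketch of the dynamic-programming idea (my own simplification, not Knuth-Plass itself): minimise the total squared leftover space over all lines except the last, instead of greedily filling each line.

using System;
using System.Collections.Generic;

static class MinRaggednessWrapper
{
    public static List<string> Wrap(string[] words, int width)
    {
        int n = words.Length;
        var cost = new double[n + 1];     // cost[i] = best badness for words[i..]
        var breakAfter = new int[n + 1];  // index of the first word on the next line

        for (int i = n - 1; i >= 0; i--)
        {
            cost[i] = double.PositiveInfinity;
            int length = -1;              // running line length, including separating spaces
            for (int j = i; j < n; j++)
            {
                length += words[j].Length + 1;
                if (length > width) break;
                double slack = width - length;
                double lineCost = (j == n - 1) ? 0 : slack * slack;   // last line is free
                if (lineCost + cost[j + 1] < cost[i])
                {
                    cost[i] = lineCost + cost[j + 1];
                    breakAfter[i] = j + 1;
                }
            }
            if (double.IsPositiveInfinity(cost[i]))   // a single word longer than the width
            {
                cost[i] = cost[i + 1];
                breakAfter[i] = i + 1;
            }
        }

        var lines = new List<string>();
        for (int i = 0; i < n; i = breakAfter[i])
            lines.Add(string.Join(" ", words, i, breakAfter[i] - i));
        return lines;
    }
}

For the sample in the question, Wrap("THIS IS LINE 1.".Split(' '), 8) produces "THIS IS" followed by "LINE 1.".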
A: I had occasion to write a word wrap function recently, and I want to share what I came up with.
I used a TDD approach almost as strict as the one from the Go example. I started with the test that wrapping the string "Hello, world!" at 80 width should return "Hello, World!". Clearly, the simplest thing that works is to return the input string untouched. Starting from that, I made more and more complex tests and ended up with a recursive solution that (at least for my purposes) quite efficiently handles the task.
Pseudocode for the recursive solution:
Function WordWrap (inputString, width)
Trim the input string of leading and trailing spaces.
If the trimmed string's length is <= the width,
Return the trimmed string.
Else,
Find the index of the last space in the trimmed string, starting at width
If there are no spaces, use the width as the index.
Split the trimmed string into two pieces at the index.
Trim trailing spaces from the portion before the index,
and leading spaces from the portion after the index.
Concatenate and return:
the trimmed portion before the index,
a line break,
and the result of calling WordWrap on the trimmed portion after
the index (with the same width as the original call).
This only wraps at spaces, and if you want to wrap a string that already contains line breaks, you need to split it at the line breaks, send each piece to this function and then reassemble the string. Even so, in VB.NET running on a fast machine, this can handle about 20 MB/second.
A: I cant claim the bug-free-ness of this, but I needed one that word wrapped and obeyed boundaries of indentation. I claim nothing about this code other than it has worked for me so far. This is an extension method and violates the integrity of the StringBuilder but it could be made with whatever inputs / outputs you desire.
public static void WordWrap(this StringBuilder sb, int tabSize, int width)
{
string[] lines = sb.ToString().Replace("\r\n", "\n").Split('\n');
sb.Clear();
for (int i = 0; i < lines.Length; ++i)
{
var line = lines[i];
if (line.Length < 1)
sb.AppendLine();//empty lines
else
{
int indent = line.TakeWhile(c => c == '\t').Count(); //tab indents
line = line.Replace("\t", new String(' ', tabSize)); //need to expand tabs here
string lead = new String(' ', indent * tabSize); //create the leading space
do
{
//get the string that fits in the window
string subline = line.Substring(0, Math.Min(line.Length, width));
if (subline.Length < line.Length && subline.Length > 0)
{
//grab the last non white character
int lastword = subline.LastOrDefault() == ' ' ? -1 : subline.LastIndexOf(' ', subline.Length - 1);
if (lastword >= 0)
subline = subline.Substring(0, lastword);
sb.AppendLine(subline);
//next part
line = lead + line.Substring(subline.Length).TrimStart();
}
else
{
sb.AppendLine(subline); //everything fits
break;
}
}
while (true);
}
}
}
A: Here is mine that I was working on today for fun in C:
Here are my considerations:
*
*No copying of characters, just printing to standard output. Therefore, since I don't like to modify the argv[x] arguments, and because I like a challenge, I wanted to do it without modifying it. I did not go for the idea of inserting '\n'.
*I don't want
This line breaks here
to become
This line breaks
here
so changing characters to '\n' is not an option given this objective.
*If the linewidth is set at say 80, and the 80th character is in the middle of a word, the entire word must be put on the next line. So as you're scanning, you have to remember the position of the end of the last word that didn't go over 80 characters.
So here is mine, it's not clean; I've been breaking my head for the past hour trying to get it to work, adding something here and there. It works for all edge cases that I know of.
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
int isDelim(char c){
switch(c){
case '\0':
case '\t':
case ' ' :
return 1;
break; /* As a matter of style, put the 'break' anyway even if there is a return above it.*/
default:
return 0;
}
}
void printLine(const char * start, const char * end){
const char * p = start;
while ( p <= end )
putchar(*p++);
putchar('\n');
}
int main ( int argc , char ** argv ) {
if( argc <= 2 )
exit(1);
char * start = argv[1];
char * lastChar = argv[1];
char * current = argv[1];
int wrapLength = atoi(argv[2]);
int chars = 1;
while( *current != '\0' ){
while( chars <= wrapLength ){
while ( !isDelim( *current ) ) ++current, ++chars;
if( chars <= wrapLength){
if(*current == '\0'){
puts(start);
return 0;
}
lastChar = current-1;
current++,chars++;
}
}
if( lastChar == start )
lastChar = current-1;
printLine(start,lastChar);
current = lastChar + 1;
while(isDelim(*current)){
if( *current == '\0')
return 0;
else
++current;
}
start = current;
lastChar = current;
chars = 1;
}
return 0;
}
So basically, I have start and lastChar that I want to set as the start of a line and the last character of a line. When those are set, I output to standard output all the characters from start to end, then output a '\n', and move on to the next line.
Initially everything points to the start, then I skip words with the while(!isDelim(*current)) ++current,++chars;. As I do that, I remember the last character that was before 80 chars (lastChar).
If, at the end of a word, I have passed my number of chars (80), then I get out of the while(chars <= wrapLength) block. I output all the characters between start and lastChar and a newline.
Then I set current to lastChar+1 and skip delimiters (and if that leads me to the end of the string, we're done, return 0). Set start, lastChar and current to the start of the next line.
The
if(*current == '\0'){
puts(start);
return 0;
}
part is for strings that are too short to be wrapped even once. I added this just before writing this post because I tried a short string and it didn't work.
I feel like this might be doable in a more elegant way. If anyone has anything to suggest I'd love to try it.
And as I wrote this I asked myself "what's going to happen if I have a string that is one word that is longer than my wraplength" Well it doesn't work. So I added the
if( lastChar == start )
lastChar = current-1;
before the printLine() statement (if lastChar hasn't moved, then we have a word that is too long for a single line so we just have to put the whole thing on the line anyway).
I took the comments out of the code since I'm writing this but I really feel that there must be a better way of doing this than what I have that wouldn't need comments.
So that's the story of how I wrote this thing. I hope it can be of use to people and I also hope that someone will be unsatisfied with my code and propose a more elegant way of doing it.
It should be noted that it works for all edge cases: words too long for a line, strings that are shorter than one wrapLength, and empty strings.
A: @ICR, thanks for sharing the C# example.
I did not succeed using it, but I came up with another solution. If there is any interest in this, please feel free to use this:
WordWrap function in C#. The source is available on GitHub.
I've included unit tests / samples.
A: I may as well chime in with a perl solution that I made, because gnu fold -s was leaving trailing spaces and other bad behavior. This solution does not (properly) handle text containing tabs or backspaces or embedded carriage returns or the like, although it does handle CRLF line-endings, converting them all to just LF. It makes minimal change to the text, in particular it never splits a word (doesn't change wc -w), and for text with no more than single space in a row (and no CR) it doesn't change wc -c (because it replaces space with LF rather than inserting LF).
#!/usr/bin/perl
use strict;
use warnings;
my $WIDTH = 80;
if ($ARGV[0] =~ /^[1-9][0-9]*$/) {
$WIDTH = $ARGV[0];
shift @ARGV;
}
while (<>) {
s/\r\n$/\n/;
chomp;
if (length $_ <= $WIDTH) {
print "$_\n";
next;
}
@_=split /(\s+)/;
# make @_ start with a separator field and end with a content field
unshift @_, "";
push @_, "" if @_%2;
my ($sep,$cont) = splice(@_, 0, 2);
do {
if (length $cont > $WIDTH) {
print "$cont";
($sep,$cont) = splice(@_, 0, 2);
}
elsif (length($sep) + length($cont) > $WIDTH) {
printf "%*s%s", $WIDTH - length $cont, "", $cont;
($sep,$cont) = splice(@_, 0, 2);
}
else {
my $remain = $WIDTH;
{ do {
print "$sep$cont";
$remain -= length $sep;
$remain -= length $cont;
($sep,$cont) = splice(@_, 0, 2) or last;
}
while (length($sep) + length($cont) <= $remain);
}
}
print "\n";
$sep = "";
}
while ($cont);
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17586",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34"
} |
Q: Email queueing in php What is the proper way to send a minimum of 1000 or more emails in PHP? Is there any reliable email queuing technique capable of handling that?
A: You could just insert your emails into a Mail Queue database table, and have a separate process check the queue and batch send a certain number at once.
A: There's a tested solution for that: PEAR Mail_Queue
Works fine for me.
A: As mercutio suggested, I would insert a new record into a mail queue table for each email waiting to be sent, and then use a separate process (like a cron job) to check the table periodically for any queued items.
If any emails are queued (and the email is not customised for each recipient), I would then group the emails by domain and send blocks together to reduce the total number of emails that have to be sent. I.e., if you have 1000 emails queued and 250 are to Gmail accounts, I would send the 250 in 25 blocks of 10 (remember to Bcc recipients to avoid them seeing each other).
To actually send the mail I would use PEAR Mail over PHP's mail() function.
After sending the email, either delete the record(s) from the queue or change a status flag to show it was sent, and loop. I would also add a counter to keep track of emails that failed to send and remove them after x failed attempts.
To overcome timeout issues I would (depending on the situation) either:
- set set_time_limit() to x seconds and keep track of the script execution time (killing the script after (x-1) seconds),
- call the script from the command line to avoid timeouts, or
- set a limit on the number of emails the script can send in one execution.
A: Sure, the database table might be an idea. But what about sending 1000 e-mails with a 2 MB attachment? You'd have to take that into account as well. I had the problem myself, and I eventually resorted to writing the e-mail to the database and the files to the filesystem. The e-mail script I used then read the database records and tried to fetch the attachments to send.
A: Are you sure you need to do this mail queuing yourself?
Just deliver all mail to the local machine's mail transfer agent (sendmail...) and let that take care of the queuing and sending. After all, that's what it was designed for.
In other words: don't worry about it!
A: I created Emailqueue, which is a server that allows you to add emails to a queue so your app get relieved of the stress of the mailing, and also provides useful additional options, like the ability to program emails to be sent in the future, or setting per-email sending priorities. I think this might very well be what you're searching for.
Emailqueue is available here: https://github.com/tin-cat/emailqueue
And there is also a Docker version that allows you to set up a working Emailqueue server in just a few minutes, here: https://github.com/tin-cat/emailqueue-docker
A: I've generally relied on a hack.
I have a database list of email addresses and then use a meta-redirect to self with an increasing 'offset' parameter that specifies which row in the database I am up to. Server redirects cause problems because browsers assume that the time taken indicates an infinite loop.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17609",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How do you place a file in recycle bin instead of delete? Programmatic solution of course...
A: http://www.daveamenta.com/2008-05/c-delete-a-file-to-the-recycle-bin/
From above:
using Microsoft.VisualBasic;
string path = @"c:\myfile.txt";
FileIO.FileSystem.DeleteFile(path,
FileIO.UIOption.OnlyErrorDialogs,
FileIO.RecycleOption.SendToRecycleBin);
A: You need to delve into unmanaged code. Here's a static class that I've been using:
public static class Recycle
{
private const int FO_DELETE = 3;
private const int FOF_ALLOWUNDO = 0x40;
private const int FOF_NOCONFIRMATION = 0x0010;
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Auto, Pack = 1)]
public struct SHFILEOPSTRUCT
{
public IntPtr hwnd;
[MarshalAs(UnmanagedType.U4)]
public int wFunc;
public string pFrom;
public string pTo;
public short fFlags;
[MarshalAs(UnmanagedType.Bool)]
public bool fAnyOperationsAborted;
public IntPtr hNameMappings;
public string lpszProgressTitle;
}
[DllImport("shell32.dll", CharSet = CharSet.Auto)]
static extern int SHFileOperation(ref SHFILEOPSTRUCT FileOp);
public static void DeleteFileOperation(string filePath)
{
SHFILEOPSTRUCT fileop = new SHFILEOPSTRUCT();
fileop.wFunc = FO_DELETE;
fileop.pFrom = filePath + '\0' + '\0';
fileop.fFlags = FOF_ALLOWUNDO | FOF_NOCONFIRMATION;
SHFileOperation(ref fileop);
}
}
Addendum:
*
*Tsk tsk @ Jeff for "using Microsoft.VisualBasic" in C# code.
*Tsk tsk @ MS for putting all the goodies in VisualBasic namespace.
A: The best way I have found is to use the VB function FileSystem.DeleteFile.
Microsoft.VisualBasic.FileIO.FileSystem.DeleteFile(file.FullName,
Microsoft.VisualBasic.FileIO.UIOption.OnlyErrorDialogs,
Microsoft.VisualBasic.FileIO.RecycleOption.SendToRecycleBin);
It requires adding Microsoft.VisualBasic as a reference, but this is part of the .NET framework and so isn't an extra dependency.
Alternate solutions require a P/Invoke to SHFileOperation, as well as defining all the various structures/constants. Including Microsoft.VisualBasic is much neater by comparison.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17612",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29"
} |
Q: What is the easiest way using T-SQL / MS-SQL to append a string to existing table cells? I have a table with a 'filename' column.
I recently performed an insert into this column but in my haste forgot to append the file extension to all the filenames entered. Fortunately they are all '.jpg' images.
How can I easily update the 'filename' column of these inserted fields (assuming I can select the recent rows based on known id values) to include the '.jpg' extension?
A: MattMitchell's answer is correct if the column is a CHAR(20), but is not true if it was a VARCHAR(20) and the spaces hadn't been explicitly entered.
If you do try it on a CHAR field without the RTRIM function you will get a "String or binary data would be truncated" error.
A: Nice easy one I think.
update MyTable
set filename = filename + '.jpg'
where ...
Edit: Ooh +1 to @MattMitchell's answer for the rtrim suggestion.
A: The solution is:
UPDATE tablename SET [filename] = RTRIM([filename]) + '.jpg' WHERE id > 50
RTRIM is required because otherwise the [filename] column in its entirety will be selected for the string concatenation i.e. if it is a varchar(20) column and filename is only 10 letters long then it will still select those 10 letters and then 10 spaces. This will in turn result in an error as you try to fit 20 + 3 characters into a 20 character long field.
A: If the original data came from a char column or variable (before being inserted into this table), then the original data had the spaces appended before becoming a varchar.
DECLARE @Name char(10), @Name2 varchar(10)
SELECT
@Name = 'Bob',
@Name2 = 'Bob'
SELECT
CASE WHEN @Name2 = @Name THEN 1 ELSE 0 END as Equal,
CASE WHEN @Name2 like @Name THEN 1 ELSE 0 END as Similar
Life Lesson: never use char.
A: The answer to the mystery of the trailing spaces can be found in the ANSI_PADDING setting.
For more information visit: SET ANSI_PADDING (Transact-SQL)
The default is ANSI_PADDING ON. This affects a column only when it is created, not existing columns.
Before you run the update query, verify your data. It could have been compromised.
Run the following query to find compromised rows:
SELECT *
FROM tablename
WHERE LEN(RTRIM([filename])) > 46
-- The column size varchar(50) minus 4 chars
-- for the needed file extension '.jpg' is 46.
These rows either have lost some characters or there is not enough space for adding the file extension.
A: I wanted to adjust David B's "Life Lesson". I think it should be "never use char for variable length string values" -> There are valid uses for the char data type, just not as many as some people think :)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17624",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Differences between unix and windows files Am I correct in assuming that the only difference between "windows files" and "unix files" is the linebreak?
We have a system that has been moved from a windows machine to a unix machine and are having troubles with the format.
I need to automate the translation between unix/windows before the files get delivered to the system in our "transportsystem". I'll probably need something to determine the current format and something to transform it into the other format.
If it's just the newline that's the big difference then I'm considering just reading the files with java.io. As far as I know, its readers are able to handle both with readLine. And then just write each line back with
while (line = readline)
print(line + NewlineInOtherFormat)
....
Summary:
samjudson:
This is only a difference in text files, where UNIX uses a single Line Feed (LF) to signify a new line, Windows uses a Carriage Return/Line Feed (CRLF) and Mac uses just a CR.
to which Cebjyre elaborates:
OS X uses LF, the same as UNIX - MacOS 9 and below did use CR though
Mo
There could also be a difference in character encoding for national characters. There is no "unix-encoding" but many linux-variants use UTF-8 as the default encoding. Mac OS (which is also a unix) uses its own encoding (macroman). I am not sure, what windows default encoding is.
McDowell
In addition to the new-line differences, the byte-order mark can cause problems if files are treated as Unicode on Windows.
Cheekysoft
However, another set of problems that you may come across can be related to single/multi-byte character encodings. If you see strange unexpected chars (not at end-of-line) then this could be the reason. Especially if you see square boxes, question marks, upside-down question marks, extra characters or unexpected accented characters.
Sadie
On unix, files that start with a . are hidden. On windows, it's a filesystem flag that you probably don't have easy access to. This may result in files that are supposed to be hidden now becoming visible on the client machines.
File permissions vary between the two. You will probably find, when you copy files onto a unix system, that the files now belong to the user that did the copying and have limited rights. You'll need to use chown/chmod to make sure the correct users have access to them.
There exist tools to help with the problem:
pauldoo
If you are just interested in the content of text files, then yes the line endings are different. Take a look at something like dos2unix, it may be of help here.
Cheekysoft
As pauldoo suggests, tools like dos2unix can be very useful. Note that these may be on your linux/unix system as fromdos or tofrodos, or perhaps even as the general purpose toolbox recode.
Help for java coding
Cheekysoft
When writing to files or reading from files (that you are in control of), it is often worth specifying the encoding to use, as most Java methods allow this. However, also ensuring that the system locale matches can save a lot of pain.
A: There could also be a difference in character encoding for national characters. There is no "unix encoding", but many Linux variants use UTF-8 as the default encoding. Mac OS (which is also a unix) uses its own encoding (MacRoman). I am not sure what the Windows default encoding is.
But this could be another source of trouble (apart from the different linebreaks).
What are your problems? The linebreak-related problems can be easily corrected with the programs dos2unix or unix2dos on the unix machine.
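Typical usage on the unix side (the file name is just an example; both tools convert the file in place):
dos2unix datafile.txt
unix2dos datafile.txt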
A: If you are just interested in the content of text files, then yes the line endings are different. Take a look at something like dos2unix, it may be of help here.
(Of course there are many other things that make unix and windows files different, but I don't think you're interested in those other differences right now.)
A: In addition to the answers given, you may find issues with the different file systems:
*
*On unix, files that start with a . are hidden. On windows, it's a filesystem flag that you probably don't have easy access to. This may result in files that are supposed to be hidden now becoming visible on the client machines.
*File permissions vary between the two. You will probably find, when you copy files onto a unix system, that the files now belong to the user that did the copying and have limited rights. You'll need to use chown/chmod to make sure the correct users have access to them.
A: This is only a difference in text files, where UNIX uses a single Line Feed (LF) to signify a new line, Windows uses a Carriage Return/Line Feed (CRLF) and Mac uses just a CR.
For binary files there should be no difference (i.e. a JPEG on a Windows machine will be byte-for-byte the same as the same JPEG on a unix box).
A: In addition to the new-line differences, the byte-order mark can cause problems if files are treated as Unicode on Windows.
A: As pauldoo suggests, tools like dos2unix can be very useful. Note that these may be on your linux/unix system as fromdos or tofrodos, or perhaps even as the general purpose toolbox recode.
However, another set of problems that you may come across can be related to single/multi-byte character encodings. If you see strange unexpected chars (not at end-of-line) then this could be the reason. Especially if you see square boxes, question marks, upside-down question marks, extra characters or unexpected accented characters.
Running the command locale on your *nix box will tell you what the system locale is. If this is different to the encoding used in the text files that have been transferred over from the windows machine, then this can sometimes cause issues, depending on the usage of those files. You can use the very powerful recode command to try and convert between the different charsets as well as any line ending issues. recode -l will show you all of the formats and encodings that the tool can convert between. It is likely to be a VERY long list.
When writing to files or reading from files (that you are in control of), it is often worth specifying the encoding to use, as most Java methods allow this. However, also ensuring that the system locale matches can save a lot of pain.
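To illustrate the line-ending and encoding points together, here is a minimal Java sketch; the file names, the assumed source encoding (windows-1252) and the target line ending are illustrative assumptions, not part of the original system:
import java.io.*;
import java.nio.charset.Charset;

public class LineEndingConverter {
    public static void main(String[] args) throws IOException {
        Charset sourceEncoding = Charset.forName("windows-1252"); // assumed encoding of the incoming file
        Charset targetEncoding = Charset.forName("UTF-8");        // assumed encoding wanted on the unix side
        String targetNewline = "\n";                              // use "\r\n" when converting the other way
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(new FileInputStream(args[0]), sourceEncoding));
             Writer out = new BufferedWriter(
                new OutputStreamWriter(new FileOutputStream(args[1]), targetEncoding))) {
            String line;
            while ((line = in.readLine()) != null) { // readLine accepts LF, CRLF and CR
                out.write(line);
                out.write(targetNewline);
            }
        }
    }
}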
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17645",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: Retrieving the PC Name of a Client? (Windows Auth) I have an ASP.net Application that runs on the internal network (well, actually it's running on Sharepoint 2007).
I just wonder:
Can I somehow retrieve the name of the PC the Client is using? I would have access to Active Directory if that helps. The thing is, people use multiple PCs. So, I cannot use any manual/static mapping.
If possible, I do not want to use any client-side (read: JavaScript) code, but if it cannot be done server-side, JavaScript would be OK as well. ActiveX is absolutely out of question.
A: System.Web.HttpRequest.UserHostName as suggested in this answer just returns the IP :-(
But I just found this:
System.Net.Dns.GetHostEntry(Page.Request.UserHostAddress).HostName
That only works if there is actually a DNS Server to resolve the name, which is the case for my network.
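A small code-behind sketch of that reverse-lookup approach; the fallback behaviour is just an illustration:
string clientIp = Request.UserHostAddress;
string clientName;
try
{
    // Only works when a DNS server can resolve the client's IP back to a name
    clientName = System.Net.Dns.GetHostEntry(clientIp).HostName;
}
catch (System.Net.Sockets.SocketException)
{
    clientName = clientIp; // fall back to the raw IP if the reverse lookup fails
}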
A: Does System.Web.HttpRequest.UserHostName provide what you're looking for?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17664",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Suitable alternative to CryptEncrypt We have a situation in our product where for a long time some data has been stored in the application's database as SQL string (choice of MS SQL server or sybase SQL anywhere) which was encrypted via the Windows API function CryptEncrypt. (direct and de-cryptable)
The problem is that CryptEncrypt can produce NUL bytes in the output, meaning that when the CipherText is stored in the database as a string, the string manipulations will at some point truncate it.
Ideally we'd like to use an algo that will produce CipherText that doesn't contain NULLs as that will cause the least amount of change to the existing databases (changing a column from string to binary and code to deal with binary instead of strings) and just decrypt existing data and re-encrypt with the new algorithm at database upgrade time.
The algorithm doesn't need to be the most secure, as the database is already in a reasonably secure environment (not an open network / the inter-webs) but does need to be better than ROT13 (which I can almost decrypt in my head now!)
edit: btw, any particular reason for changing ciphertext to cyphertext? ciphertext seems more widely used...
A: Any semi-decent algorithm will end up with a strong chance of generating a NULL value somewhere in the resulting ciphertext.
Why not do something like base-64 encode your resulting binary blob before persisting to the DB? (sample implementation in C++).
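A rough sketch of that idea with the Win32 Crypt32 helper (error handling omitted; the flag choice and buffer handling are assumptions that should be verified against the CryptBinaryToString documentation):
#include <windows.h>
#include <wincrypt.h>
#include <string>
#include <vector>
#pragma comment(lib, "crypt32.lib")

// Base64-encode the raw output of CryptEncrypt so the stored string
// contains no NUL bytes and survives varchar handling unharmed.
std::string ToBase64(const std::vector<BYTE>& cipherText)
{
    DWORD chars = 0;
    // First call: ask how big the Base64 string will be.
    CryptBinaryToStringA(&cipherText[0], static_cast<DWORD>(cipherText.size()),
                         CRYPT_STRING_BASE64 | CRYPT_STRING_NOCRLF, NULL, &chars);
    std::vector<char> buffer(chars);
    // Second call: perform the actual conversion into the buffer.
    CryptBinaryToStringA(&cipherText[0], static_cast<DWORD>(cipherText.size()),
                         CRYPT_STRING_BASE64 | CRYPT_STRING_NOCRLF, &buffer[0], &chars);
    return std::string(&buffer[0]); // the buffer is NUL-terminated by the API
}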
A: Storing a hash is a good idea. However, please definitely read Jeff's You're Probably Storing Passwords Incorrectly.
A: That's an interesting route OJ.
We're looking at the feasibility of a non-reversible method (still making sure we don't explicitly retrieve the data to decrypt), e.g. just storing a hash to compare on a submission.
A: It seems that the developer handling this is going to wrap the existing encryption with yEnc to preserve the table integrity as the data needs to be retrievable, and this save all that messy mucking about with infinite-improbab.... uhhh changing column types on entrenched installations.
Cheers Guys
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17670",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Why does Guid.ToString() reverse the byte order? We're storing some Guid's in a MS SQL database. There's some legacy code that does Guid.ToString() and then passes them in to a varchar(64) and there's some newer code that passes them in using a unique identifier parameter. When you look at the results using MS SQL Management studio they look different. The byte order of the first three blocks is reversed but the last one remains the same. Why?
A: Uniqueidentifier fields in SQL Server can be indexed, and so are stored 'backwards'.
Guids can be generated from both machine specific info and 'event-time' information.
The default Guid in .Net is random, but you can get sequential Guids from it with an extern call:
[DllImport( "rpcrt4.dll", SetLastError = true )]
static extern int UuidCreateSequential( out Guid guid );
This will get you Guids based on your MAC address (MSDN docs) that are sequential.
If you .ToString() these sequential guids then you will see the first part of the string varies, while the rest stays constant.
This makes equality checks between Guids quicker (as the differences will be at the start) and improves the variation for truncated ones.
For searching columns SqlServer builds indexes in a similar way to a telephone directory or dictionary. It is much quicker to search for words starting with "Over*" than it would be to find ones ending in "*flow".
This means that for Sql server any sequential Guids need to be stored with the repeating value first, so it stores them back to front.
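A small illustration of the difference (the Guid literal is arbitrary): ToString() prints the first three blocks in their logical order, while ToByteArray() exposes the little-endian order in which those blocks are actually held, which is why they appear reversed.
var g = new Guid("00112233-4455-6677-8899-aabbccddeeff");
Console.WriteLine(g.ToString());
// 00112233-4455-6677-8899-aabbccddeeff
Console.WriteLine(BitConverter.ToString(g.ToByteArray()));
// 33-22-11-00-55-44-77-66-88-99-AA-BB-CC-DD-EE-FF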
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17679",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: WebSVN with VisualSVN Server, anyone gotten authentication to work? I have a VisualSVN Server installed on a Windows server, serving several repositories.
Since the web-viewer built into VisualSVN server is a minimalistic subversion browser, I'd like to install WebSVN on top of my repositories.
The problem, however, is that I can't seem to get authentication to work. Ideally I'd like my current repository authentication as specified in VisualSVN to work with WebSVN, so that though I see all the repository names in WebSVN, I can't actually browse into them without the right credentials.
By visiting the cached copy of the topmost link on this google query you can see what I've found so far that looks promising.
(the main blog page seems to have been destroyed, domain of the topmost page I'm referring to is the-wizzard.de)
There I found some php functions I could tack onto one of the php files in WebSVN. I followed the modifications there, but all I succeeded in doing was make WebSVN ask me for a username and password and no matter what I input, it won't let me in.
Unfortunately, PHP and Apache are largely black magic to me.
So, has anyone successfully integrated WebSVN with VisualSVN hosted repositories?
A: I got WebSVN authentication working with VisualSVN server, albeit with a lot of hacking/trial-error customization of my own.
Here's how I did it:
*
*If you haven't already, install PHP manually by downloading the zip file and going through the online php manual install instructions. I installed PHP to C:\PHP
*Extract the websvn folder to C:\Program Files\VisualSVN Server\htdocs\
*Go through the steps of configuring the websvn directory, i.e. rename configdist.php to config, etc. My repositories were located in C:\SVNRepositories, so to configure the authentication file, I set the config.php line so: $config->useAuthenticationFile('C:/SVNRepositories/authz'); // Global access file
*Add the following to C:\Program Files\VisualSVN Server\conf\httpd-custom.conf :
# For PHP 5 do something like this:
LoadModule php5_module "c:/php/php5apache2_2.dll"
AddType application/x-httpd-php .php
# configure the path to php.ini
PHPIniDir "C:/php"
<IfModule dir_module>
DirectoryIndex index.html index.php
</IfModule>
<Location /websvn/>
Options FollowSymLinks
AuthType Basic
AuthName "Subversion Repository"
Require valid-user
AuthUserFile "C:/SVNRepositories/htpasswd"
AuthzSVNAccessFile "C:/SVNRepositories/authz"
SVNListParentPath on
SVNParentPath "C:/SVNRepositories/"
</Location>
This worked for me, and websvn will only show those directories that are authorized for a given user. Note that in order for it to work right, you have to provide "Main Level" access to everybody, and then disable access to certain sub-directories for certain users. For example, I have one user who doesn't have main level access, but does have access to a sub-level. Unfortunately, this person can't see anything in websvn, even if he links directly to filedetails.php for a file he's authorized to see. In my case it's not a big deal because I don't want him accessing websvn anyway, but it's something you'll want to know.
Also, this sets the server up for an SSL connection, so once you've set it up, the address will be an https:// address, not the regular http:// one.
A: I'm using VisualSVN Server and I just got done installing Trac. My goal was to get a better web-based repository browser, and Trac is definitely one of the better ones I've seen for Subversion. Go to http://www.visualsvn.com/server/trac/ installation is really quite straightforward. Yes, Trac has a ticket tracking and a wiki system, which you may not be looking for, but the repository and log browser sell it for me.
Now, I have found that it is possible to disable the wiki and ticket tracking systems that come with Trac through simply appending
[components]
trac.ticket.* = disabled
trac.wiki.* = disabled
to the end of the trac.ini configuration file. This causes the start page of the wiki to throw an error that the wiki module cannot be found, so you have to set Trac to open with either the Timeline (log view) or the Repository Browser on startup by editing trac.ini again and adding the following under the [trac] heading:
for the log timeline as default
default_handler = TimelineModule
for the repository browser as default
default_handler = BrowserModule
A: I got this to work with windows authentication (which is actually AuthType VisualSVN). The trick is to comment out the svn auth and replace it with the same sort of auth text found in the main config file. Thanks to Anthony Johnson for working out all the other details.
# For PHP 5 do something like this:
LoadModule php5_module "F:/wamp/bin/php/php5.3.0/php5apache2_2.dll"
AddType application/x-httpd-php .php
# configure the path to php.ini
PHPIniDir "f:/wamp/bin/php/php5.3.0/"
<IfModule dir_module>
DirectoryIndex index.html index.php
</IfModule>
#Alias /websvn/ "F:/Program Files/VisualSVN Server/htdocs/websvn-2.3.1/"
<Location /websvn-2.3.1/>
Options FollowSymLinks
AuthName "Subversion Repositories"
AuthType VisualSVN
AuthzVisualSVNAccessFile "F:/Repositories/authz-windows"
AuthnVisualSVNBasic on
AuthnVisualSVNIntegrated off
AuthnVisualSVNUPN Off
Require valid-user
SVNListParentPath on
SVNParentPath "f:/Repositories/"
</Location>
A: If you are looking for a web-based repository browser which is more feature-rich than the default one and you use VisualSVN Server, then upgrade to VisualSVN Server 3.2 or newer.
VisualSVN Server has a rich web interface for Subversion repositories. Unlike WebSVN, VisualSVN Server's built-in web client works out of the box and does not require an administrator to perform any configuration tasks.
You can see the live demo here: http://demo-server.visualsvn.com/!/
A: I am the author of the article you mentioned. The information I published was only meant for WebSVN running on IIS. It is my understanding that the software should "just work" when you use PHP on Apache, although I have never set it up in that environment. Have you tried doing some "echo"-debugging (for the lack of a better term) to see where exactly the authentication fails?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17681",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Unix subsystem for windows One of the bullet point features for Windows Vista Enterprise and Ultimate is the Unix subsystem for Windows, which allows you to write posix... stuff? Anyway I'm out of my league talking about it... Anyone use this feature? Or explain it...
I know next to nothing about Unix programming.
A: It's probably best not to try to use the Posix subsystem for Windows. It was never really complete and is just a useless marketing tick box.
If you're truly interested in programming stuff for Unix, download one of the many Linux distributions (e.g. Ubuntu) and VirtualBox. Install and start playing.
A: You might like Cygwin for having a Linux environment on your windows machine. Otherwise, definitely go for an isolated environment (virtual machines) like the others have suggested.
A: I don't want to discourage you from trying Linux. But in this context it should be pointed out that Linux is not completely POSIX compliant!
Wikipedia has a list of fully posix compliant operating systems
From that list, Solaris is probably the best to get started.
But anyway - for most of your posix-needs Linux should be the best choice (especially for beginners!)
A: The Posix subsystem in Windows is not only incomplete, but also slower in many cases than the "native" windows functions for the same thing. This is true for I/O for example.
A: In addition to Cygwin mentioned by another poster you should also consider MinGW.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17704",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: When is a file just a file? So, you're writing a web application and you have several areas of the site where the user can upload files. My basic working method for this is to store the actual file on the server, and have a database table that connects the stored filename to the record it relates to.
My question is this: Should there be a different table for each "type" of file? Also, should the files be stored in context-related locations on the server, or all together?
Some examples: user profile photos, job application CVs, related documents on CMS pages, etc.
A: From your example, there is an argument for two tables, as you have files that can be associated with two different things.
*
*CVs, photos are associated with a user.
*attachments are associated with a CMS page.
If you put these in one table (and you want to allow users to have more than one photo or CV), then you need two link tables to associate files->users and files->cms_pages. Arguably this implies a HABTM relationship, which is not correct and allows for inconsistent data.
The two table approach is slightly cleaner and only allows files to be associated with the correct type of entity with a simple belongsTo relationship.
But I don't think there is any "right" answer to this question, unless you need to store different types of metadata for different filetypes.
Also be sure to store, or be able to calculate, the mimetype for each file so it can be served correctly back to the browser, with the correct HTTP headers.
A: From what you've said I would just store files with random (UUID or what-not) filenames in one place. I would then have an 'attachments' table or something that contains references to all your external files. This table would also contain the meta-data for that file, so what type of file it is (picture, CV etc) and so on.
There may be hard limits to the number of files in one directory though, depending on what FS you are using.
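A rough sketch of such an attachments table, written here in SQL Server syntax; the column names and sizes are assumptions, not a prescription:
CREATE TABLE attachments (
    id            INT IDENTITY(1,1) PRIMARY KEY,
    stored_name   VARCHAR(36)  NOT NULL,  -- random UUID used as the filename on disk
    original_name VARCHAR(255) NOT NULL,  -- name the user uploaded the file with
    mime_type     VARCHAR(100) NOT NULL,  -- so the file can be served with the right headers
    file_kind     VARCHAR(20)  NOT NULL,  -- 'photo', 'cv', 'cms_document', ...
    uploaded_at   DATETIME     NOT NULL
)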
A: There might be various reasons for storing different files in different locations.
Firstly, a restriction on the number of files in one directory might be a consideration.
Secondly security might be an issue - if some are to be publicly viewable (such as profile photos for example) but others are not (such as CVs) then placing them in different directories would be easier to manage.
Thirdly, simple admin tasks may be easier if files are split, browsing in a file explorer for example, or managing backups, or modifying the application to split file storage across multiple locations.
There is also the issue of filename conflicts, but if you rename everything to match the database id field (for example) then this wouldn't be an issue.
But at the end of the day it probably depends on volumes and your own preference.
A: A different table for each file type only becomes relevant if you store other metadata (and therefore, additional columns) for each type of file. If your tables for each file type only contain the same columns (e.g., filename, filetype, dateuploaded, etc) then it would make sense to have them all on one table.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17715",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Migrating from MySQL to PostgreSQL We are currently using MySQL for a product we are building, and are keen to move to PostgreSQL as soon as possible, primarily for licensing reasons.
Has anyone else done such a move? Our database is the lifeblood of the application and will eventually be storing TBs of data, so I'm keen to hear about experiences of performance improvements/losses, major hurdles in converting SQL and stored procedures, etc.
Edit: Just to clarify to those who have asked why we don't like MySQL's licensing. We are developing a commercial product which (currently) depends on MySQL as a database back-end. Their license states we need to pay them a percentage of our list price per installation, and not a flat fee. As a startup, this is less than appealing.
A: We did a move from MySQL 3 to PostgreSQL 8.2 and then 8.3. PostgreSQL covers the basics of SQL and a lot more, so if your MySQL usage doesn't rely on fancy MySQL-specific features you will be OK.
From my experience, our MySQL database (version 3) didn't have foreign keys... PostgreSQL lets you have them, so we had to change that... and it was a good thing, and we found some mistakes.
The other thing that we had to change was the (C#) connector code, which wasn't the same for MySQL. The MySQL connector was more stable than the PostgreSQL one. We still have a few problems with the PostgreSQL one.
A: Steve, I had to migrate my old application the other way around, that is PgSQL->MySQL. I must say, you should consider yourself lucky ;-)
Common gotchas are:
*
*SQL is actually pretty close to the language standard, so you may suffer from the MySQL dialect you already know
*MySQL quietly truncates varchars that exceed max length, whereas Pg complains - quick workaround is to have these columns as 'text' instead of 'varchar' and use triggers to truncate long lines
*double quotes are used instead of reverse apostrophes
*boolean fields are compared using IS and IS NOT operators, however MySQL-compatible INT(1) with = and <> is still possible
*there is no REPLACE, use DELETE/INSERT combo
*Pg is pretty strict on enforcing foreign key integrity, so don't forget to use ON DELETE CASCADE on references
*if you use PHP with PDO, remember to pass a parameter to lastInsertId() method - it should be sequence name, which is created usually this way: [tablename]_[primarykeyname]_seq
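A minimal PDO sketch of that last point (the connection string, table and sequence names are made up):
$pdo = new PDO('pgsql:host=localhost;dbname=appdb', 'appuser', 'secret');
$pdo->prepare('INSERT INTO users (name) VALUES (?)')->execute(array('Bob'));
$id = $pdo->lastInsertId('users_id_seq'); // sequence name follows [tablename]_[primarykeyname]_seq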
I hope that helps at least a bit. Have lots of fun playing with Postgres!
A: I have done a similar conversion, but for different reasons. It was because we needed better ACID support, and the ability to have web users see the same data they could via other DB tools (one ID for both).
Here are the things that bit us:
*
*MySQL does not enforce constraints as strictly as PostgreSQL.
*There are different date handling routines. These will need to be manually converted.
*Any code that does not expect ACID compliance may be an issue.
That said, once it was in place and tested, it was much nicer. With correct locking for safety reasons and heavy concurrent use, PostgreSQL performed better than MySQL. On the things where locking was not needed (read only) the performance was not quite as good, but it was still faster than the network card, so it was not an issue.
Tips:
*
*The automated scripts in the contrib directory are a good starting point for your conversion, but will need to be touched a little usually.
*I would highly recommend that you use the serializable isolation level as a default.
*The pg_autodoc tool is good to really see your data structures and help find any relationships you forgot to define and enforce.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17717",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26"
} |
Q: Experience with Hadoop? Have any of you tried Hadoop? Can it be used without the distributed filesystem that goes with it, in a shared-nothing architecture? Would that make sense?
I'm also interested in any performance results you have...
A: Yes, you can use Hadoop on a local filesystem by using file URIs instead of hdfs URIs in various places. I think a lot of the examples that come with Hadoop do this.
This is probably fine if you just want to learn how Hadoop works and the basic map-reduce paradigm, but you will need multiple machines and a distributed filesystem to get the real benefits of the scalability inherent in the architecture.
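For example, the stock word-count example job can be pointed at the local filesystem with file:// URIs; the jar name and paths here are hypothetical:
hadoop jar hadoop-mapreduce-examples.jar wordcount file:///home/me/books file:///home/me/wordcount-output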
A: Hadoop MapReduce can run on top of any number of file systems or even more abstract data sources such as databases. In fact there are a couple of built-in classes for non-HDFS filesystem support, such as S3 and FTP. You could easily build your own input format as well by extending the basic InputFormat class.
Using HDFS brings certain advantages, however. The most potent advantage is that the MapReduce job scheduler will attempt to execute maps and reduces on the physical machines that are storing the records in need of processing. This brings a performance boost as data can be loaded straight from the local disk instead of transferred over the network, which depending on the connection may be orders of magnitude slower.
A: As Joe said, you can indeed use Hadoop without HDFS. However, throughput depends on the cluster's ability to do computation near where data is stored. Using HDFS has 2 main benefits IMHO 1) computation is spread more evenly across the cluster (reducing the amount of inter-node communication) and 2) the cluster as a whole is more resistant to failure due to data unavailability.
If your data is already partitioned or trivially partitionable, you may want to look into supplying your own partitioning function for your map-reduce task.
A: The best way to wrap your head around Hadoop is to download it and start exploring the included examples. Use a Linux box/VM and your setup will be much easier than on Mac or Windows. Once you feel comfortable with the samples and concepts, then start to see how your problem space might map onto the framework.
A couple resources you might find useful for more info on Hadoop:
Hadoop Summit Videos and Presentations
Hadoop: The Definitive Guide: Rough Cuts Version - This is one of the few (only?) books available on Hadoop at this point. I'd say it's worth the price of the electronic download option even at this point ( the book is ~40% complete ).
A: Parallel/ Distributed computing = SPEED << Hadoop makes this really really easy and cheap since you can just use a bunch of commodity machines!!!
Over the years disk storage capacities have increased massively but the speeds at which you read the data have not kept up. The more data you have on one disk, the slower the seeks.
Hadoop is a clever variant of the divide and conquer approach to problem solving.
You essentially break the problem into smaller chunks and assign the chunks to several different computers to perform processing in parallel to speed things up rather than overloading one machine. Each machine processes its own subset of data and the result is combined in the end. Hadoop on a single node isn't going to give you the speed that matters.
To see the benefit of Hadoop, you should have a cluster with at least 4 - 8 commodity machines (depending on the size of your data) on the same rack.
You no longer need to be a super genius parallel systems engineer to take advantage of distributed computing. Just know Hadoop with Hive and you're good to go.
A: Yes, Hadoop can very well be used without HDFS. HDFS is just the default storage for Hadoop. You can replace HDFS with any other storage such as a database. HadoopDB is an augmentation over Hadoop that uses databases instead of HDFS as a data source. Google it, you will find it easily.
A: If you're just getting your feet wet, start out by downloading CDH4 & running it. You can easily install into a local Virtual Machine and run in "pseudo-distributed mode" which closely mimics how it would run in a real cluster.
A: Yes, you can use the local file system by using file:// when specifying the input file etc., and this also works with small data sets. But the actual power of Hadoop is based on its distributed and sharing mechanism. Hadoop is used for processing huge amounts of data. That amount of data cannot be processed by a single local machine, or even if it can, it will take a lot of time to finish the job. Since your input file is in a shared location (HDFS), multiple mappers can read it simultaneously, which reduces the time to finish the job. In a nutshell, you can use it with the local file system, but to meet the business requirement you should use it with a shared file system.
A: Great theoretical answers above.
To change your Hadoop file system to the local one, you can change it in the "core-site.xml" configuration file as below, for Hadoop versions 2.x.x.
<property>
<name>fs.defaultFS</name>
<value>file:///</value>
</property>
for hadoop versions 1.x.x.
<property>
<name>fs.default.name</name>
<value>file:///</value>
</property>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17721",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |