Q: Large, Complex Objects as a Web Service Result Hello again ladies and gents!
OK, following on from my other question on ASP.NET Web Service Results, Proxy Classes and Type Conversion. I've come to a part in my project where I need to get my thinking cap on.
Basically, we have a large, complex custom object that needs to be returned from a Web Service and consumed in the client application.
Now, based on the previous discussion, we know this is going to then take the form of the proxy class(es) as the return type. To overcome this, we need to basically copy the properties from one to the other.
In this case, that is something that I would really, really, really! like to avoid!
So, it got me thinking, how else could we do this?
My current thoughts are to enable the object for complete serialization to XML and then return the XML as a string from the Web Service. We then de-serialize at the client. This will mean a fair bit of attribute decorating, but at least the code at both endpoints will be light, namely by just using the .NET XML Serializer.
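Something like this is what I have in mind on each side (a rough sketch; MyComplexObject stands in for our real type, which would carry the serialization attributes):
using System.IO;
using System.Xml.Serialization;
public static class XmlTransport
{
    // Service side: serialize the object graph to an XML string.
    public static string ToXml(MyComplexObject value)
    {
        XmlSerializer serializer = new XmlSerializer(typeof(MyComplexObject));
        using (StringWriter writer = new StringWriter())
        {
            serializer.Serialize(writer, value);
            return writer.ToString();
        }
    }
    // Client side: deserialize the string back into the shared type.
    public static MyComplexObject FromXml(string xml)
    {
        XmlSerializer serializer = new XmlSerializer(typeof(MyComplexObject));
        using (StringReader reader = new StringReader(xml))
        {
            return (MyComplexObject)serializer.Deserialize(reader);
        }
    }
}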
What are your thoughts on this?
A: The .Net XML (de)serialisation is pretty nicely implemented. At first thought, I don't think this is a bad idea at all.
If the two applications import the same C# class(es) definition(s), then this is a relatively nice way of getting copy-constructor behaviour for free. If the class structure changes, then everything will work when both sides get the new class definition, without needing to make any additional changes on the web-service consumption/construction side.
There's a slight overhead in marshalling and demarshalling the XML, but that is probably dwarfed by the overhead of the remote web service call. .Net XML serialisation is well understood by most programmers and should produce an easy-to-maintain solution.
A: I'm loving JSON for this kind of thing. I just finished a POC drop-things-type portal for my company using jQuery to contact web services with script service enabled. The messages are lightweight and parsing etc. is pretty much handled. The jQuery Ajax material I read is here (loving it!): jquery ajax article
A: I had some great answers on a very similar topic yesterday that might be useful for you:
Communication between javascript and the server
A: Rob, in looking at your other question as well as this one, it's sounds like the exact situation we have in our environment. What we've done, however, is move away from ASP.Net web services to WCF web services and in the process solved (for the most part) this problem.
If there is any chance your web service could be implemented as a WCF web service, this might work for you as well. I should mention, that at the same time, we've maintained backwards compatibility with some client applications that need the "ASP.Net web service style" of implementation by using the WCF basichttp binding for the service transport. The end result is that our "newer" client applications are able to use our real business objects (through referencing an assembly containing only these shared objects) as the return types from the web service calls because they make actual WCF calls.
We do this by not utilizing the auto-generated proxy classes and constructing our own client channel to communicate with the WCF service.
If you can possibly use WCF, let me know I can post some additional information.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17725",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: When should assertions stay in production code? There's a discussion going on over at comp.lang.c++.moderated about whether or not assertions, which in C++ only exist in debug builds by default, should be kept in production code or not.
Obviously, each project is unique, so my question here is not so much whether assertions should be kept, but in which cases this is recommendable/not a good idea.
By assertion, I mean:
*
*A run-time check that tests a condition which, when false, reveals a bug in the software.
*A mechanism by which the program is halted (maybe after really minimal clean-up work).
I'm not necessarily talking about C or C++.
My own opinion is that if you're the programmer, but don't own the data (which is the case with most commercial desktop applications), you should keep them on, because a failing assertion shows a bug, and you should not go on with a bug, with the risk of corrupting the user's data. This forces you to test strongly before you ship, and makes bugs more visible, thus easier to spot and fix.
What's your opinion/experience?
See related question here
Responses and Updates
An assertion is an error, pure and simple, and therefore should be handled like one.
Since an error should be handled in release mode, you don't really need assertions.
That's why I prefer the word "bug" when talking about assertions. It makes things much clearer. To me, the word "error" is too vague. A missing file is an error, not a bug, and the program should deal with it. Trying to dereference a null pointer is a bug, and the program should acknowledge that something smells like bad cheese.
Hence, you should test the pointer with an assertion, but the presence of the file with normal error-handling code.
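To make that split concrete, here is a small C# sketch (Widget, LoadFrom and ReportMissingFile are made-up names for illustration):
using System.Diagnostics;
using System.IO;
public class Processor
{
    public void Process(Widget widget, string path)
    {
        // A null widget here would be a bug in the calling code: assert.
        Debug.Assert(widget != null, "widget must never be null here");
        // A missing file is an expected runtime condition: handle it.
        if (!File.Exists(path))
        {
            ReportMissingFile(path); // normal error-handling path
            return;
        }
        widget.LoadFrom(path);
    }
    private void ReportMissingFile(string path) { /* log it, tell the user, ... */ }
}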
Slightly off-topic, but an important point in the discussion.
As a heads-up: if your assertions break into the debugger when they fail, fair enough. But there are plenty of reasons a file might not exist that are completely outside the control of your code: read/write rights, disk full, USB device unplugged, etc. Since you don't have control over them, I feel assertions are not the right way to deal with that.
Yes, I have Code Complete, and must say I strongly disagree with that particular advice.
Say your custom memory allocator screws up and zeroes a chunk of memory that is still used by some other object. It happens to zero a pointer that this object dereferences regularly, and one of the invariants is that this pointer is never null, and you have a couple of assertions to make sure it stays that way. What do you do if the pointer suddenly is null? Do you just if() around it, hoping that it works?
Remember, we're talking about production code here, so there's no breaking into the debugger and inspecting the local state. This is a real bug on the user's machine.
A: Unless profiling shows that the assertions are causing performance problems, I say they should stay in the production release as well.
However, I think this also requires that you handle assertion failures somewhat gracefully. For example, they should result in a general type of dialog with the option of (automatically) reporting the issue to the developers, and not just quit or crash the program. Also, you should be careful not to use assertions for conditions that you actually do allow, but possibly don't like or consider unwanted. Those conditions should be handled by other parts of the code.
A: Allow me to quote Steve McConnell's Code Complete. The section on Assertions is 8.2.
Normally, you don't want users to see assertion messages in production code; assertions are primarily for use during development and maintenance. Assertions are normally compiled into the code at development time and compiled out of the code for production.
However, later in the same section, this advice is given:
For highly robust code, assert and then handle the error anyway.
I think that as long as performance is not an issue, leave the assertion in, but rather than display a message, have it write to a log file. I think that advice is also in Code Complete, but I'm not finding it right now.
A: In my C++ I define REQUIRE(x) which is like assert(x) except that it throws an exception if the assertion fails in a release build.
Since a failed assertion indicates a bug, it should be treated seriously even in a Release build. When my code's performance matters, I will often use REQUIRE() for higher-level code and assert() for lower-level code that must run fast. I also use REQUIRE instead of assert if the failure condition may be caused by data passed in from code written by a third party, or by file corruption (optimally I would design the code specifically to be well behaved in case of file corruption, but we don't always have time to do that.)
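For readers outside C++, a rough C# analogue of the REQUIRE idea might look like this (a sketch, not my actual implementation):
using System;
public static class Check
{
    // Unlike a classic assert, this is not compiled out of release
    // builds: a failed requirement throws even in production.
    public static void Require(bool condition, string message)
    {
        if (!condition)
            throw new InvalidOperationException("Requirement failed: " + message);
    }
}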
They say you shouldn't show those assert messages to end-users because they won't understand them. So? End users may send you an email with a screen shot or some text of the error message, which helps you debug. If the user simply says "it crashed", you have no ability to fix it. It would be better to send the assertion-failure messages to yourself automatically, but that only works if (1) the software runs on a server you control/monitor or (2) the user has internet access and you can get their permission to send a bug report.
A: Leave assertions turned on in production code, unless you have measured that the program runs significantly faster with them turned off.
"If it's not worth measuring to prove it's more efficient, then it's not worth sacrificing clarity for a performance gamble." - Steve McConnell 1993
http://c2.com/cgi/wiki?ShipWithAssertionsOn
A: If you want to keep them, replace them with error handling. Nothing is worse than a program just disappearing. I see nothing wrong with treating certain errors as serious bugs, but they should be directed to a section of your program that is equipped to deal with them by collecting data, logging it, and informing the user that your app has encountered some unwanted condition and is exiting.
A: If you're even thinking of leaving assertions on in production, you're probably thinking about them wrong. The whole point of assertions is that you can turn them off in production, because they are not a part of your solution. They are a development tool, used to verify that your assumptions are correct. By the time you go into production, you should already have confidence in your assumptions.
That said, there is one case where I will turn assertions on in production: If we encounter a reproducible bug in production that we're having a hard time reproducing in a test environment, it may be helpful to reproduce the bug with assertions turned on in production, to see if they provide useful information.
A more interesting question is this: In your testing phase, when do you turn assertions off?
A: Provided they are handled just as any other error, I don't see a problem with it. Do bear in mind though that failed assertions in C, as with other languages, will just exit the program, and this isn't usually sufficient for production systems.
There are some exceptions - PHP, for instance, allows you to create a custom handler for assertion failures so that you can display custom errors, do detailed logging, etc. instead of just exiting.
A: Our database server software contains both production and debug assertions. Debug assertions are just that -- they are removed in production code. Production assertions only happen if (a) some condition exists that should never exist and (b) it is not possible to reliably recover from this condition. A production assertion indicates that a bug in the software has been encountered or some kind of data corruption has occurred.
Since this is a database system and we are storing potentially enterprise-critical data, we do whatever we can to avoid corrupted data. If a condition exists that may cause us to store incorrect data, we immediately assert, rollback all transactions, and stop the server.
Having said that, we also try to avoid production assertions in performance-critical routines.
A: Suppose a piece of code is in production, and it hits an assertion that would normally be triggered. The assertion has found a bug! Except it hasn't, because the assertion is turned off.
So what happens now? Either the program will (1) crash in an uninformative way at a point further removed from the source of the problem, or (2) run merrily to completion, likely giving the wrong result.
Neither scenario is inviting. Leave assertions active even in production.
A: Assertions should never stay in production code. If a particular assertion seems like it might be useful in production code, then it should not be an assertion; it should be a run time error check, i.e. something coded like this: if( condition != expected ) throw exception.
The term 'assertion' has come to mean "a development-time-only check which will not be performed on the field."
If you start thinking that assertions might make it to the field then you will inevitably also start making other dangerous thoughts, like wondering whether any given assertion is really worth making. There is no assertion which is not worth making. You should never be asking yourself "should I assert this or not?" You should only be asking yourself "Is there anything I forgot to assert?"
A: Assertions are comments that do not become outdated. They document which theoretical states are intended, and which states should not occur. If the code is changed so that the allowed states change, the developer is soon informed and needs to update the assertion.
A: I see asserts as in-line unit tests. Useful for a quick test while developing, but ultimately those assertions should be refactored out to be tested externally in unit tests.
A: I find it best to handle all errors that are in scope, and use assertions for assumptions that we're asserting ARE true.
i.e., if your program is opening/reading/closing a file, then not being able to open the file is in scope -- it's a real possibility, which would be negligent to ignore, in other words. So, that should have error-checking code associated with it.
However, let's say your fopen() is documented as always returning a valid, open file handle. You open the file, and pass it to your readfile() function.
That readfile function, in this context, and probably according to its design specification, can pretty much assume it's going to get a valid file ptr. So, it would be wasteful to add error-handling code for the negative case in such a simple program. However, it should at least document the assumption somehow, and ideally verify somehow that this is actually the case, before continuing its execution. It should not ACTUALLY assume the pointer will always be valid, in case it's called incorrectly, or it's copy/pasted into some other program, for example.
So, readfile() { assert(fptr != NULL); .. } is appropriate in this case, whilst full-blown error handling is not (ignoring the fact that actually reading the file would require some error handling system anyway).
And yes, those assertions should stay in production code, unless it's absolutely necessary to disable them. Even then, you should probably disable them only within performance-critical sections.
A: I rarely use assertions for anything other than compile-time type checking. I would use an exception instead of an assertion, just because most languages are built to handle them.
I offer an example
file = create-some-file();
_throwExceptionIf( file.exists() == false, "FILE DOES NOT EXIST");
against
file = create-some-file();
ASSERT(file.exists());
How would the application handle the assertion? I prefer the old try catch method of dealing with fatal errors.
A: Most of the time, when I use an assertion in Java (the assert keyword), I automatically add some production code after it. Depending on the case, it can be a logging message, an exception... or nothing.
In my view, all your assertions are critical in a dev release, but not in a production release. Some of them must be kept, others must be discarded.
A: ASSERTIONS are not errors and should not be handled as errors. When an assertion is thrown, this means that there is a bug in your code or alternatively in the code calling your code.
There are a few points to avoid enabling assertions in production code:
1. You don't want your end user to see a message like "ASSERTION failed MyPrivateClass.cpp line 147". The end user is NOT your QA engineer.
2. ASSERTIONS might influence performance
However, there is one strong reason to leave assertions on:
ASSERTIONS might influence performance and timing, and sadly this sometimes matters (especially in embedded systems), so stripping them from the production build means shipping something that behaves differently from the build you tested.
I tend to vote for leaving assertions on in production code, but making sure that these assertion printouts are not exposed to the end user.
~Yitzik
A: An assertion is an error, pure and simple, and therefore should be handled like one.
Since an error should be handled in release mode, you don't really need assertions.
The main benefit I see for assertions is a conditional break - they are much easier to set up than drilling through VC's windows to set up something that takes 1 line of code.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17732",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "189"
} |
Q: Repository organisation When I first started using revision control systems like CVS and SVN, I didn't really understand the concepts of the "trunk", branching, merging and tagging. I'm now starting to understand these concepts, and really get the importance and power behind them.
So, I'm starting to do it properly. Or so I think... This is what I understand so far: The latest release/stable version of your code should sit in /trunk/ while beta versions or bleeding edge versions sit inside the /branches/ directory as different directories for each beta release, and then merged into the trunk when you release.
Is this too simplistic a view on things? What repository layouts do you guys recommend? If it makes a difference, I'm using Subversion.
A: See these two questions on SO for more information:
*
*What does branch, tag and trunk really mean?
*Subversion question
A: What I do and normally see as a standard is:
The trunk should contain your main line of development, your unstable version.
You should create release branches for your releases.
Something like:
/trunk (here your are developing version 2.0)
/branches/RB-1.0 (this is the release branch for 1.0)
/branches/RB-1.5
When you find a bug in 1.5, you fix it in the RB branch and then merge to the trunk.
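In Subversion commands, that workflow looks roughly like this (repository URL and revision numbers are illustrative):
# Create the release branch when you ship 1.5
svn copy http://svn.example.com/repo/trunk http://svn.example.com/repo/branches/RB-1.5 -m "Create release branch for 1.5"
# Fix the bug in a working copy of RB-1.5, commit it (say as r1234),
# then merge that change into a trunk working copy:
svn merge -r 1233:1234 http://svn.example.com/repo/branches/RB-1.5 .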
I also recommend this book.
A: Eric has an excellent series of articles on Source Control use and organisational best practices.
Chapter 7 deals with branches (and yes, it recommends the /trunk/ and /branches/ directories you suggest).
A: I have used Perforce for a long time, and so my comments may be a little Perforce-centric, but the basic principles apply to any SCM software that has half decent branching.
I'm a very strong believer in using branched development practices. I have a "main" (aka "mainline") that represents the codebase from now to eternity. The aim is that this is, most of the time, stable and, if push came to shove, you could cut a release anytime that would reflect the current functionality of the system. Those pesky sales guys keep asking....
Developments happen in branches that are branched from MAIN (normally - occasionally you may want to branch from an existing dev branch). Integrate from MAIN to your dev branches as often as you can, to stop things diverging too much - or you can simply budget for a bigger integration period later. Only integrate your arse kicking new feature to MAIN when you are sure that it will go out in a forthcoming release.
Finally, you have a RELEASE line, with the option of different branches for different releases. There are some choices depending on the labelling capabilities of your SCM software, and how different major/minor revisions are likely to be. So you may opt, for example, for a release branch for every point release, or only for major rev numbers. Your mileage may vary.
Generally, branch from MAIN to release as late as possible. Bugfixes and last minute changes can either go straight into RELEASE for later integration to MAIN, or into MAIN for immediate integration back up. There's no hard and fast rule - do what works best. If, however, you have changes that may be submitted to MAIN (e.g. from a dev branch, or "little tweaks" by someone on MAIN), then do the former. It depends on how your team works, what your release cycles are etc.
E.g. I would have something like this:
//MYPROJECT/MAIN/... - the top level folder for a complete build of all the product in main.
//MYPROJECT/DEV/ArseKickingFeature/... - a branch from MAIN where developers work.
//MYPROJECT/RELEASE/1.0/...
//MYPROJECT/RELEASE/2.0/...
A non-trivial project will probably have a number of DEV branches active at once. When a development has been integrated into MAIN so that it is now part of the core project, kill off the old DEV branch as soon as you can. Many engineers will treat a DEV branch as their own personal space, and reuse it for different features over time. Discourage this.
If, after release, you have to fix a bug, then do that in the corresponding release branch. If the bug has been previously fixed in MAIN, then integrate across, unless the code has changed so much in MAIN the fix is different.
What really differentiates the codelines is the policies you use to manage them. For example, what tests get run, who reviews pre/post a change, what action happens if a build breaks. Typically policies - and therefore overhead - are strongest in release branches, and weakest in DEV. There's an article here that goes through some scenarios, and links to other useful things.
Finally, I recommend going with a simple structure to start with, and only introduce extra dev & release ones as needed.
Hope that helps, and is not stating-the-bleedin'-obvious too much.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17735",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Tracking Useful Information What do the clever programmers here do to keep track of handy programming tricks and useful information they pick up over their many years of experience? Things like useful compiler arguments, IDE short-cuts, clever code snippets, etc.
I sometimes find myself frustrated when looking up something that I used to know a year or two ago. My IE favorites probably represent a good chunk of the Internet in the late 1990s, so clearly that isn't effective (at least for me). Or am I just getting old?
So.. what do you do?
A: Two Things I do:
*
*I blog about it - this allows me to go back and search my own blog.
*We use the code snippet feature in Visual Studio.
Cheers.
A: I use:
*
*Google Notebook - I take notes for projects, books I'm reading, etc
*Delicious + Firefox plug in - Every time I see a good page I mark it.
*Windows Journal (in tablet pc) - When I need to draw something and then copy/cut/paste it. I have more distractions here, the web is always very close :)
*Small Moleskine paper notebook - It's always with me.
*Big paper notebook - When I need more space to write and less distractions.
Obviously these are for all useful information, not just for snippets or tips and tricks.
A: Why not set up a Wiki?
If you are on Windows, I know that ScrewTurn Wiki is pretty simple to deploy on a desktop/laptop. No database to fuss around with.
A: Blog about it.
One of the nice side-effects of blogging is that if you use a sensible categorization or tagging system, it's quite easy to search for stuff within your blog. The fact that you wrote about it also makes it easier to remember problems you have encountered before ("hey, I blogged about that!").
That's a great benefit aside from, of course, being able to share this information publicly so that others might be able to find your solution to a particular problem using Google.
A: A number of people I know swear by Google Notebook
A: I send them to my gmail account, that way I have them where ever I go, and they can be put into appropriate folders for later.
A: I second the blog about it technique...even Jeff said that's a major reason he blogs.
Also, regarding the wiki idea, if you set one up at work, be sure to encourage your coworkers to do the same. When someone finds something of interest they can just write a little "article" explaining what it is and how to do it... that way, not only are your own things easily available and quickly searchable, but you'll often find out things you never knew from other people in your group. That way it benefits everyone not just you.
A: I agree with emailing, the wiki and the blog. Emailing is the most useful. If you can't use GMail and you're on windows, install a desktop search utility (Windows search, Google Desktop, Copernic, etc)
I also like to jot it into a textfile and save it in my documents folder. Whatever desktop search utility you use will be able to find it easily. e.g.
//print spool stop.notes.txt
If the printer spooler stops, start it again by
- Services > Provision Networks > Restart Service
tags: printer provision no printer spooler cannot print remote desktop
A: Subscribe in Google Reader and then search later.
A: At my last place of work they wouldn't let me set up a wiki or anything - so I just made various word documents full of tips and instructions and gave that to my successor when I left.
Now though I'd use a private wiki, or maybe a blog.
A: For many years I've kept a Word doc named Knowledgebase.doc that contains all my notes with a decent table of contents. I like to keep everything in one searchable doc.
I use a sync tool to make sure the file is copied to all the machines I want it on.
A: I use TiddlyWiki stored in my DropBox account. Although, recently, Evernote is getting my attention; it has a really useful feature: you send a Twitter direct message to the Evernote user (myen) and it adds a note with your message (a really quick way to add notes or URLs for post-processing). Imagine, you can use a command-line Twitter client to create notes! (or any Twitter client). I really like this feature.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17770",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Anyone know a quick way to get to custom attributes on an enum value? This is probably best shown with an example. I have an enum with attributes:
public enum MyEnum {
[CustomInfo("This is a custom attrib")]
None = 0,
[CustomInfo("This is another attrib")]
ValueA,
[CustomInfo("This has an extra flag", AllowSomething = true)]
ValueB,
}
I want to get to those attributes from an instance:
public CustomInfoAttribute GetInfo( MyEnum enumInput ) {
Type typeOfEnum = enumInput.GetType(); //this will be typeof( MyEnum )
//here is the problem, GetField takes a string
// the .ToString() on enums is very slow
FieldInfo fi = typeOfEnum.GetField( enumInput.ToString() );
//get the attribute from the field
return fi.GetCustomAttributes( typeof( CustomInfoAttribute ), false ).
FirstOrDefault() //Linq method to get first or null
as CustomInfoAttribute; //use as operator to convert
}
As this is using reflection I expect some slowness, but it seems messy to convert the enum value to a string (which reflects the name) when I already have an instance of it.
Does anyone have a better way?
A: I generally find reflection to be quite speedy as long as you don't dynamically invoke methods.
Since you are just reading the Attributes of an enum, your approach should work just fine without any real performance hit.
And remember that you generally should try to keep things simple to understand. Over engineering this just to gain a few ms might not be worth it.
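That said, if the lookups ever do show up in a profile, caching the result per enum value is a cheap fix. A sketch reusing the question's types (not thread-safe as written):
using System;
using System.Collections.Generic;
using System.Reflection;
public static class MyEnumInfoCache
{
    private static readonly Dictionary<MyEnum, CustomInfoAttribute> cache =
        new Dictionary<MyEnum, CustomInfoAttribute>();
    public static CustomInfoAttribute GetInfo(MyEnum value)
    {
        CustomInfoAttribute info;
        if (!cache.TryGetValue(value, out info))
        {
            // Pay the reflection cost only once per distinct value.
            FieldInfo field = typeof(MyEnum).GetField(value.ToString());
            info = (CustomInfoAttribute)Attribute.GetCustomAttribute(
                field, typeof(CustomInfoAttribute));
            cache[value] = info;
        }
        return info;
    }
}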
A: This is probably the easiest way.
A quicker way would be to statically emit the IL code using DynamicMethod and ILGenerator. I've only used this to get at PropertyInfo, but I can't see why you couldn't emit the custom attribute lookup as well.
For example code to emit a getter from a property
public delegate object FastPropertyGetHandler(object target);
private static void EmitBoxIfNeeded(ILGenerator ilGenerator, System.Type type)
{
if (type.IsValueType)
{
ilGenerator.Emit(OpCodes.Box, type);
}
}
public static FastPropertyGetHandler GetPropertyGetter(PropertyInfo propInfo)
{
// generates a dynamic method to generate a FastPropertyGetHandler delegate
DynamicMethod dynamicMethod =
new DynamicMethod(
string.Empty,
typeof (object),
new Type[] { typeof (object) },
propInfo.DeclaringType.Module);
ILGenerator ilGenerator = dynamicMethod.GetILGenerator();
// loads the object into the stack
ilGenerator.Emit(OpCodes.Ldarg_0);
// calls the getter
ilGenerator.EmitCall(OpCodes.Callvirt, propInfo.GetGetMethod(), null);
// creates code for handling the return value
EmitBoxIfNeeded(ilGenerator, propInfo.PropertyType);
// returns the value to the caller
ilGenerator.Emit(OpCodes.Ret);
// converts the DynamicMethod to a FastPropertyGetHandler delegate
// to get the property
FastPropertyGetHandler getter =
(FastPropertyGetHandler)
dynamicMethod.CreateDelegate(typeof(FastPropertyGetHandler));
return getter;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17772",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: What Certificate Authority Software is Available? I am running a number of SSL-encrypted websites, and need to generate certificates to run on these. They are all internal applications, so I don't need to purchase a certificate, I can create my own.
I have found it quite tedious to do everything using openssl all the time, and figure this is the kind of thing that has probably been done before and software exists for it.
My preference is for linux-based systems, and I would prefer a command-line system rather than a GUI.
Does anyone have some suggestions?
A: An option that doesn't require your own CA is to get certificates from CAcert (they're free).
I find it convenient to add the two CAcert root certificates to my client machines, then I can manage all the SSL certificates through CAcert.
A: It's likely that self-signing will give you what you need; here is a page (link resurrected by web.archive.org) that provides a decent guide to self-signing if you would like to know the ins and outs of how it's done and how to create your own script.
The original script link from this response is unfortunately dead and I was unable to find an archive of it, but there are many alternatives for pre-rolled shell scripts out there.
If you're looking for something to support fairly full-featured self-signing, then this guide for 802.1x authentication from tldp.org recommends using the helper scripts for self-signing from FreeRADIUS. Or, if you just need quick-and-dirty, then Ron Bieber offers up his "brain-dead script" for self-signing on his blog at bieberlabs.com.
Of course there are many alternative scripts out there but this seems to give a good range of choices, and with a little additional info from the guide you should be able to tailor these to do whatever you need.
It's also worth checking the SSL Certificates HOWTO. It's quite old now (last updated 2002) but its content is still relevant: it explains how to use the CA Perl / Bash script provided with OpenSSL software.
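For reference, the heart of most of those scripts is a single OpenSSL invocation along these lines (file names and subject are illustrative):
# Generate a private key and a self-signed certificate in one step
openssl req -x509 -newkey rsa:2048 -nodes -keyout server.key -out server.crt -days 365 -subj "/CN=myapp.internal"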
A: I know you said you prefer the command line, but for others who are interested in this, TinyCA is a very easy to use GUI CA software. I have used this both in Linux, and also in OSX.
A: The XCA software appears reasonably well maintained (copyright 2012, uses Qt4), with a well-documented and simple enough user interface and has packages on debian, ubuntu and fedora.
Don't judge the website at first sight:
http://xca.sourceforge.net/
Rather, check this nice walkthrough to add a new CA:
http://xca.sourceforge.net/xca-14.html#ss14.1
You can see a screenshot of the application there: http://sourceforge.net/projects/xca/
It is GUI-based though, not command-line.
A: There's a simple webpage solution: https://www.ibm.com/developerworks/mydeveloperworks/blogs/soma/entry/a_pki_in_a_web_page10
A: I like to use the easy-rsa scripts provided with OpenVPN. This is a collection of command line tools used to create the PKI environment required for OpenVPN.
But with a slight change of the (also provided) openssl.cnf file you can create pretty much anything you want with it.
I use it for self-signing SSL server certificates, as well as with Bacula backup, and for creating private keys/CSRs for "real" certificates.
Just download the OpenVPN community edition source tarball and copy the easy-rsa folder to your Linux machine. You'll find lots of documentation on the OpenVPN community pages.
I used to use CAcert. It's also nice, but you have to create the CSR yourself, so you have to use openssl again, and the certs are only valid for half a year. This is annoying.
A: I created a wrapper script, written in Bash, for OpenSSL that might be useful to you here. To me, the easiest sources of user error when using OpenSSL were:
*
*Keeping a consistent and logical naming scheme for configuration/certs/keys so that I can see how every artifact fits into the entire PKI by just looking at the file name/extension
*Enforcing a folder structure that's consistent across all CA machines that use the script.
*Specifying too many configuration options via the CLI and losing track of some of the details
The strategy is to push all configuration into its own files, saving only the execution of a particular action for the CLI. The script also strongly enforces the use of a particular naming scheme for folders/files, which is helpful when looking at any single file.
Use/Fork/PR away! Hope it helps.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17781",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Default Internet connection on Dual LAN Workstation I know this is not programming directly, but it's regarding a development workstation I'm setting up.
I've got a Windows Server 2003 machine that needs to be on two LAN segments at the same time. One of them is a 10.17.x.x LAN and the other is 10.16.x.x
The problem is that I don't want to be using up the bandwidth on the 10.16.x.x network for internet traffic, etc. (this network is basically only for internal stuff, though it does have internet access), so I would like the system to use the 10.17.x.x connection for anything that is external to the LAN (and for anything on 10.17.x.x, of course), and to only use the 10.16.x.x connection for things that are on that specific LAN.
I've tried looking into the Windows "route" command, but it's fairly confusing and won't seem to let me delete routes that I believe are interfering with what I want it to do. Is there a better way of doing this? Any good software for segmenting your LAN access?
A: I'm no network expert but I have fiddled with the route command a number of times...
route add 0.0.0.0 MASK 0.0.0.0 <address of gateway on 10.17.x.x net>
Will route all default traffic through the 10.17.x.x gateway, if you find that it still routes through the other interface, you should make sure that the new rule has a lower metric than the existing routes. Do this by adding METRIC 1 for example to the end of the line above.
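That is (gateway address illustrative):
route add 0.0.0.0 MASK 0.0.0.0 10.17.0.1 METRIC 1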
You could also adjust the metric in the Advanced TCP/IP Settings window of the 10.17.x.x interface, unticking the Automatic Metric checkbox and setting the value to something low, like 1 or 2.
A: If you don't move your network cables around and can assign yourself a static IP address on the 10.16.x.x network, you can refrain from assigning a gateway address on that network. If there is no gateway, internet packets will not be routed on that interface.
If you use DHCP, configure a static record on the DHCP server that recognizes your MAC address and does not provide a gateway IP address.
As for using advanced windows routing, the route you are looking for is the 0.0.0.0 route (default route). The important number is the metric value, which is the cost for the route, where the lower metric tends to be used first. You can set the metric at the interface level directly in the GUI.
https://web.archive.org/web/1/http://articles.techrepublic%2ecom%2ecom/i/tr/cms/contentPics/tcpip-F.gif
I believe if you set the interface metric to a high value on the 10.16.x.x interface, it will not be used as a gateway.
Personally I use the method where I refrain from defining a gateway IP.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17785",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Warning C4341 - 'XX': signed value is out of range for enum constant When compiling my C++ .Net application I get 104 warnings of the type:
Warning C4341 - 'XX': signed value is out of range for enum constant
Where XX can be
*
*WCHAR
*LONG
*BIT
*BINARY
*GUID
*...
I can't seem to remove these warnings whatever I do. When I double-click on them, it takes me to a part of my code that uses OdbcParameters - and when I try a test project with all the rest of my stuff but no OdbcParameters, it doesn't give the warnings.
Any idea how I can get rid of these warnings? They're making real warnings from code I've actually written hard to see - and it just gives me a horrible feeling knowing my app has 104 warnings!
A: This is a compiler bug. Here's another post confirming it's a known issue. I've got the same issue in one of my projects and there's no way to prevent it from being triggered unless you have some way of avoiding the use of OdbcParameter. The most conservative way to suppress only the buggy warnings is to use
#pragma warning( push )
#pragma warning( disable: 4341 )
// code affected by bug
#pragma warning( pop )
A: In Visual Studio you can always disable specific warnings by going to:
Project settings -> C/C++ -> Advanced -> Disable Specific warnings: 4341
A: Either wait for a compiler fix or don't #include code that triggers it.
[A verbose way of saying you probably can't.]
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17786",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Getting the Remote Name Address (not IP) I wanted to show the user's name address (see www.ipchicken.com), but the only thing I can find is the IP address. I tried a reverse lookup, but that didn't work either:
IPAddress ip = IPAddress.Parse(this.lblIp.Text);
string hostName = Dns.GetHostByAddress(ip).HostName;
this.lblHost.Text = hostName;
But HostName is the same as the IP address.
Who know's what I need to do?
Thanks.
Gab.
A: Edit of my previous answer.
Try (in vb.net):
Dim sTmp As String
Dim ip As IPHostEntry
sTmp = MaskedTextBox1.Text
Dim ipAddr As IPAddress = IPAddress.Parse(sTmp)
ip = Dns.GetHostEntry(ipAddr)
MaskedTextBox2.Text = ip.HostName
Dns.Resolve appears to be obsolete in later versions of .Net. As stated here before, I believe the issue is caused by your IP address not having a fixed name or by it having multiple names. The example above works with Google addresses, but not with an address we use that has a couple of names associated with it.
A: You need the Dns.Resolve() method from System.Net
See this article
A: Stupid me... The code I posted was 100% valid and working... But 10 lines lower I replaced the this.lblHost.Text with another value, which happened to be the IP address.
Sorry.
A: Also remember that reverse lookup won't always give the same address as the one used in forward DNS lookup.
For example for google.com I get ip 64.233.167.99
but reverse dns lookup for that IP returns py-in-f99.google.com
A: Not all IP addresses need to have hostnames. I think that's what is happening in your case. Try it out with more well-known IP/hostname pairs, e.g.:
Name: google.com Address: 72.14.207.99
Name: google.com Address: 64.233.187.99
Name: google.com Address: 64.233.167.99
...I might just be wrong
A: A lot of users have the same shared IP address, so you will not be able to find their hostnames. And a lot of users won't necessarily have DNS records in public DNS for the IPs they are coming from as well.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17795",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Warning: Found conflicts between different versions of the same dependent assembly I am currently developing a .NET application, which consists of 20 projects. Some of those projects are compiled using .NET 3.5, some others are still .NET 2.0 projects (so far no problem).
The problem is that if I include an external component I always get the following warning:
Found conflicts between different versions of the same dependent assembly.
What exactly does this warning mean and is there maybe a possibility to exclude this warning (like using #pragma disable in the source code files)?
A: I just had this warning message and cleaned the solution and recompiled (Build -> Clean Solution) and it went away.
A: I had the same issue, and I resolved it by changing the following in web.config.
It happened to me because I am running the application using Newtonsoft.Json 4.0
From:
<dependentAssembly>
<assemblyIdentity name="Newtonsoft.Json" publicKeyToken="30ad4fe6b2a6aeed" culture="neutral" />
<bindingRedirect oldVersion="0.0.0.0-6.0.0.0" newVersion="6.0.0.0" />
</dependentAssembly>
To:
<dependentAssembly>
<assemblyIdentity name="Newtonsoft.Json" publicKeyToken="30ad4fe6b2a6aeed" culture="neutral" />
<bindingRedirect oldVersion="0.0.0.0-6.0.0.0" newVersion="4.5.0.0" />
</dependentAssembly>
A: Basically this happens when the assemblies you're referencing have "Copy Local" set to "True", meaning that a copy of the DLL is placed in the bin folder along with your exe.
Since Visual Studio will copy all of the dependencies of a referenced assembly as well, it's possible to end up with two different builds of the same assembly being referred to. This is more likely to happen if your projects are in separate solutions, and can therefore be compiled separately.
The way I've gotten around it is to set Copy Local to False for references in assembly projects. Only do it for executables/web applications where you need the assembly for the finished product to run.
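In the project file, Copy Local corresponds to the Private element on the reference, e.g. (assembly name and path illustrative):
<Reference Include="MySharedAssembly">
  <HintPath>..\lib\MySharedAssembly.dll</HintPath>
  <Private>False</Private>
</Reference>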
Hope that makes sense!
A: This warning means that two projects reference the same assembly (e.g. System.Windows.Forms) but the two projects require different versions. You have a few options:
*
*Recompile all projects to use the same versions (e.g. move all to .Net 3.5). This is the preferred option because all code is running with the versions of dependencies they were compiled with.
*Add a binding redirect. This will suppress the warning. However, your .Net 2.0 projects will (at runtime) be bound to the .Net 3.5 versions of dependent assemblies such as System.Windows.Forms. You can quickly add a binding redirect by double-clicking on the error in Visual Studio.
*Use CopyLocal=true. I'm not sure if this will suppress the warning. It will, like option 2 above, mean that all projects will use the .Net 3.5 version of System.Windows.Forms.
Here are a couple of ways to identify the offending reference(s):
*
*You can use a utility such as the one found at https://gist.github.com/1553265
*Another simple method is to set Build output verbosity (Tools, Options, Projects and Solutions, Build and Run, MSBuild project build output verbosity, Detailed) and after building, search the output window for the warning, and look at the text just above it. (Hat tip to pauloya, who suggested this in the comments on this answer.)
A: I wanted to post the solution that pauloya provided in the comments above, as I believe it is the best way of finding the offending references.
The simplest way to find the "offending reference(s)" is to set Build output verbosity (Tools, Options, Projects and Solutions, Build and Run, MSBuild project build output verbosity, Detailed) and, after building, search the output window for the warning. See the text just above it.
For example, when you search the output panel for "conflict" you may find something like this:
3> There was a conflict between "EntityFramework, Version=5.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" and "EntityFramework, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089".
3> "EntityFramework, Version=5.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" was chosen because it was primary and "EntityFramework, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" was not.
As you can see, there is a conflict between EF versions 5 and 6.
A: I have another way to do this if you're using Nuget to manage your dependencies. I've discovered that sometimes VS and Nuget don't match up and Nuget is unable to recognize that your projects are out of sync. The packages.config will say one thing but the path shown in References - Properties will indicate something else.
If you're willing to update your dependencies, do the following:
*
*From Solution Explorer, right click the Project and click 'Manage Nuget Packages'
*Select the 'Installed packages' tab in the left pane and record your installed packages. You may want to copy your packages.config to your desktop first if you have a lot, so you can cross-check it with Google to see which Nuget packages are installed.
*Uninstall your packages. It's OK, we're going to add them right back.
*Immediately install the packages you need. What Nuget will do is not only get you the latest version, but will alter your references, and also add the binding redirects for you.
*Do this for all of your projects.
*At the solution level, do a Clean and Rebuild.
You may want to start with the lower projects and work your way to the higher level ones, and rebuild each project as you go along.
If you don't want to update your dependencies, then you can use the package manager console, and use the syntax Update-Package -ProjectName [yourProjectName] [packageName] -Version [versionNumber]
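For example (package and project names illustrative):
Update-Package Newtonsoft.Json -ProjectName MyWebApp -Version 6.0.8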
A: In Visual Studio, if you right click on the solution and choose Manage NuGet Packages, there's a "Consolidate" tab which sets all the packages to the same version.
A: I had the same problem with one of my projects, however, none of the above helped to solve the warning. I checked the detailed build logfile, I used AsmSpy to verify that I used the correct versions for each project in the affected solution, I double checked the actual entries in each project file - nothing helped.
Eventually it turned out that the problem was a nested dependency of one of the references I had in one project. This reference (A) in turn required a different version of (B) which was referenced directly from all other projects in my solution. Updating the reference in the referenced project solved it.
Solution A
+--Project A
+--Reference A (version 1.1.0.0)
+--Reference B
+--Project B
+--Reference A (version 1.1.0.0)
+--Reference B
+--Reference C
+--Project C
+--Reference X (this indirectly references Reference A, but with e.g. version 1.1.1.0)
Solution B
+--Project A
+--Reference A (version 1.1.1.0)
I hope the above shows what I mean, took my a couple of hours to find out, so hopefully someone else will benefit as well.
A: This actually depends on your external component. When you reference an external component in a .NET application it generates a GUID to identify that component. This error occurs when the external component referenced by one of your projects has the same name but a different version than another such component in another assembly.
This sometimes happens when you use "Browse" to find references and add the wrong version of the assembly, or you have a different version of the component in your code repository as the one you installed in the local machine.
Do try to find which projects have these conflicts, remove the components from the reference list, then add them again making sure that you're pointing to the same file.
A: I just spent some time debugging the same issue. Note that the issue might not be between different projects, but actually between several references in one project that depend on different versions of the same dll/assembly. In my case, the issue was a FastMember.dll version mismatch that came from two different NuGet packages in a single project. When I was given the project, it would not compile because NuGet packages were missing and VS refused to restore the missing packages. Through the NuGet menu, I manually updated all the NuGets to the latest version, and that is when the warning appeared.
In Visual Studio, go to Tools > Options > Projects and Solutions > Build and Run and set MSBuild project build output verbosity to Diagnostic. Then look for the line(s) "There was a conflict between" in the Output window. Below is part of the output that I got:
1> There was a conflict between "FastMember, Version=1.5.0.0, Culture=neutral, PublicKeyToken=null" and "FastMember, Version=1.3.0.0, Culture=neutral, PublicKeyToken=null". (TaskId:19)
1> "FastMember, Version=1.5.0.0, Culture=neutral, PublicKeyToken=null" was chosen because it was primary and "FastMember, Version=1.3.0.0, Culture=neutral, PublicKeyToken=null" was not. (TaskId:19)
1> References which depend on "FastMember, Version=1.5.0.0, Culture=neutral, PublicKeyToken=null" [C:\Users\ksd3jvp\Source\Temp\AITool\Misra\AMSAITool\packages\FastMember.1.5.0\lib\net461\FastMember.dll]. (TaskId:19)
1> C:\Users\ksd3jvp\Source\Temp\AITool\Misra\AMSAITool\packages\FastMember.1.5.0\lib\net461\FastMember.dll (TaskId:19)
1> Project file item includes which caused reference "C:\Users\ksd3jvp\Source\Temp\AITool\Misra\AMSAITool\packages\FastMember.1.5.0\lib\net461\FastMember.dll". (TaskId:19)
1> FastMember, Version=1.5.0.0, Culture=neutral, processorArchitecture=MSIL (TaskId:19)
1> References which depend on "FastMember, Version=1.3.0.0, Culture=neutral, PublicKeyToken=null" []. (TaskId:19)
1> C:\Users\ksd3jvp\Source\Temp\AITool\Misra\AMSAITool\packages\ClosedXML.0.94.2\lib\net46\ClosedXML.dll (TaskId:19)
1> Project file item includes which caused reference "C:\Users\ksd3jvp\Source\Temp\AITool\Misra\AMSAITool\packages\ClosedXML.0.94.2\lib\net46\ClosedXML.dll". (TaskId:19)
1> ClosedXML, Version=0.94.2.0, Culture=neutral, processorArchitecture=MSIL (TaskId:19)
Notice the log line "Project file item includes which caused reference 'C:\Users\ksd3jvp\Source\Temp\AITool\Misra\AMSAITool\packages\ClosedXML.0.94.2\lib\net46\ClosedXML.dll'".
ClosedXML.dll comes from the ClosedXML NuGet package and it depends on FastMember.dll 1.3.0.0. On top of that, there is also the FastMember NuGet package in the project, and it has FastMember.dll 1.5.0.0. Mismatch!
I uninstalled the ClosedXML & FastMember NuGet packages (I had a binding redirect in place) and installed just the latest version of ClosedXML. That fixed the issue!
A: Also had this problem - in my case it was caused by having the "Specific Version" property on a number of references set to true. Changing this to false on those references resolved the issue.
A: => Check whether some instance of the application is partially installed.
=> If so, first uninstall that instance.
=> Then clean, rebuild, and try to deploy again.
This solved my issue. Hope it helps you too.
Best Regards.
A: If using NuGet all I had to do was:
*
*right click project and click Manage NuGet Packages..
*click the cog in top right
*click General tab in NuGet Package Manager above Package Sources
*check "Skip Applying binding redirects" in Binding Redirects
*Clean and rebuild and the warning's gone
Easy peasy
A: This happened to me too. One dll was referenced twice: once directly (in references) and once indirectly (referenced by another referenced project).
I removed direct reference, cleaned & rebuilt solution. Problem fixed.
A: *
*Open "Solution Explorer".
*Click on "Show all files"
*Expand "References"
*You'll see one (or more) reference(s) with a slightly different icon than the rest. Typically it is marked with a yellow box suggesting you take note of it. Just remove it.
*Add the reference back and compile your code.
*That's all.
In my case, there was a problem with MySQL reference. Somehow, I could list three versions of it under the list of all available references; for .net 2.0, .net 4.0 and .net 4.5. I followed process 1 through 6 above and it worked for me.
A: Another thing to consider and check: make sure you don't have any service running that's using that bin folder. If there is one, stop the service and rebuild the solution.
A: There seems to be a problem on Mac Visual Studio when editing .resx files.
I don't really know what happened, but I got this problem as soon as I edited some .resx files on my Mac.
I opened the project on Windows, opened the files, and they were as if they hadn't been edited.
So I edited them, saved, and everything started working again on Mac too.
A: I had this issue when my project had a reference to NETStandardLibrary and one of the referenced assemblies was published for netcore. Publishing it as netstandard made the problem go away.
A: Here's the solution, .NET Core 3.0 style:
https://github.com/HTD/ref-check
When you find what conflicts, you may be able to resolve the conflicts.
If the conflicting references are from other packages, you're either out of luck or you need to use sources instead.
In my case, the packages conflicting are often of my own, so I can fix dependency problems and republish them.
A: I had the same problem. In the project's 'obj' folder I renamed the folder 'Debug' to 'Debug_OLD' and rebuilt. A new 'Debug' folder was built automatically, and the problem went away.
A: After some hours of trying to analyze the detailed build log, I discovered that several of the projects in my solution were targeting different .Net versions. I changed them all to .Net 4.7.2 and rebuilt the solution and the error was resolved.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17806",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "350"
} |
Q: Recommendation for javascript form validation library Any recommendations for a javascript form validation library? I could try to roll my own (but I'm not very good at javascript). It needs to support checking for required fields, and preferably regexp validation of fields.
A: I am about to start implementing javascript validation in my forms using jQuery Validation.
I think that StackOverflow uses this jQuery plugin as well. It seems to be a very mature validation library; however, it does build on top of jQuery, so it might not fit for you.
Like Tom said, don't forget the server-side validation.
A: Personally I just rolled my own because it was much simpler to integrate with my error handling system and how I wanted it displayed on the site. 99% of the time you only care about a couple of things, required fields and comparing fields.
A: I've used this library for a couple of personal projects. It's pretty good, though I have had to make my own modifications to it a couple of times - nothing major, though, and it's easy enough to do so.
I'm sure you already do this, but validate all of your information on the server side as well. Client-side-only validation is rarely, if ever, a good idea.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17817",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: best practice for releasing Microsoft dll's in setup I'm working on a setup which wants to include the Microsoft.Web.Services3 (WSE 3.0) DLL. However, I typically do not like including Microsoft DLL's in our installs except by way of Microsoft's redistributables. There is both a developer and a redist install package available from Microsoft.
So, as a best practice, should I include the single DLL in my install or refer them to one of the WSE 3.0 installs (assuming they do not already have it installed)?
A: Usually, redistributing any of Microsoft DLLs outside of the redistributable package is forbidden by their EULA, so you might first want to check the appropriate EULA for that DLL.
Generally, I would prefer the redist package since that makes sure that it's correctly "registered" into the system, i.e. if you install a newer version of the redist it gets updated (like DirectX) or not overwritten if it's an older version (also like DirectX).
A: Check in the installer if WSE 3.0 is installed; if it isn't, alert the person and cancel the install, and if it is, continue normally. I wouldn't include the DLL in your setup package, because it could get outdated pretty fast, and I don't know if the EULA will allow it.
A: I believe the MS EULA prevents you from redistributing MS code unless it's in a redistributable package.
A proper redistributable should handle any other prerequisites, so it's probably the better choice anyway.
A: If you don't include it you should at the very least link to it directly on your site or have your installer open the web browser to it (or even download it automatically). Or better yet, include the redistributable in your software package.
However, if the DLL is not very large and you suspect that few users will have it, in the interest of a better user experience I would prepackage it in the default installer. That said, you can always offer an installer that does not include it for those who want a smaller download... a great deal of other vendors do this all the time.
A: Thanks for the suggestions/comments! After wrestling with windows installer setup I figured out the best way to include the WSE30 redist and pop up a dialog if it is not installed.
I'm aware of it not being best practice (and against Microsoft's EULA as mentioned) to simply include the DLL, which is why I thought it strange that it was trying to include the WSE DLL outside of the redist, especially when the redist is registered with the installer (it shows up as a pre-req under properties).
Thanks again.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17825",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How can I learn about parser combinators? I've found a few resources on the subject, but they all require a deep understanding of SmallTalk or Haskell, neither of which I know.
A: Here are some parser combinator libraries in more mainstream languages:
*
*Spirit (C++)
*Jparsec (Java)
A: There are some great articles on the web describing parser combinators in C#, but no maintainable source repository, so I've created one at:
http://code.google.com/p/sprache/
Someone knowledgeable about parser combinators could probably do a lot to improve it (please step forward if this sounds like you :))
A: If you know Python, there's PyParsing.
A: For me this paper was extremely useful. It is almost language-neutral; only in a few small places do they refer to Gofer.
A: I found an interesting article about implementing a parser combinator in C#. It also references some more general papers on the subject.
The Wikipedia article on the subject also has a general explanation of the concept.
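If it helps to see the core idea in code, here is a tiny hand-rolled sketch in Python (purely illustrative, not taken from any of the libraries linked here): a parser is a function from an input string to either a (value, remainder) pair or None, and combinators build bigger parsers out of smaller ones.
# char(c) parses a single literal character.
def char(c):
    def parse(s):
        return (c, s[1:]) if s.startswith(c) else None
    return parse

# seq(p1, p2) runs p1, then p2 on whatever input remains.
def seq(p1, p2):
    def parse(s):
        r1 = p1(s)
        if r1 is None:
            return None
        v1, rest = r1
        r2 = p2(rest)
        if r2 is None:
            return None
        v2, rest2 = r2
        return ((v1, v2), rest2)
    return parse

# alt(p1, p2) tries p1 and falls back to p2.
def alt(p1, p2):
    def parse(s):
        return p1(s) or p2(s)
    return parse

ab = seq(char('a'), char('b'))
print(ab('abc'))                          # (('a', 'b'), 'c')
print(alt(char('x'), char('a'))('abc'))   # ('a', 'bc')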
A: Chris Double wrote a parser combinator library in JavaScript.
A: I wrote 8 longish blog entries on monadic parser combinators in C# and F#; see here for the first one.
See also FParsec (Parsec for F#)
A: Cay Horstmann has 4 combinator parser lectures in Scala, with exercises. There is an example of parsing external DSLs in Scala here.
A: Here is a link to a talk (slides and script) on monadic parser combinators in C++.
A: http://www.codecommit.com/blog/scala/the-magic-behind-parser-combinators
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17840",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: Performance comparison of RDF storage vs traditional database Has anyone experimented with an RDF storage solution like Sesame? I'm looking for performance reviews of this kind of solution compared to a traditional database solution.
A: I've used Sesame extensively in my projects at work. I've found it to be speedy and reliable enough for most situations I find myself in. It has definitely outperformed Jena's storage solutions on a variety of fronts. Sesame 1.x has faster query performance than the 2.x version, but the 2.x version has some nice features such as contexts and SPARQL support.
If you are looking to use a traditional relational database, you could look at something like D2RQ, or something like Owlgres (if you want inferencing).
A: There are plenty of scalability reports and benchmarks on the web about various triple stores.
Here is a fine scalability report.
W3C itself maintain a wiki with lots of information about Large Triplestores and Benchmarks.
Follow these 3 links and take some time to read them. They're very informative. :)
A: One intuition is that if you have a very large number of entities, tuple stores can save you the trouble of having your indexes routinely knocked out of memory as you switch between tables, and instead always keep the first couple of levels of the tuple index in RAM.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17845",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Select ..... where .... OR Is there a way to select data where any one of multiple conditions occur on the same field?
Example: I would typically write a statement such as:
select * from TABLE where field = 1 or field = 2 or field = 3
Is there a way to instead say something like:
select * from TABLE where field = 1 || 2 || 3
Any help is appreciated.
A: OR:
SELECT foo FROM bar WHERE baz BETWEEN 1 AND 3
A: Sure thing, the simplest way is this:
select foo from bar where baz in (1,2,3)
A: select * from TABLE where field in (1, 2, 3)
A: WHERE field IN (1, 2, 3)
A: select * from TABLE where field IN (1,2,3)
You can also conveniently combine this with a subquery that only returns one field:
select * from TABLE where field IN (SELECT boom FROM anotherTable)
A: You can still use in for
select *
from table
where field = '1' or field = '2' or field = '3'
it's just
select * from table where field in ('1','2','3')
A: While in is a shortcut for or, I wasn't sure how to combine in with and, so I did it this way:
SELECT * FROM table
WHERE column1='x' AND (column2='y' OR column2='z');
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17870",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: How to encrypt connection string in WinForms 1.1 app.config? Just looking for the first step basic solution here that keeps the honest people out.
Thanks,
Mike
A: This might help you along the way:
http://msdn.microsoft.com/en-us/library/aa302403.aspx
http://msdn.microsoft.com/en-us/library/aa302406.aspx
The articles are aimed at ASP.NET but the principles are the same.
A: The second piece of the puzzle is detecting an unencrypted connection string, encrypting it, and writing it back out to the config file. Writing to config files located in your exe dir is generally a very bad idea, but can be very useful during development. The pros and cons are very well described here. Be sure to read all the comments.
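For the encryption step itself, DPAPI is the usual first stop. Here is a minimal sketch; note that ProtectedData ships with .NET 2.0 and later, so on 1.1 you would have to P/Invoke CryptProtectData instead:
using System;
using System.Security.Cryptography;
using System.Text;

class ConnectionStringProtector
{
    // Encrypts with a key derived from the current Windows user profile.
    public static string Encrypt(string plain)
    {
        byte[] data = Encoding.UTF8.GetBytes(plain);
        byte[] cipher = ProtectedData.Protect(data, null, DataProtectionScope.CurrentUser);
        return Convert.ToBase64String(cipher);
    }

    public static string Decrypt(string encrypted)
    {
        byte[] cipher = Convert.FromBase64String(encrypted);
        byte[] data = ProtectedData.Unprotect(cipher, null, DataProtectionScope.CurrentUser);
        return Encoding.UTF8.GetString(data);
    }
}
Remember this only keeps the honest people out: anyone running code as the same user can decrypt it again, which matches the level of protection you asked for.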
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17877",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Listen for events in another application Suppose I have two applications written in C#. The first is a third party application that raises an event called "OnEmailSent".
The second is a custom app that I've written that I would like to somehow subscribe to the "OnEmailSent" event of the first application.
Is there any way that I could somehow attach the second application to an instance of the first application to listen for "OnEmailSent" event?
So for further clarification, my specific scenario is that we have a custom third party application written in c# that raises an "OnEmailSent" event. We can see the event exists using reflector.
What we want to do is have some other actions take place when this component sends an email.
The most efficient way we can think of would be to be able to use some form of IPC as anders has suggested and listen for the OnEmailSent event being raised by the third party component.
Because the component is written in C#, we are toying with the idea of writing another C# application that can attach itself to the executing process, and when it detects that the OnEmailSent event has been raised, it will execute its own event-handling code.
I might be missing something, but from what I understand of how remoting works is that there would need to be a server defining some sort of contract that the client can subscribe to.
I was more thinking about a scenario where someone has written a standalone application like outlook for example, that exposes events that I would like to subscribe to from another application.
I guess the scenario I'm thinking of is the .net debugger and how it can attach to executing assemblies to inspect the code whilst it's running.
A: You can try Managed Spy, and for programmatic access, ManagedSpyLib:
ManagedSpyLib introduces a class called ControlProxy. A ControlProxy is a representation of a System.Windows.Forms.Control in another process. ControlProxy allows you to get or set properties and subscribe to events as if you were running inside the destination process. Use ManagedSpyLib for automation testing, event logging for compatibility, cross-process communication, or whitebox testing.
But this might not work for you; it depends on whether ControlProxy can somehow access the event you're after within your third-party application.
You could also use Reflexil
Reflexil allows IL modifications by using the powerful Mono.Cecil library written by Jb Evain. Reflexil runs as a Reflector plug-in and is directed especially towards IL code handling. It accomplishes this by proposing a complete instruction editor and by allowing C#/VB.NET code injection.
A: In order for two applications (separate processes) to exchange events, they must agree on how these events are communicated. There are many different ways of doing this, and exactly which method to use may depend on architecture and context. The general term for this kind of information exchange between processes is Inter-process Communication (IPC). There exist many standard ways of doing IPC, the most common being files, pipes, (network) sockets, remote procedure calls (RPC) and shared memory. On Windows it's also common to use window messages.
I am not sure how this works for .NET/C# applications on Windows, but in native Win32 applications you can hook on to the message loop of external processes and "spy" on the messages they are sending. If your program generates a message event when the desired function is called, this could be a way to detect it.
If you are implementing both applications yourself you can chose to use any IPC method you prefer. Network sockets and higher-level socket-based protocols like HTTP, XML-RPC and SOAP are very popular these days, as they allow you do run the applications on different physical machines as well (given that they are connected via a network).
A: You can either use remoting or WCF. See http://msdn.microsoft.com/en-us/library/aa730857(VS.80).aspx#netremotewcf_topic7.
A: What's the nature of that OnEmailSent event from that third party application? I mean, how do you know the application is triggering such an event?
If you are planning on doing interprocess communication, the first question you should ask yourself is: Is it really necessary?
Without questioning your motives, if you really need to do interprocess communication, you will need some sort of mechanism. The list is long, very long. From simple WM_COPYDATA messages to custom TCP protocols to very complex Web services requiring additional infrastructure.
This brings the question, what is it you are trying to do exactly? What is this third party application you have no control over?
Also, the debugger has a very invasive way of debugging processes. Don't expect that to be the standard interprocess mechanism used by all other applications. As a matter of fact, it isn't.
A: You can implement a similar scenario with SQL Server 2005 query change notifications by maintaining a persistent SqlConnection with a .NET application that blocks until data changes in the database.
See http://www.code-magazine.com/article.aspx?quickid=0605061.
A: also WM_COPYDATA might be possible, see https://social.msdn.microsoft.com/Forums/en-US/eb5dab00-b596-49ad-92b0-b8dee90e24c8/wmcopydata-event-to-receive-data-in-form-application?forum=winforms
I'm using it for similar Purose (to notify that options have been changed)
In our C++/Cli-scenario (MFC-)programs communicate vith WM_COPYDATA with Information-String in COPYDATASTRUCT-Member lpData
(Parameterlist like "Caller=xyz Receiver=abc Job=dosomething"). also a C#-App can receive WM_COPYDATA-messages as shown in the link. Sending WM_COPYDATA from C# (to known Mainframe-Handle) is done by a cpp/cli-Assembly, (I didnt proove how sending WMCOPYDATA can bei done in C#).
PS in Cpp/Cli we send AfxGetMainWnd()->m_hWnd as WPARAM of WMCOPYDATA-Message and in C# (WndProc) m.WParam can be used as adress to send WM_COPYDATA
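For reference, a minimal sketch of the receiving side in C# (the struct layout follows the Win32 COPYDATASTRUCT; the payload format is just the example string above):
using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

public class ReceiverForm : Form
{
    private const int WM_COPYDATA = 0x004A;

    [StructLayout(LayoutKind.Sequential)]
    private struct COPYDATASTRUCT
    {
        public IntPtr dwData;
        public int cbData;    // size of the payload in bytes
        public IntPtr lpData; // pointer to the payload
    }

    protected override void WndProc(ref Message m)
    {
        if (m.Msg == WM_COPYDATA)
        {
            COPYDATASTRUCT cds = (COPYDATASTRUCT)Marshal.PtrToStructure(m.LParam, typeof(COPYDATASTRUCT));
            // Assumes the sender wrote an ANSI string, e.g. "Caller=xyz Receiver=abc Job=dosomething"
            string payload = Marshal.PtrToStringAnsi(cds.lpData, cds.cbData);
            Console.WriteLine(payload);
        }
        base.WndProc(ref m);
    }
}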
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17878",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
} |
Q: Mac iWork/Pages Automation There is a rich scripting model for Microsoft Office, but not so with Apple iWork, and specifically the word processor Pages. While there are some AppleScript hooks, it looks like the best approach is to manipulate the underlying XML data.
This turns out to be pretty ugly because (for example) page breaks are stored in XML. So for example, you have something like:
... we hold these truths to be self evident, that </page>
<page>all men are created equal, and are ...
So if you want to add or remove text, you have to move the start/end tags around based on the size of the text on the page. This is pretty impossible without computing the number of words a page can hold, which seems wildly inelegant.
Anybody have any thoughts on this?
A: I'd suggest that modifying the underlying XML file is "considered harmful". Especially if you haven't checked to see if the document is open!
I've had a quick look at the Scripting Dictionary for Pages, and it seems pretty comprehensive; here is part of one entry:
document n [inh. document > item; see also Standard Suite] : A Pages document.
elements
contains captured pages, character styles, charts, graphics, images, lines, list styles, pages, paragraph styles, sections, shapes, tables, text boxes.
properties
body text (text) : The main text flow of the document.
bottom margin (real) : The bottom margin of the publication.
facing pages (boolean) : Whether or not the view is set to facing pages.
footer margin (real) : The footer margin of the publication.
header margin (real) : The header margin of the publication.
id (integer, r/o) : The unique identifier of the document.
...
So, I guess I'd want to know what it is that you want to do that you can't do with AppleScript?
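For instance, going by the body text property in the dictionary entry quoted above, a trivial (untested) script might look like:
tell application "Pages"
    tell document 1
        -- body text is the main text flow, per the dictionary entry
        set body text to body text & " Appended via AppleScript."
    end tell
end tell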
A: The latest version of iWork '09 includes very comprehensive, although not complete, AppleScript hooks, especially for Pages. The use of AppleScript should be much safer and more stable than modifying the underlying file.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17880",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: What's the best way to distribute python command-line tools? My current setup.py script works okay, but it installs tvnamer.py (the tool) as tvnamer.py into site-packages or somewhere similar..
Can I make setup.py install tvnamer.py as tvnamer, and/or is there a better way of installing command-line applications?
A: Try the entry_points.console_scripts parameter in the setup() call. As described in the setuptools docs, this should do what I think you want.
To reproduce here:
from setuptools import setup
setup(
# other arguments here...
entry_points = {
'console_scripts': [
'foo = package.module:func',
'bar = othermodule:somefunc',
],
}
)
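For the tvnamer case specifically, the mapping would presumably look something like this (assuming tvnamer.py exposes a main() function to act as the entry point):
entry_points = {
    'console_scripts': [
        # installs a 'tvnamer' executable that invokes tvnamer.main()
        'tvnamer = tvnamer:main',
    ],
}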
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17893",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "43"
} |
Q: Enabling a button in WPF depending on ListBox.SelectedIndex I have a rather classic UI situation - two ListBoxes named SelectedItems and AvailableItems - the idea being that the items you have already selected live in SelectedItems, while the items that are available for adding to SelectedItems (i.e. every item that isn't already in there) live in AvailableItems.
Also, I have the < and > buttons to move the current selection from one list to the other (in addition to double clicking, which works fine).
Is it possible in WPF to set up a style/trigger to enable or disable the move buttons depending on anything being selected in either ListBox? SelectedItems is on the left side, so the < button will move the selected AvailableItems to that list. However, if no items are selected (AvailableItems.SelectedIndex == -1), I want this button to be disabled (IsEnabled == false) - and the other way around for the other list/button.
Is this possible to do directly in XAML, or do I need to create complex logic in the codebehind to handle it?
A: Less code solution:
<Button Name="button1" IsEnabled="{Binding ElementName=listBox1, Path=SelectedItems.Count}" />
If count is 0 that seems to map to false, > 0 to true.
A: Here's your solution.
<Button Name="btn1" >click me
<Button.Style>
<Style>
<Style.Triggers>
<DataTrigger
Binding ="{Binding ElementName=list1, Path=SelectedIndex}"
Value="-1">
<Setter Property="Button.IsEnabled" Value="false"/>
</DataTrigger>
</Style.Triggers>
</Style>
</Button.Style>
</Button>
A: <Button IsEnabled="{Binding SelectedValue, ElementName=ListName, Mode=OneWay, TargetNullValue=0}" >Remove</Button>
that worked in my case
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17906",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34"
} |
Q: Best Practices for AS3 XML Parsing I've been having some trouble parsing various types of XML within Flash (specifically FeedBurner RSS files and YouTube Data API responses). I'm using a URLLoader to load an XML file, and upon Event.COMPLETE creating a new XML object. 75% of the time this works fine, and every now and again I get this type of exception:
TypeError: Error #1085: The element type "link" must be terminated by the matching end-tag "</link>".
We think the problem is that the XML is large, and perhaps the Event.COMPLETE event is fired before the XML is actually downloaded by the URLLoader. The only solution we have come up with is to set off a timer upon the event, and essentially "wait a few seconds" before beginning to parse the data. Surely this can't be the best way to do this.
Is there any surefire way to parse XML within Flash?
Update Sept 2 2008 We have concluded the following: the exception is fired in the code at this point:
data = new XML(mainXMLLoader.data);
// calculate the total number of entries.
for each (var i in data.channel.item){
_totalEntries++;
}
I have placed a try/catch statement around this part, and am currently displaying an error message on screen when it occurs. My question is how would an incomplete file get to this point if the bytesLoaded == bytesTotal?
I have updated the original question with a status report; I guess another question could be: is there a way to determine whether or not an XML object is properly parsed before accessing the data (in case the error is that my loop counting the number of objects is starting before the XML is actually parsed into the object)?
@Theo: Thanks for the ignoreWhitespace tip. Also, we have determined that the event is called before it's ready (we did some tests tracing mainXMLLoader.bytesLoaded + "/" + mainXMLLoader.bytesTotal
A: Have you tried checking that the bytes loaded are the same as the total bytes?
URLLoader.bytesLoaded == URLLoader.bytesTotal
That should tell you if the file has finished loading. It won't help with the complete event firing too early, but it should tell you if it's a problem with the XML being read.
I am unsure if it will work across domains, as my XML is always on the same site.
A: The concerning thing to me is that it might be firing Event.COMPLETE before it's finished loading, and that makes me wonder whether or not the load is timing out.
How often does the problem happen? Can you have success one moment, then failure the very next with the same feed?
For testing purposes, try tracing the URLLoader.bytesLoaded and the URLLoader.bytesTotal at the top of your Event.COMPLETE handler method. If they don't match, you know that the event is firing prematurely. If this is the case, you can listen for the URLLoader's progress event. Check the bytesLoaded against the bytesTotal in your handler and only parse the XML once the loading is truly complete. Granted, this is very likely akin to what the URLLoader is doing before it fires Event.COMPLETE, but if that's broken, you can try rolling your own.
Please let us know what you find out. And if you could, please paste in some source code. We might be able to spot something of note.
A: Just a side note, this statement has no effect:
XML.ignoreWhitespace;
because ignoreWhitespace is a property. You have to set it to true like this:
XML.ignoreWhitespace = true;
A: As you mentioned in your question, the problem is very likely that your program is looking at the XML before it has actually been completely downloaded. I don't know that there's a surefire way to "parse" the XML, because the parsing portion of your code is more than likely fine; it's simply a matter of whether or not it has actually downloaded.
You could try to use the ProgressEvent.PROGRESS event to continually monitor the XML as it downloads and then as Re0sless suggested, check the bytesLoaded vs the bytesTotal and have your XML parse begin when the two numbers are equal instead of using the Event.COMPLETE event.
You should be able to get the bytesLoaded and bytesTotal numbers just fine regardless of domains; if you can access the file, you can access its byte information.
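A rough sketch of that approach (the names here are illustrative):
loader.addEventListener(ProgressEvent.PROGRESS, onProgress);

function onProgress(e:ProgressEvent):void {
    // only parse once every byte has actually arrived
    if (e.bytesTotal > 0 && e.bytesLoaded == e.bytesTotal) {
        var xml:XML = new XML(URLLoader(e.target).data);
        // safe to walk the XML here
    }
}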
A: If you could post some more code we might be able to find the issue.
Another thing to test (besides tracing bytesTotal) is to trace the data property of the loader in the Event.COMPLETE handler, just to see if the XML data was actually loaded correctly, for example check that there is a </link> there.
A: @Brian Warshaw: This issue happens only about 10-20% of the time. Sometimes it hiccups and simply reloading the app will work fine, other times I will spend half an hour reloading the app over and over again to no avail.
This is the original code (when I asked the question):
public class BlogReader extends MovieClip {
public static const DOWNLOAD_ERROR:String = "Download_Error";
public static const FEED_PARSED:String = "Feed_Parsed";
private var mainXMLLoader:URLLoader = new URLLoader();
public var data:XML;
private var _totalEntries:Number = 0;
public function BlogReader(url:String){
mainXMLLoader.addEventListener(Event.COMPLETE, LoadList);
mainXMLLoader.addEventListener(IOErrorEvent.IO_ERROR, errorCatch);
mainXMLLoader.load(new URLRequest(url));
XML.ignoreWhitespace;
}
private function errorCatch(e:IOErrorEvent){
trace("Oh noes! Yous gots no internets!");
dispatchEvent(new Event(DOWNLOAD_ERROR));
}
private function LoadList(e:Event):void {
data = new XML(e.target.data);
// calculate the total number of entries.
for each (var i in data.channel.item){
_totalEntries++;
}
dispatchEvent(new Event(FEED_PARSED));
}
}
And this is the code that I wrote based on Re0sless' original reply (similar to some suggestions mentioned):
public class BlogReader extends MovieClip {
public static const DOWNLOAD_ERROR:String = "Download_Error";
public static const FEED_PARSED:String = "Feed_Parsed";
private var mainXMLLoader:URLLoader = new URLLoader();
public var data:XML;
protected var _totalEntries:Number = 0;
public function BlogReader(url:String){
mainXMLLoader.addEventListener(Event.COMPLETE, LoadList);
mainXMLLoader.addEventListener(IOErrorEvent.IO_ERROR, errorCatch);
mainXMLLoader.load(new URLRequest(url));
XML.ignoreWhitespace;
}
private function errorCatch(e:IOErrorEvent){
trace("Oh noes! Yous gots no internets!");
dispatchEvent(e);
}
private function LoadList(e:Event):void {
isDownloadComplete();
}
private function isDownloadComplete() {
trace (mainXMLLoader.bytesLoaded + "/" + mainXMLLoader.bytesTotal);
if (mainXMLLoader.bytesLoaded == mainXMLLoader.bytesTotal){
trace ("xml fully loaded");
data = new XML(mainXMLLoader.data);
// calculate the total number of entries.
for each (var i in data.channel.item){
_totalEntries++;
}
dispatchEvent(new Event(FEED_PARSED));
} else {
trace ("xml not fully loaded, starting timer");
var t:Timer = new Timer(300, 1);
t.addEventListener(TimerEvent.TIMER_COMPLETE, loaded);
t.start();
}
}
private function loaded(e:TimerEvent){
trace ("timer finished, trying again");
e.target.removeEventListener(TimerEvent.TIMER_COMPLETE, loaded);
e.target.stop();
isDownloadComplete();
}
}
I'll point out that since adding the code determining if mainXMLLoader.bytesLoaded == mainXMLLoader.bytesTotal I have not had an issue - that said, this bug is hard to reproduce, so for all I know I haven't fixed anything, and instead just added useless code.
A: The Event.COMPLETE handler really shouldn't be called unless the loader has fully loaded; it makes no sense otherwise. Have you confirmed that it is in fact not fully loaded (by looking at the bytesLoaded vs. bytesTotal values that you trace)? If the Event.COMPLETE event is dispatched before bytesLoaded == bytesTotal, that is a bug.
Good that you've got it working with the timer, but it is very odd that you need it.
A: I suggest that you file a bug report at https://bugs.adobe.com/flashplayer/, because the event really shouldn't fire before all the bytes are loaded. In the meantime I guess you have to live with the timer. You might be able to do the same by listening to the progress event instead; that could perhaps save you from having to handle the timer yourself.
A: You could add a unique element at the very end of your XML document that has one attribute "value" equal to "true":
//The XML
//Flash ignores the line that specifies the XML version and encoding, so I have omitted it here as well.
<parent>
<child name="child1" />
<child name="child2" />
<child name="child3" />
<child name="child4" />
<documentEnd value="true" />
</parent>
//Sorry about the spacing, but it is difficult to get XML to show.
//Flash
var loader:URLLoader = new URLLoader();
var request:URLRequest = new URLRequest('pathToXML/xmlFileName.xml');
var xml:XML;
//Event Listener with weak reference set to true (5th parameter);
//The above comment does not define a required practice, this is to aid with garbage collection.
loader.addEventListener(Event.COMPLETE, onXMLLoadComplete, false, 0, true);
loader.load(request);
function onXMLLoadComplete(e:Event):void
{
xml = new XML(e.target.data);
//Now we check the last element (child) to see if it is documentEnd.
if(xml[xml.length()-1].documentEnd.@value == "true")
{
trace("Woot, it seems your xml made it!");
}
else
{
//Attempt the load again because it seems it failed when it was unable to find documentEnd in the XML Object.
loader.load(request);
}
}
I hope that this helps you for now, but the real hope is that enough people let Adobe know about this issue. It is a sad thing to not be able to rely on events. I must say though, from what I have heard about XML, it is not very optimal at a large scale, and I believe this is when you require something like AMFPHP to serialize the data.
Hope this helps! Remember the idea here is that we know what the very last child/element in the XML is because we set it! There is no reason that we shouldn't be able to access the last child/element, but if we cannot, we must assume that the XML was not indeed complete and we force it to load again.
A: Sometimes the RSS server page can fail to spit out correct and valid XML data, especially if you're constantly hitting it, so it may not be your fault. Have you tried hitting the page in a web browser (preferably with an XML validator plugin) to check that the server response is always valid?
The only other thing that I can see here is the line:
xml = new XML(event.target.data);
//the data should already be XML, so only casting is necessary
xml = XML(event.target.data);
Have you also tried setting the URLLoader dataFormat to URLLoaderDataFormat.TEXT, adding a Pragma: no-cache URL header, and/or adding a cache buster to the URL?
Just some suggestions...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17911",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Any recommendations for lightweight .net Win Forms HTML renderer controls? Trying to avoid the .net WebBrowser control (I don't need to navigate to a url, print rendered html or any of the other inbuilt goodies). Wrapping the IE dll seems a bit heavyweight.
I simply require something that can display basic html marked up text - an html equivalent of RichTextBox in effect. Anyone have any experiences / recommendations / war stories?
A: I developed this HTML control for .NET, which does what you were asking: i.e. display basic html marked up text.
It doesn't use IE or any other unmanaged code (except for the .NET framework itself).
A: Lutz Roeder (of Reflector fame) has a WYSIWYG HTML editor in .NET on his site here: http://www.lutzroeder.com/dotnet/. Check out the download called "writer". I haven't used it myself, but it was the first thing that popped into my mind.
A: While it takes a bit of effort, you can disable almost all of the 'extra' functionality of the built-in WebBrowser control.
If the built-in web browser provides all the functionality you need, why look elsewhere?
A: J. Menendez Poo's fully managed HTML renderer isn't complete, but it's by far the best I've found.
I still have to try it in depth, but it looks a lot more promising than the other alternative:
*
*Bruce Shankle's Anole
That's assuming you don't actually need the editing capabilities of Lutz Roeder's Writer.
A: Might want to take a look at Awesomium. I've had success with it in .net apps.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17922",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Using an ocx in a console application I want to quickly test an OCX. How do I drop that OCX into a console application? I have found some tutorials on CodeProject, but they are incomplete.
A: Isn't an OCX an ActiveX User Control? (something that you put onto a form for the user to interact with)?
The easiest way I know of to test COM/ActiveX stuff is to use Excel. (Yes, I know it sounds dumb; bear with me.)
*
*Run Excel, create a new file if it hasn't done this for you
*Press Alt+F11 to launch the Visual Basic Editor (if you have Excel 2007 it's on the 'Developer' ribbon tab)
Now that you're in happy visual basic land...
*
*From the Tools menu, select References
*Select your OCX/COM object from the list, or click Browse... to find the file if it's not registered with COM - You may be able to skip this step if your OCX is already registered.
*From the Insert menu, select UserForm
*In the floating Toolbox window, right click and select Additional Controls
*Find your OCX in the list and tick it
*You can then drag your OCX from the toolbox onto the userform
*From the Run menu, run it.
*Test your OCX and play around with it.
*SAVE THE EXCEL FILE so you don't have to repeat these steps every time.
A: Sure, it's pretty easy. Here's a fun app I threw together. I'm assuming you have Visual C++.
Save to test.cpp and compile: cl.exe /EHsc test.cpp
To test with your OCX you'll need to either #import the typelib and use its CLSID (or just hard-code the CLSID) in the CoCreateInstance call. Using #import will also help define any custom interfaces you might need.
#include "windows.h"
#include "shobjidl.h"
#include "atlbase.h"
//
// compile with: cl /EHsc test.cpp
//
// A fun little program to demonstrate creating an OCX.
// (CLSID_TaskbarList in this case)
//
BOOL CALLBACK RemoveFromTaskbarProc( HWND hwnd, LPARAM lParam )
{
ITaskbarList* ptbl = (ITaskbarList*)lParam;
ptbl->DeleteTab(hwnd);
return TRUE;
}
void HideTaskWindows(ITaskbarList* ptbl)
{
EnumWindows( RemoveFromTaskbarProc, (LPARAM) ptbl);
}
// ============
BOOL CALLBACK AddToTaskbarProc( HWND hwnd, LPARAM lParam )
{
ITaskbarList* ptbl = (ITaskbarList*)lParam;
ptbl->AddTab(hwnd);
return TRUE;// continue enumerating
}
void ShowTaskWindows(ITaskbarList* ptbl)
{
if (!EnumWindows( AddToTaskbarProc, (LPARAM) ptbl))
throw "Unable to enum windows in ShowTaskWindows";
}
// ============
int main(int, char**)
{
CoInitialize(0);
try {
CComPtr<IUnknown> pUnk;
if (FAILED(CoCreateInstance(CLSID_TaskbarList, NULL, CLSCTX_INPROC_SERVER|CLSCTX_LOCAL_SERVER, IID_IUnknown, (void**) &pUnk)))
throw "Unabled to create CLSID_TaskbarList";
// Do something with the object...
CComQIPtr<ITaskbarList> ptbl = pUnk;
if (ptbl)
ptbl->HrInit();
HideTaskWindows(ptbl);
MessageBox( GetDesktopWindow(), _T("Check out the task bar!"), _T("StackOverflow FTW"), MB_OK);
ShowTaskWindows(ptbl);
}
catch( const TCHAR * msg ) {
MessageBox( GetDesktopWindow(), msg, _T("Error"), MB_OK);
}
CoUninitialize();
return 0;
}
A: @orion that's so cool. Never thought of it that way.
Well @jschroedl, that was fun indeed.
Testing an ActiveX control in a console app is fun. But I think it's not worth going down that path. You can call the methods and get or set the properties either the way @jschroedl explained, or you can call the IDispatch object through the Invoke function.
The first step is to call GetIDsOfNames, then call the function through Invoke; the parameters to the function should be an array of VARIANTs in Invoke's formal parameter list.
All is fine and dandy. But once you get to events, it's downhill from there. A Windows application requires a message pump to fire events; on a console you don't have one. I went down the path of implementing an EventNotifier for the events, just like you'd implement a callback interface in the classic C++ way. But the events don't get to your implemented interface.
I am pretty sure this cannot be done in a console application. But I am really hoping someone out there will have a different take on events in a console application.
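For context, the message pump being referred to is the standard Win32 GetMessage/DispatchMessage loop that windowed apps run implicitly; STA COM events are delivered to the subscribing thread through window messages, so without a loop like this sketch they never arrive:
MSG msg;
while (GetMessage(&msg, NULL, 0, 0) > 0)
{
    TranslateMessage(&msg);
    DispatchMessage(&msg); // COM event callbacks are dispatched from here
}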
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17928",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: When do you use sIFR? I heard Joel and Jeff talking about sIFR in one of the early podcasts. I've been using it on www.american-data.com and www.chartright.us with some fairly mixed results.
Yesterday I was informed that the first line of text on my website appeared upside down in Internet Explorer 6 without flash player. I'm pretty sure that assessment was wrong, owing to no flash player = no sIFR. But I'm getting some odd behavior on my pages, at least in IE 6, 7 and 8. I only really wanted to use sIFR because my fonts looked crummy on my computer in Firefox.
My question is: if you use sIFR, when do you use sIFR? In which cases do you disable sIFR? When is it better to just use the browser font?
A: You use sIFR moderately, say for headlines. Try not to use it for links, because links in Flash don't work as well as normal HTML links. It also makes little sense to use sIFR only for text that never changes; an image would work a lot better.
I haven't heard about the upside-down problem in a few years now, but in any case, that's an issue with IE 6 and (an old?) Flash player. In any case, it always makes sense to test thoroughly.
Also, did you look into sIFR 3 lately? It's much improved over v2.
A: I had plenty of headaches after implementing sIFR on my last website project. Most of the problems were to do with browser inconsistencies like you are describing. Text would appear in odd places, not wrap properly or just not display the way I wanted it to. I found that, as per usual, firefox was displaying nicely while I had to implement several different css hacks in order to get the same code to display properly in IE7 and IE6.
I say stick to standard browser fonts if you can, but if the project / client requires you to use it then make sure you test it thoroughly in all browsers and with various flash blockers etc.
A: Try to consider up front what kind of headache you're creating for yourself (if you are, which isn't always the case) by implementing sIFR. It's probably advisable to only use it when your site design is relatively straightforward. As soon as you start having to deal with specific browser rendering exceptions (CSS, for instance) due to a complex design, you're going to run into problems related to sIFR. And if you design sites for clients, it's tough to go back and tell them halfway through that sIFR is going to have to be removed. So try to identify issues up front.
One example we ran into was having sIFR titles, and then directly to the right of the title, say about padding-right: 20px (so, dependent on the width of the title text), some kind of icon. That led to a lot of hassle, making us wish we hadn't started using sIFR in the first place.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17939",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How to round up the result of integer division? I'm thinking in particular of how to display pagination controls, when using a language such as C# or Java.
If I have x items which I want to display in chunks of y per page, how many pages will be needed?
A: In need of an extension method:
public static int DivideUp(this int dividend, int divisor)
{
return (dividend + (divisor - 1)) / divisor;
}
No checks here (overflow, DivideByZero, etc), feel free to add if you like. By the way, for those worried about method invocation overhead, simple functions like this might be inlined by the compiler anyways, so I don't think that's where to be concerned. Cheers.
P.S. you might find it useful to be aware of this as well (it gets the remainder):
int remainder;
int result = Math.DivRem(dividend, divisor, out remainder);
A: HOW TO ROUND UP THE RESULT OF INTEGER DIVISION IN C#
I was interested to know what the best way is to do this in C# since I need to do this in a loop up to nearly 100k times. Solutions posted by others using Math are ranked high in the answers, but in testing I found them slow. Jarod Elliott proposed a better tactic in checking if mod produces anything.
int result = (int1 / int2);
if (int1 % int2 != 0) { result++; }
I ran this in a loop 1 million times and it took 8ms. Here is the code using Math:
int result = (int)Math.Ceiling((double)int1 / (double)int2);
Which ran at 14ms in my testing, considerably longer.
A: This should give you what you want. You will definitely want x items divided by y items per page; the problem is when uneven numbers come up, so if there is a partial page we also want to add one page.
int x = number_of_items;
int y = items_per_page;
// without library
int pages = x/y + (x % y > 0 ? 1 : 0);
// with library
int pages = (int)Math.Ceiling((double)x / (double)y);
A: A variant of Nick Berardi's answer that avoids a branch:
int q = records / recordsPerPage, r = records % recordsPerPage;
int pageCount = q - (-r >> (Integer.SIZE - 1));
Note: (-r >> (Integer.SIZE - 1)) consists of the sign bit of r, repeated 32 times (thanks to sign extension of the >> operator.) This evaluates to 0 if r is zero or negative, -1 if r is positive. So subtracting it from q has the effect of adding 1 if records % recordsPerPage > 0.
A: Found an elegant solution:
int pageCount = (records + recordsPerPage - 1) / recordsPerPage;
Source: Number Conversion, Roland Backhouse, 2001
A: Another alternative is to use the mod() function (or '%'). If there is a non-zero remainder then increment the integer result of the division.
A: For records == 0, rjmunro's solution gives 1. The correct solution is 0. That said, if you know that records > 0 (and I'm sure we've all assumed recordsPerPage > 0), then rjmunro solution gives correct results and does not have any of the overflow issues.
int pageCount = 0;
if (records > 0)
{
pageCount = (((records - 1) / recordsPerPage) + 1);
}
// no else required
All the integer math solutions are going to be more efficient than any of the floating point solutions.
A: Converting to floating point and back seems like a huge waste of time at the CPU level.
Ian Nelson's solution:
int pageCount = (records + recordsPerPage - 1) / recordsPerPage;
Can be simplified to:
int pageCount = (records - 1) / recordsPerPage + 1;
AFAICS, this doesn't have the overflow bug that Brandon DuRette pointed out, and because it only uses it once, you don't need to store the recordsPerPage specially if it comes from an expensive function to fetch the value from a config file or something.
I.e. this might be inefficient, if config.fetch_value used a database lookup or something:
int pageCount = (records + config.fetch_value('records per page') - 1) / config.fetch_value('records per page');
This creates a variable you don't really need, which probably has (minor) memory implications and is just too much typing:
int recordsPerPage = config.fetch_value('records per page')
int pageCount = (records + recordsPerPage - 1) / recordsPerPage;
This is all one line, and only fetches the data once:
int pageCount = (records - 1) / config.fetch_value('records per page') + 1;
A: I do the following; it handles any overflows:
var totalPages = totalResults.IsDivisble(recordsperpage) ? totalResults/(recordsperpage) : totalResults/(recordsperpage) + 1;
And use this extension in case there are 0 results:
public static bool IsDivisble(this int x, int n)
{
return (x%n) == 0;
}
Also, for the current page number (wasn't asked but could be useful):
var currentPage = (int) Math.Ceiling(recordOffset / (double) recordsperpage) + 1; // assumes recordOffset is the zero-based index of the first record shown (the original snippet divided recordsperpage by itself, which always yields 1)
A: you can use
(int)Math.Ceiling(((decimal)model.RecordCount )/ ((decimal)4));
A: The integer math solution that Ian provided is nice, but suffers from an integer overflow bug. Assuming the variables are all int, the solution could be rewritten to use long math and avoid the bug:
int pageCount = (int)((-1L + records + recordsPerPage) / recordsPerPage);
If records is a long, the bug remains. The modulus solution does not have the bug.
A: For C# the solution is to cast the values to a double (as Math.Ceiling takes a double):
int nPages = (int)Math.Ceiling((double)nItems / (double)nItemsPerPage);
In Java you should do the same with Math.ceil().
A: Alternative to remove branching in testing for zero:
int pageCount = (records + recordsPerPage - 1) / recordsPerPage * (records != 0);
Not sure if this will work in C#, should do in C/C++.
A: I made this for myself, thanks to Jarod Elliott's & SendETHToThisAddress's replies.
public static int RoundedUpDivisionBy(this int @this, int divider)
{
var result = @this / divider;
if (@this % divider is 0) return result;
return result + Math.Sign(@this * divider);
}
Then I realized it is overkill for the CPU compared to the top answer.
However, I think it's readable and works with negative numbers as well.
A: You'll want to do floating point division, and then use the ceiling function, to round up the value to the next integer.
A: I had a similar need where I needed to convert minutes to hours & minutes. What I used was:
int hrs = 0; int mins = 0;
float tm = totalmins;
if ( tm > 60 ) ( hrs = (int) (tm / 60);
mins = (int) (tm - (hrs * 60));
System.out.println("Total time in Hours & Minutes = " + hrs + ":" + mins);
A: The following should do rounding better than the above solutions, but at the expense of performance (due to floating point calculation of 0.5*rctDenominator):
uint64_t integerDivide( const uint64_t& rctNumerator, const uint64_t& rctDenominator )
{
// Ensure .5 upwards is rounded up (otherwise integer division just truncates - ie gives no remainder)
return (rctDenominator == 0) ? 0 : (rctNumerator + (int)(0.5*rctDenominator)) / rctDenominator;
}
A: A generic method, whose result you can iterate over may be of interest:
public static Object[][] chunk(Object[] src, int chunkSize) {
int overflow = src.length%chunkSize;
int numChunks = (src.length/chunkSize) + (overflow>0?1:0);
Object[][] dest = new Object[numChunks][];
for (int i=0; i<numChunks; i++) {
dest[i] = new Object[ (i<numChunks-1 || overflow==0) ? chunkSize : overflow ];
System.arraycopy(src, i*chunkSize, dest[i], 0, dest[i].length);
}
return dest;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17944",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "386"
} |
Q: C# Corrupt Memory Error I can't post the code (proprietary issues), but does anyone know what types of things would cause the following error in C#? It is being thrown by a VOIP client that I wrote (using the CounterPath API) when the call is ended by the other client. The error is:
System.AccessViolationException was unhandled
Message="Attempted to read or write protected memory. This is often an indication that other memory is corrupt."
Source="System.Windows.Forms"
StackTrace:
at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg)
at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(Int32 dwComponentID, Int32 reason, Int32 pvLoopData)
at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context)
at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context)
at System.Windows.Forms.Application.Run(Form mainForm)
at CollabAnalysisSF.Edge.GUI.Forms.Program.Main() in d:\data\beyerss\Desktop\client\GUI\ARGui\Program.cs:line 18
at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args)
at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Threading.ThreadHelper.ThreadStart()
InnerException:
UPDATE:
Turns out one of the libraries we were using was sending off an event that we didn't know about, and the problem was in there somewhere. Fixed now.
A: List of some possibilities:
*
*An object is being used after it has been disposed. This can happen a lot if you are disposing managed objects in a finalizer (you should not do that).
*An unmanaged implementation of one of the objects you are using is buggy and corrupted the process memory heap. This happens a lot with DirectX, GDI and others.
*Marshaling on the managed-unmanaged boundary is flawed. Make sure you pin a managed pointer before you use it in an unmanaged part of the code.
*You are using an unsafe block and doing funny stuff in it.
In your case it could be a problem with Windows Forms. But the problem is not that it is happening, but rather that it is not being reported correctly; you possibly still have done something wrong.
Are you able to determine what control is causing the error using the HWND? Is it always the same? Is this control doing something funny just before the application crashes? Is the unmanaged part of the control a custom window or a standard control?
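On the pinning point above, a minimal sketch (NativeFill and native.dll are hypothetical, purely to illustrate the pattern):
using System;
using System.Runtime.InteropServices;

class PinningExample
{
    [DllImport("native.dll")] // hypothetical native library
    private static extern void NativeFill(IntPtr buffer, int length);

    static void Main()
    {
        byte[] buffer = new byte[256];
        // Pin the array so the GC cannot move it while native code holds the pointer.
        GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
        try
        {
            NativeFill(handle.AddrOfPinnedObject(), buffer.Length);
        }
        finally
        {
            handle.Free(); // always unpin, or the object stays stuck on the heap
        }
    }
}
Forgetting the pin (or freeing the handle too early) is exactly the kind of mistake that shows up later as a seemingly random AccessViolationException.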
A: This kind of problem can occur if you are calling unmanaged code, e.g. a DLL. It can occur when marshalling goes horribly wrong.
Can you tell us if you are calling unmanaged code? If so, are you using default marshalling or more specific stuff? From the looks of the stack trace, are you using unsafe code, e.g. pointers and the like? This could be your problem.
A: Here is a more detailed stack trace. It looks to me like it has something to do with System.Windows.Forms.dll:
the TargetSite is listed as {IntPtr DispatchMessageW(MSG ByRef)}
and under module it has System.windows.forms.dll
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17947",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Automating WSDL.exe in a Custom Build I have a web application written in C# that consumes several internal web services. We have a development tier, a testing tier, and a production tier. Also, we use the WSDL.exe command to generate a Proxies.cs file for a given tier's web services.
When we are ready to deploy our code up the stack from development to test or test to production, we need to run the WSDL.exe command to point to the appropriate version of the web services.
Is there a generally accepted way to automate this?
A: There are a number of ways to do it. A NAnt build script will do it, but I think the most commonly accepted method now is to use MSBuild. See MSDN for details.
A: Our company uses a combination of NANT + Cruise Control + Custom Utility apps to build our products. More specifically, the task in NANT will allow you to fire off those command-line applications such as WSDL.exe
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17948",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Solution deployment, CM, InstallShield People,
We have 4 or 5 utilities that work in conjunction with our application. These utilities are .bat files, VB apps, PowerBuilder apps, etc. I am trying to manage these utils in source control, and am trying to figure out a better way to assign versions to them. Right now, the developers use the version control system's metadata -- specifically the label -- to store the version number of the tool.
My goal is to have individual InstallShield packages for each utility, and an easy means to manage and assign version numbers to these packages.
Would you recommend a separate .ini file with the info, or store the info in InstallShield .ism file itself, or just use the meta-data info from version control tool?
UPDATE:
I like the idea, Orion. I have one concern though: the script that increments the version number can't be intelligent enough to know when to bump the major number, right? E.g. if one of the utils is at version 1.2.3 and we are at a point where the new version should be 2.0.0, the script may not be able to handle this.
I think this has a lot to do with our branching techniques -- we don't have any. The folks thought that since the utils are so small, the source may not need branches.
A: PowerBuilder in particular has a nice trick you can do to incorporate the build number from an ini file into the compiled application.
Details here: http://www.pbdr.com/pbtips/ex/autorev.htm
We have an ini file inside source control that stores the build number, and its value is used in our build scripts to determine what label to apply to the source tree after a successful build. It works very nicely for our needs. When we branch, we do have to manually kick the file to increment the proper number, though.
A: I managed our build system at my last job, which seemed to have some parallels to what you're asking.
There were ~30 C++ projects which needed compiling, and various .NET/Java things, and the odd perl script.
This was all built on our build machine using NAnt - If I were doing it today I'd use rake, but the idea is the same.
We basically had an auto-incrementing build number which was stored in a version.txt file in the root of the repository.
Each time we did a build (automatically done each night, or on-demand if necessary) the script would increment this number and check the file back into source control (a sketch of this step follows the list below).
All the other apps referenced this file for their version number, or for things which didn't support working like this, the script would set environment variables or perform other workarounds
*
*I'm pretty sure that our InstallShield programs referenced an environment variable for their version number, but we deprecated them in favour of WiX as InstallShield really did suck
*in the case of Visual Studio, grep/replace the number within the .csproj files, and check them back in
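To make the increment step concrete, here is a rough sketch of the nightly bump (ours lived in a NAnt script; the commands and the check-in syntax here are illustrative):
# read, bump and rewrite the shared build number
BUILD=$(cat version.txt)
BUILD=$((BUILD + 1))
echo "$BUILD" > version.txt
# hypothetical check-in; use whatever your version control system provides
svn commit -m "Automated build $BUILD" version.txt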
Hope this gives you some ideas
A: Using the metadata from your version control system should keep things simpler. It's how your developers already use the system. There is no additional file to maintain. My personal experience has taught me to version the satellite applications with the same version as the main app. K.I.S.S.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17955",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: PowerShell App.Config Has anyone worked out how to get PowerShell to use app.config files? I have a couple of .NET DLLs I'd like to use in one of my scripts, but they expect their own config sections to be present in app.config/web.config.
A: I'm guessing that the settings would have to be in powershell.exe.config in the powershell directory, but that seems to be a bad way of doing things.
You can use ConfigurationManager.OpenMappedExeConfiguration to open a configuration file based on the executing DLL name, rather than the application exe, but this would obviously require changes to the DLLs.
A: Cross-referencing with this thread, which helped me with the same question:
Subsonic Access To App.Config Connection Strings From Referenced DLL in Powershell Script
I added the following to my script, before invoking the DLL that needs config settings, where $configpath is the location of the file I want to load:
[appdomain]::CurrentDomain.SetData("APP_CONFIG_FILE", $configpath)
Add-Type -AssemblyName System.Configuration
See this post to ensure the configuration file specified is applied to the running context.
A: Attempting a new answer to an old question.
I think the modern answer would be: don't do that. PowerShell is a shell. The normal way of passing information between parts of the shell are shell variables. For powershell that would look like:
$global:MyComponent_MySetting = '12'
# i.e.
$PSDefaultParameterValues
$ErrorActionPreference
If settings are expected to be inherited across process boundaries, the convention is to use environment variables. I extend this to settings that cross the C# / PowerShell boundary. A couple of examples:
$env:PATH
$env:PSModulePath
If you think this is an anti-pattern for .NET you might want to reconsider. This is the norm for PAAS hosted apps, and is going to be the new default for ASP.NET running on server-optimized CLR (ASP.NET v5).
See https://github.com/JabbR/JabbRv2/blob/dev/src/JabbR/Startup.cs#L21
Note: at time of writing I'm linking to .AddEnvironmentVariables()
I've revisited this question a few times, including asking it myself. I wanted to put a stake in the ground to say PowerShell stuff doesn't work well with <appSettings>. IMO it is much better to embrace the shell aspect of PS over the .NET aspect in this regard.
If you need complex configuration, take a JSON string. POSH v3+ has ConvertFrom-Json built-in. If everything in your process uses the same complex configuration, put it in a .json file and point to that file from an environment variable.
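A minimal sketch of that pattern (the variable name and path are illustrative):
$env:MYAPP_CONFIG = 'C:\config\myapp.json'
$settings = Get-Content $env:MYAPP_CONFIG -Raw | ConvertFrom-Json
$settings.ConnectionString   # properties come straight off the parsed JSON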
If a single file doesn't suffice there are well established solutions like the PATH pattern, GIT .gitignore resolution, or ASP.NET web.config resolution (which I won't repeat here).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17960",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
} |
Q: How to generate a core dump in Linux on a segmentation fault? I have a process in Linux that's getting a segmentation fault. How can I tell it to generate a core dump when it fails?
A: As explained above the real question being asked here is how to enable core dumps on a system where they are not enabled. That question is answered here.
If you've come here hoping to learn how to generate a core dump for a hung process, the answer is
gcore <pid>
if gcore is not available on your system then
kill -ABRT <pid>
Don't use kill -SEGV, as that will often invoke a signal handler, making it harder to diagnose the stuck process.
A: Ubuntu 19.04
All other answers themselves didn't help me. But the following sum up did the job
Create ~/.config/apport/settings with the following content:
[main]
unpackaged=true
(This tells apport to also write core dumps for custom apps)
check: ulimit -c. If it outputs 0, fix it with
ulimit -c unlimited
Just in case, restart apport:
sudo systemctl restart apport
Crash files are now written in /var/crash/. But you cannot use them with gdb. To use them with gdb, use
apport-unpack <location_of_report> <target_directory>
Further information:
*
*Some answers suggest changing core_pattern. Be aware that that file might get overwritten by the apport service on restart.
*Simply stopping apport did not do the job.
*The ulimit -c value might get changed automatically while you're trying other answers from the web. Be sure to check it regularly while setting up your core dump creation.
References:
*
*https://stackoverflow.com/a/47481884/6702598
A: To check where the core dumps are generated, run:
sysctl kernel.core_pattern
or:
cat /proc/sys/kernel/core_pattern
where %e is the process name and %t the system time. You can change it in /etc/sysctl.conf and reload it with sysctl -p.
If the core files are not generated (test it by: sleep 10 & and killall -SIGSEGV sleep), check the limits by: ulimit -a.
If your core file size is limited, run:
ulimit -c unlimited
to make it unlimited.
Then test again; if the core dumping is successful, you will see "(core dumped)" after the segmentation fault indication as below:
Segmentation fault: 11 (core dumped)
See also: core dumped - but core file is not in current directory?
Ubuntu
In Ubuntu the core dumps are handled by Apport and can be located in /var/crash/. However, it is disabled by default in stable releases.
For more details, please check: Where do I find the core dump in Ubuntu?.
macOS
For macOS, see: How to generate core dumps in Mac OS X?
A: By default you will get a core file. Check to see that the current directory of the process is writable, or no core file will be created.
A: It is better to turn on core dumps programmatically, using the setrlimit system call.
example:
#include <stdbool.h>
#include <sys/resource.h>

/* Lift the core-file size limit for the current process. */
bool enable_core_dump(void) {
    struct rlimit corelim;

    corelim.rlim_cur = RLIM_INFINITY;
    corelim.rlim_max = RLIM_INFINITY;

    return (0 == setrlimit(RLIMIT_CORE, &corelim));
}
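A hedged usage sketch, assuming the enable_core_dump() above is in the same file: call it early in main() so any later crash can produce a dump:
#include <stdio.h>

int main(void) {
    if (!enable_core_dump())
        fprintf(stderr, "warning: could not raise RLIMIT_CORE\n");
    /* ... the rest of the program ... */
    return 0;
}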
A: It's worth mentioning that if you have a systemd setup, then things are a little bit different. The setup typically has the core files piped, by means of the core_pattern sysctl value, through systemd-coredump(8). The core file size rlimit would typically be configured as "unlimited" already.
It is then possible to retrieve the core dumps using coredumpctl(1).
The storage of core dumps, etc. is configured by coredump.conf(5). There are examples of how to get the core files in the coredumpctl man page, but in short, it would look like this:
Find the core file:
[vps@phoenix]~$ coredumpctl list test_me | tail -1
Sun 2019-01-20 11:17:33 CET 16163 1224 1224 11 present /home/vps/test_me
Get the core file:
[vps@phoenix]~$ coredumpctl -o test_me.core dump 16163
A: What I did at the end was attach gdb to the process before it crashed, and then when it got the segfault I executed the generate-core-file command. That forced generation of a core dump.
A: This is typically sufficient:
ulimit -c unlimited
Note this will not persist between ssh sessions! To add persistence:
echo '* soft core unlimited' >> /etc/security/limits.conf
Now, if you're using Ubuntu, "apport" is probably running. Here's how to check:
sudo systemctl status apport.service
If it is, you'll probably find core dumps in one of these places:
/var/lib/apport/coredump
/var/crash
If you want to change the location of core dumps
Make sure that the directory you're sending a core dump to exists and that you have permission to create files in it!
Here's an example. Note this will not persist across reboots:
sysctl -w kernel.core_pattern=/coredumps/core-%e-%s-%u-%g-%p-%t
mkdir /coredumps
Make sure that the process that's crashing has access to write to this. The easiest way would be an example like this:
chmod 777 /coredumps
Test that core dumps works
> crash.c
gcc -Wl,--defsym=main=0 crash.c
./a.out
==output== Segmentation fault (core dumped)
If it doesn't say "core dumped" above, something isn't working.
A: This depends on what shell you are using. If you are using bash, then the ulimit command controls several settings relating to program execution, such as whether you should dump core. If you type
ulimit -c unlimited
then that will tell bash that its programs can dump cores of any size. You can specify a size such as 52M instead of unlimited if you want, but in practice this shouldn't be necessary since the size of core files will probably never be an issue for you.
In tcsh, you'd type
limit coredumpsize unlimited
A: Maybe you could do it this way. This program demonstrates how to trap a segmentation fault and shell out to a debugger (this is the original code used under AIX), and it prints the stack trace up to the point of the segmentation fault. You will need to change the sprintf variable to use gdb in the case of Linux.
#include <stdio.h>
#include <signal.h>
#include <stdlib.h>
#include <stdarg.h>
#include <unistd.h>   /* for getpid() */
static void signal_handler(int);
static void dumpstack(void);
static void cleanup(void);
void init_signals(void);
void panic(const char *, ...);
struct sigaction sigact;
char *progname;
int main(int argc, char **argv) {
char *s;
progname = *(argv);
atexit(cleanup);
init_signals();
printf("About to seg fault by assigning zero to *s\n");
*s = 0;
sigemptyset(&sigact.sa_mask);
return 0;
}
void init_signals(void) {
sigact.sa_handler = signal_handler;
sigemptyset(&sigact.sa_mask);
sigact.sa_flags = 0;
sigaction(SIGINT, &sigact, (struct sigaction *)NULL);
sigaddset(&sigact.sa_mask, SIGSEGV);
sigaction(SIGSEGV, &sigact, (struct sigaction *)NULL);
sigaddset(&sigact.sa_mask, SIGBUS);
sigaction(SIGBUS, &sigact, (struct sigaction *)NULL);
sigaddset(&sigact.sa_mask, SIGQUIT);
sigaction(SIGQUIT, &sigact, (struct sigaction *)NULL);
sigaddset(&sigact.sa_mask, SIGHUP);
sigaction(SIGHUP, &sigact, (struct sigaction *)NULL);
sigaddset(&sigact.sa_mask, SIGKILL);
sigaction(SIGKILL, &sigact, (struct sigaction *)NULL);
}
static void signal_handler(int sig) {
if (sig == SIGHUP) panic("FATAL: Program hanged up\n");
if (sig == SIGSEGV || sig == SIGBUS){
dumpstack();
panic("FATAL: %s Fault. Logged StackTrace\n", (sig == SIGSEGV) ? "Segmentation" : ((sig == SIGBUS) ? "Bus" : "Unknown"));
}
if (sig == SIGQUIT) panic("QUIT signal ended program\n");
if (sig == SIGKILL) panic("KILL signal ended program\n");
if (sig == SIGINT) ;
}
void panic(const char *fmt, ...) {
    char buf[256];
    va_list argptr;
    va_start(argptr, fmt);
    /* bounded write avoids overflowing buf */
    vsnprintf(buf, sizeof(buf), fmt, argptr);
    va_end(argptr);
    fprintf(stderr, "%s", buf);
    exit(-1);
}
static void dumpstack(void) {
/* Got this routine from http://www.whitefang.com/unix/faq_toc.html
** Section 6.5. Modified to redirect to file to prevent clutter
*/
/* This needs to be changed... */
char dbx[160];
sprintf(dbx, "echo 'where\ndetach' | dbx -a %d > %s.dump", getpid(), progname);
/* Change the dbx to gdb */
system(dbx);
return;
}
void cleanup(void) {
sigemptyset(&sigact.sa_mask);
/* Do any cleaning up chores here */
}
You may have to additionally add a parameter to get gdb to dump the core, as shown in this blog post.
A: There are more things that may influence the generation of a core dump. I encountered these:
*
*the directory for the dump must be writable. By default this is the current directory of the process, but that may be changed by setting /proc/sys/kernel/core_pattern.
*in some conditions, the kernel value in /proc/sys/fs/suid_dumpable may prevent the core from being generated; see the commands after this list.
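A quick way to inspect and, if appropriate, relax that setting from the shell (a sketch; value 1 is the kernel's debug mode, and whether that is acceptable depends on your security requirements):
cat /proc/sys/fs/suid_dumpable
sudo sysctl -w fs.suid_dumpable=1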
There are more situations which may prevent generation; they are described in the man page (try man core).
A: For Ubuntu 14.04
*
*Check core dump enabled:
ulimit -a
*One of the lines should be :
core file size (blocks, -c) unlimited
*If not :
Open ~/.bashrc (e.g. with gedit), add ulimit -c unlimited to the end of the file, save, and re-open the terminal.
*Build your application with debug information :
In the Makefile, use -O0 -g
*Run the application that creates the core dump (a core dump file named ‘core’ should be created next to the application_name file):
./application_name
*Run under gdb:
gdb application_name core
A: In order to activate the core dump do the following:
*
*In /etc/profile comment the line:
# ulimit -S -c 0 > /dev/null 2>&1
*In /etc/security/limits.conf comment out the line:
* soft core 0
*execute the cmd limit coredumpsize unlimited and check it with cmd limit:
# limit coredumpsize unlimited
# limit
cputime unlimited
filesize unlimited
datasize unlimited
stacksize 10240 kbytes
coredumpsize unlimited
memoryuse unlimited
vmemoryuse unlimited
descriptors 1024
memorylocked 32 kbytes
maxproc 528383
#
*to check if the corefile gets written, you can kill the relevant process with cmd kill -s SEGV <PID> (this should not be needed; it can be used as a check in case no core file gets written):
# kill -s SEGV <PID>
Once the corefile has been written, make sure to deactivate the coredump settings again in the relevant files (1./2./3.)!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "245"
} |
Q: Add a shortcut to Startup folder with parameters in Adobe AIR I am trying to include a link to my application in the Startup folder with a parameter passed to the program.
I think it would work if I created the shortcut locally and then added it to my source. After that I could copy it to the Startup folder on first run.
File.userDirectory.resolvePath("Start Menu\\Programs\\Startup\\startup.lnk");
However, I am trying to get this to occur during install. I see there are some settings related to the installation in app.xml, but nothing that lets me install it to two folders, or use a parameter.
<!-- The subpath of the standard default installation location to use. Optional. -->
<!-- <installFolder></installFolder> -->
<!-- The subpath of the Windows Start/Programs menu to use. Optional. -->
<!-- <programMenuFolder></programMenuFolder> -->
A: I'm new to Air, but also haven't found any way to customize the install process. It looks like you're limited to your application code. (Updating appears more flexible.)
From your example, it looks like you want your app' to run with a constant parameter each time Windows starts. So you're probably already aware you can set:
NativeApplication.nativeApplication.startAtLogin=true
when your app' first runs. Could you combine this with your parameter in a settings file in the application or user directory and accomplish what you need? A rough sketch follows.
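A minimal sketch of that idea (the file name and parameter are hypothetical; File, FileStream and FileMode come from flash.filesystem):
// On first run: register for login startup and persist the parameter.
NativeApplication.nativeApplication.startAtLogin = true;
var settings:File = File.applicationStorageDirectory.resolvePath("startupParam.txt");
var stream:FileStream = new FileStream();
stream.open(settings, FileMode.WRITE);
stream.writeUTFBytes("--my-startup-flag");
stream.close();
// On each launch: read the file back and branch on its contents.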
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17966",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: What Comes After The %? I've searched for this a little but I have not gotten a particularly straight answer. In C (and I guess C++), how do you determine what comes after the % when using printf? For example:
double radius = 1.0;
double area = 0.0;
area = calculateArea( radius );
printf( "%10.1f %10.2\n", radius, area );
I took this example straight from a book that I have on the C language. This does not make sense to me at all. Where do you come up with 10.1f and 10.2f? Could someone please explain this?
A: man 3 printf
on a Linux system will give you all the information you need. You can also find these manual pages online, for example at http://linux.die.net/man/3/printf
A: 10.1f means a floating-point number printed 10 characters wide with 1 place after the decimal point.
If the number has less than 10 digits, it's padded with spaces.
10.2f is the same, but with 2 places after the decimal point.
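A small self-contained illustration (the brackets make the padding visible):
#include <stdio.h>

int main(void) {
    printf("[%10.1f]\n", 3.14159);  /* prints [       3.1]: width 10, 1 decimal  */
    printf("[%10.2f]\n", 3.14159);  /* prints [      3.14]: width 10, 2 decimals */
    return 0;
}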
You have these basic types:
%d - integer
%x - hex integer
%s - string
%c - char (only one)
%f - floating point (float)
%d - signed int (decimal)
%i - signed int (integer) (same as decimal).
%u - unsigned int
%ld - long (signed) int
%lu - long unsigned int
%lld - long long (signed) int
%llu - long long unsigned int
Edit: there are several others listed in @Eli's response (man 3 printf).
A:
10.1f means floating point with 1 place after the decimal point and the 10 places before the decimal point. If the number has less than 10 digits, it's padded with spaces. 10.2f is the same, but with 2 places after the decimal point.
On every system I've seen, from Unix to Rails Migrations, this is not the case. @robintw expresses it best:
Basically in a simple form it's %[width].[precision][type].
That is, not "10 places before the decimal point," but "10 places, both before and after, and including the decimal point."
A: http://en.wikipedia.org/wiki/Printf#printf_format_placeholders is Wikipedia's reference for format placeholders in printf. http://www.cplusplus.com/reference/clibrary/cstdio/printf.html is also helpful
Basically in a simple form it's %[width].[precision][type]. Width allows you to make sure that the variable which is being printed is at least a certain length (useful for tables etc). Precision allows you to specify the precision a number is printed to (eg. decimal places etc) and the type informs C/C++ what the variable you've given it is (character, integer, double etc).
Hope this helps
UPDATE:
To clarify using your examples:
printf( "%10.1f %10.2\n", radius, area );
%10.1f (referring to the first argument: radius) means make it 10 characters long (ie. pad with spaces), and print it as a float with one decimal place.
%10.2 (referring to the second argument: area) means make it 10 characters long (as above) and print with two decimal places.
A: 10.1f means you want to display a float with 1 decimal and the displayed number should be 10 characters long.
A: In short, those values after the % tell printf how to interpret (or output) all of the variables coming later. In your example, radius is interpreted as a float (thus the 'f'), and the 10.1 gives information about the field width and how many decimal places to use when printing it out.
See this link for more details about all of the modifiers you can use with printf.
A: Man pages contain the information you want. To read what you have above:
printf( "%10.2f", 1.5 )
This will print:
      1.50
Whereas:
printf("%.2f", 1.5 )
Prints:
1.50
Note the justification of both.
Similarly:
printf("%10.1f", 1.5 )
Would print:
       1.5
Any number after the . is the precision you want printed. Any number before the . is the minimum total field width; the value is right-justified and padded with spaces to that width.
A: One issue that hasn't been raised by others is whether double is the same as a float. On some systems a different format specifier was needed for a double compared to a float. Not least because the parameters passed could be of different sizes.
%f - float
%lf - double
%g - double
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: AnkhSVN Cannot Connect Due to Proxy Alright, this might be a bit of a long shot, but I am having problems getting AnkhSVN to connect from Visual Studio 2005 to an external SVN server. There is a network proxy in the way, but I can't seem to find a way in AnkhSVN to configure the proxy, and it doesn't seem to detect the Internet Explorer proxy configuration. Is there any way to resolve this issue, or will it likely just not work?
A: The current version of AnkhSVN does not provide a GUI for proxy settings, but you can hand-edit the servers file (which is a simple .ini file) and it should work; a sketch of the entries appears below.
Servers file resides in: C:\Documents and Settings\YOU\Application Data\Subversion (or wherever your APP_DATA is)
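For reference, a sketch of the relevant entries; the option names are standard Subversion servers-file settings, while the host, port, and credentials are placeholders:
[global]
http-proxy-host = proxy.example.com
http-proxy-port = 8080
http-proxy-username = user
http-proxy-password = secret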
A: You can also use TortoiseSVN for editing the proxy settings.
TortoiseSVN saves the settings in the registry in the common location that all Subversion clients (by default) use.
UPDATE: A proxy settings dialog is now implemented in the AnkhSVN daily builds.
It will be available in the next release.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17984",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Easy way to backport Java 6 code to Java 5? We've been developing a Windows-based application using Java for a while now, but NOW we want to make a Mac port and have to move our code back from Java 6 to Java 5. Is there any easy way to do this rather than re-coding Java 6 specific code? We'd like to keep our code up to date.
A: There are a couple of libraries out there which can help you. Unfortunately I haven't tried them myself, because I've never run into such a situation.
*
*Retroweaver
*Retrotranslator
*backport-jsr166
A: See here:
http://en.wikipedia.org/wiki/Comparison_of_backporting_tools
I tried retrotranslator from 1.6 to 1.4.2 and it works pretty well!
A: Apple released Java 6 on the latest version of OS X. What features are you using specifically from Java6 that aren't in java5?
A: In my experience this is so easy that the whining takes more time than the doing. There are very few things in 1.6 that can't be backported with a minute or so (literally) worth of work. How many compile errors are you seeing when you try it with 1.5, and what for?
Keep in mind that there are readily available, API compatible, low-footprint backports for the few things that are useful in 1.6 (SwingWorker).
A: Do you know how much you would have to rewrite if you just went back to Java 5? If you change the JDK setting in your IDE and try to recompile, it should give you a pretty good idea of how big the changes would actually be. For most developers, Java 6 didn't really offer too much in the way of new features/APIs, but I guess it's possible your project depends heavily on something that was added.
A: There is also Java 8 for Mac OS X. Newer versions of Java are backward compatible: just as Java 8 can run Java 5 code, it can run Java 6 code too.
A: You might be able to backport the additional libraries from Java 6 to Java 5, but I imagine it would be rather more trouble than it's worth. Intel Macs with 64-bit processors (so not the original Intel Mac Mini) running Leopard have Java 6, so perhaps you could just target them?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17993",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Recommendations for a .NET component to access an email inbox I've been asked to write a Windows service in C# to periodically monitor an email inbox and insert the details of any messages received into a database table.
My instinct is to do this via POP3 and sure enough, Googling for ".NET POP3 component" produces countless (ok, 146,000) results.
Has anybody done anything similar before and can you recommend a decent component that won't break the bank (a few hundred dollars maximum)?
Would there be any benefits to using IMAP rather than POP3?
A: I use the free and open source SharpMimeTools in my application, BugTracker.NET. It has been very dependable:
http://anmar.eu.org/projects/sharpmimetools/
See the files POP3Client.cs, POP3Main.cs, and insert_bug.aspx
A: With the IMAP protocol you can access subfolders and set message status (seen/unseen); you can also use the IDLE feature for instant notifications.
Mail.dll includes POP3, IMAP, SMTP components with SSL support and powerful MIME parser:
using(Imap imap = new Imap())
{
imap.Connect("imap.server.com"); // or ConnectSSL for SSL
imap.Login("user", "password");
imap.SelectInbox();
List<long> uids = imap.Search(Flag.Unseen);
foreach (long uid in uids)
{
IMail mail = new MailBuilder()
.CreateFromEml(imap.GetMessageByUID(uid));
Console.WriteLine(mail.Subject);
}
imap.Close();
}
Please note that this is commercial product that I've created.
You can download it at https://www.limilabs.com/mail
A: I recommend Chilkat. They have pretty stable components, and you can get their email component for as cheap as $99 for a single developer. Personally, I think going with the whole package of components is a better deal, as it's only $289, and it comes with many useful components. I'm not affiliated with them in any way, although I probably sound like I am.
A: I would recommend AdvancedIntellect. There are components for POP3 and IMAP (ASPNetPOP3 and ASPNetIMAP). Good quality and very responsive support - I remember receiving replies to my questions on a weekend.
A: You may want to check our Rebex Mail component. It includes the IMAP, SMTP, and POP3 protocols and an S/MIME parser.
The POP3 protocol does not have a concept of 'unread' messages or searching for messages matching specific criteria. POP3 simply returns all messages in your inbox.
Using IMAP you can instruct the IMAP server to send you just unread messages, messages which arrived since a specified time, messages from a specific user, etc. You don't have to download it all to the client and do the filtering there.
The following code shows how to download unread messages from the IMAP server using the Rebex.Net.Imap class.
// create client, connect and log in
Imap client = new Imap();
client.Connect("imap.example.org");
client.Login("username", "password");
// select folder
client.SelectFolder("Inbox");
// get message list - envelope headers
ImapMessageCollection messages = client.Search
(
ImapSearchParameter.HasFlagsNoneOf(ImapMessageFlags.Seen)
);
// display info about each message
Console.WriteLine("UID | From | To | Subject");
foreach (ImapMessageInfo message in messages)
{
Console.WriteLine(
"{0} | {1} | {2} | {3}",
message.UniqueId,
message.From,
message.To,
message.Subject);
}
// disconnect
client.Disconnect();
Example of combining multiple search criteria follows. This will return messages from the last year larger than 100KB.
ImapMessageCollection messages = client.Search
(
ImapSearchParameter.Arrived(DateTime.Now.AddYears(-1), DateTime.Now),
ImapSearchParameter.Size(1024 * 100, Int32.MaxValue)
);
You can download the trial from rebex.net/secure-mail.net/download.aspx
A: If you use an open source POP3 implementation or something freely available then you will have access to modify the code and expand it in the direction needed. A quick Google resulted in this C# POP3 code from Code Project to retrieve messages.
There's something empowering about rolling your own, or at least extending it.
A: Lumisoft is open-source and includes IMAP and POP clients (among other stuff). I've been using them for years with no problems.
A: How about WCF? It's free.
If you have an Exchange server:
http://msdn.microsoft.com/en-us/library/bb397812.aspx
an example for pop3:
http://bartdesmet.net/blogs/bart/archive/2006/09/13/4417.aspx
A: C#Mail costs $0 but is also GNU GPL licensed, so make sure that's OK.
A: You can do this using MailBee.NET Objects: http://www.afterlogic.com/products/net-email-components
While I'd recommend to use IMAP indeed, particularly since it offers IDLE support mentioned here already, you could do the same with POP3. There's a brief description of both the approaches, and a complete sample for IMAP IDLE scenario:
http://www.afterlogic.com/wiki/Getting_notifications_about_new_messages_in_mailbox_%28IMAP_IDLE_and_polling%29
Please note that I am affiliated with AfterLogic, and I'll be pleased to assist you if you need any help, check Request Support option at our website.
A: IMAPX2 is the best. Using IMAP you can control the folders in a mail server, a thing you wouldn't be able to do using POP. IMAPX is open source code you can look into, and it is free to use.
IMAPX is straightforward and reliable.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18006",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: Is AnkhSVN any good? I asked a couple of coworkers about AnkhSVN and neither one of them was happy with it. One of them went as far as saying that AnkhSVN has messed up his devenv several times.
What's your experience with AnkhSVN? I really miss having an IDE integrated source control tool.
A: I tried version 1, and it was unreliable to say the least. I can't say anything about 2.0.
If you can afford it, the one I use, VisualSVN, is very good and uses TortoiseSVN for all its gui, except for the specialized things related to its VS integration.
A: @pilif: AnkhSVN maintains an in-memory state of the working copy, which is invalidated/updated by Visual Studio events (ie you edit/change a file) and AnkhSVN events (ie you commit/update/revert/etc)
Whenever the working copy is changed from outside Visual Studio (by editing with another tool, or by using another Subversion client), you will have to refresh AnkhSvn using the Refresh command we provide.
The other thing that happens when you delete a file in a project with TortoiseSvn, for example, is that it remains listed in the project file, and you will have to remove it there separately (and then commit the project file as well).
A: Copy/Pasting parts of my own Blogpost, as I switched from Ankh to VisualSVN:
Why did I switch? Because i was a bit unhappy with the overall stability of Ankh, since it has some problems actually tracking Solution changes. VisualSVN is “just” a TortoiseSVN Frontend, which means it leaves all the “heavy lifting” to a third-party tool that a) is installed on most Workstations anyway and b) that’s been tested and used by such a wide audience, it’s really rock-solid.
Now, AnkhSVN is certainly not a bad product, and the people behind it are serious about what they are doing, but having long-deleted files still in my SVN or getting the “Please Cleanup your solution” message gets annoying after some time. My biggest gripe, though, is the property window. It’s nice that there is a window with radio buttons asking me which property I want to add. Unfortunately, there is no way to manually enter a property.
Edit: That was for AnkhSVN 1.x. In the meantime, it was updated to 2.x and much improved. I use it in production on a system where I don't have VisualSVN and it works extremely well now.
A: I had no problems with v1, but I was warned not to use it. I've been using v2 for a while, and I've had no problems with it. I still keep a backup of the repository though...
A: Older AnkhSVN (pre 2.0) was very crappy and I was only using it for shiny icons in the solution explorer. I relied on Tortoise for everything except reverts.
The newer Ankh is a complete rewrite (it is now using the Source Control API of the IDE) and looks & works much better. Still, I haven't forced it to any heavy lifting. Icons is enough for me.
The only gripe I have with 2.0 is the fact that it slaps its footprint onto .sln files. I always revert them lest they cause problems for co-workers who do not have Ankh installed. I don't know if my fears are groundless or not.
addendum:
I have been using v2.1.7141 a bit more extensively for the last few weeks and here are the new things I have to add:
*
*No ugly crashes that plagued v1.x. Yay!
*For some reason, "Show Changes" (diff) windows are limited to only two. Meh.
*Diff windows do not allow editing/reverting yet. Boo!
*Updates, commits and browsing are MUCH faster than Tortoise. Yay!
All in all, I would not use it standalone, but once you start using it, it becomes an almost indispensable companion to Tortoise.
A: I started with AnkhSvn and then moved on to VisualSvn. I have my own gripes with VisualSvn but its far less trouble compared to Ankh. I'm yet to try the new version of Ankh which they say is a complete rewrite and had inputs from Microsoft dev team as well.
A: I always had stability issues with AnkhSVN. I couldn't switch everyone to Subversion where I work without an integrated solution.
Thank goodness for VisualSVN + TortoiseSVN.
VisualSVN isn't free, but it is cheap, and works a treat.
A: I've been using both the newest version of Ankh SVN and Tortoise on a project at home. I find them to both be very good with a caveat.
I've found that both SVN tools have at times failed to keep up with my file/folder renaming and moving resulting in it thinking that a perfectly good file needs to be deleted on the next commit. This is probably down to me misusing SVN in some way but TFS at work does not have this problem.
A: I tried AnkhSVN (1.0.3, just 4 months ago), and it did not work the way I wanted it to (i.e. needed to select things in the browser window instead of based on active file). I ended up making some macros that utilize TortoiseSVN that work much more like what I expected.
I've been very happy with using TortoiseSVN via explorer and my macros inside the IDE.
A: @mcintyre321
I've found that both SVN tools have at times failed to keep up with my file/folder renaming and moving resulting in it thinking that a perfectly good file needs to be deleted on the next commit.
A move or rename operation results in an delete and 'add with history' at subversion level.
TortoiseSvn shows this as:
originalFile deleted
newFile added (+)
A: Earlier on (like 2 years ago when I last tried), AnkhSVN and Tortoise used in parallel with the same working copy caused some kind of working copy corruption where Ankh and Tortoise somehow lost track of the state the other tool left the working copy in.
It was as if one of the tools stored additional metadata not contained in the working copy and was reliant on that being correct.
The problems showed themselves by Ankh (or Tortoise) insisting on files being there which weren't, on files being changed which weren't and on files not being changed which were (and thus unable to commit).
Maybe this has been fixed since, but I thought I'd better warn you guys.
A: About a year ago me and a buddy used AnkhSVN for a project... several commits later while moving namespaces around, it broke the SVN repository. Broke as in, the last commit we did got corrupted, and we couldn't commit anymore.
After that we used TortoiseSVN and did the namespace moving manually, it just... worked. If you're only working on base class libraries you could always try using SharpDevelop instead (that integrates with TortoiseSVN).
I do hope they did fix AnkhSVN now though because IDE integrations always rock... when they work.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18010",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
} |
Q: How do I create a self signed SSL certificate to use while testing a web app How do I create a self signed SSL certificate for an Apache Server to use while testing a web app?
A:
How do I create a self-signed SSL Certificate for testing purposes?
from http://httpd.apache.org/docs/2.0/ssl/ssl_faq.html#selfcert:
*
*Make sure OpenSSL is installed and in your PATH.
*Run the following command to create the server.key and server.crt files:
openssl req -new -x509 -nodes -out server.crt -keyout server.key
These can be used as follows in your httpd.conf file:
SSLCertificateFile /path/to/this/server.crt
SSLCertificateKeyFile /path/to/this/server.key
*It is important that you are aware that this server.key does not have any passphrase. To add a passphrase to the key, you should run the following command, and enter & verify the passphrase as requested.
openssl rsa -des3 -in server.key -out server.key.new
mv server.key.new server.key
Please back up the server.key file, and the passphrase you entered, in a secure location.
A: Various tools exist that can generate SSL certificates. Try OpenSSL, for example. Alternatively, there's one in the IIS 6 resource kit, if you're on Windows.
A:
WARNING: This is totally useless for purposes other than local testing.
Replace MYDOMAIN with your local domain. Works with localhost too.
In some folder create MYDOMAIN.conf file. Add the following content into it:
[ req ]
prompt = no
default_bits = 2048
default_keyfile = MYDOMAIN.pem
distinguished_name = subject
req_extensions = req_ext
x509_extensions = x509_ext
string_mask = utf8only
# The Subject DN can be formed using X501 or RFC 4514 (see RFC 4519 for a description).
# Its sort of a mashup. For example, RFC 4514 does not provide emailAddress.
[ subject ]
countryName = KE
stateOrProvinceName = Nairobi
localityName = Nairobi
organizationName = Localhost
# Use a friendly name here because its presented to the user. The server's DNS
# names are placed in Subject Alternate Names. Plus, DNS names here is deprecated
# by both IETF and CA/Browser Forums. If you place a DNS name here, then you
# must include the DNS name in the SAN too (otherwise, Chrome and others that
# strictly follow the CA/Browser Baseline Requirements will fail).
commonName = Localhost dev cert
emailAddress = [email protected]
# Section x509_ext is used when generating a self-signed certificate. I.e., openssl req -x509 ...
[ x509_ext ]
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
# You only need digitalSignature below. *If* you don't allow
# RSA Key transport (i.e., you use ephemeral cipher suites), then
# omit keyEncipherment because that's key transport.
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment
subjectAltName = @alternate_names
nsComment = "OpenSSL Generated Certificate"
# RFC 5280, Section 4.2.1.12 makes EKU optional
# CA/Browser Baseline Requirements, Appendix (B)(3)(G) makes me confused
# In either case, you probably only need serverAuth.
# extendedKeyUsage = serverAuth, clientAuth
# Section req_ext is used when generating a certificate signing request. I.e., openssl req ...
[ req_ext ]
subjectKeyIdentifier = hash
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment
subjectAltName = @alternate_names
nsComment = "OpenSSL Generated Certificate"
# RFC 5280, Section 4.2.1.12 makes EKU optional
# CA/Browser Baseline Requirements, Appendix (B)(3)(G) makes me confused
# In either case, you probably only need serverAuth.
# extendedKeyUsage = serverAuth, clientAuth
[ alternate_names ]
DNS.1 = MYDOMAIN
# Add these if you need them. But usually you don't want them or
# need them in production. You may need them for development.
# DNS.5 = localhost
# DNS.6 = localhost.localdomain
DNS.7 = 127.0.0.1
# IPv6 localhost
# DNS.8 = ::1
Generate the certificate files:
$ sudo openssl req -config MYDOMAIN.conf -new -x509 -sha256 -newkey rsa:2048 -nodes -keyout MYDOMAIN.key -days 1024 -out MYDOMAIN.crt
$ sudo openssl pkcs12 -export -out MYDOMAIN.pfx -inkey MYDOMAIN.key -in MYDOMAIN.crt
$ sudo chown -R $USER *
Make your local machine trust your certificate:
# Install the cert utils
$ sudo apt-get install libnss3-tools
# Trust the certificate for SSL
$ pk12util -d sql:$HOME/.pki/nssdb -i MYDOMAIN.pfx
# Trust self-signed server certificate
$ certutil -d sql:$HOME/.pki/nssdb -A -t "P,," -n 'dev cert' -i MYDOMAIN.crt
Edit /etc/apache2/sites-available/default-ssl.conf and make sure these two directives are pointing to the files .crt and .key you have just created ( un-comment it if needed ):
SSLCertificateFile /path/to/MYDOMAIN.crt
SSLCertificateKeyFile /path/to/MYDOMAIN.key
Apply configuration and re-start apache:
# If you are not using the default configuration ( /etc/apache2/sites-available/default-ssl.conf ),
# then replace "default-ssl" for whatever conf file name you've chosen
# ( DO NOT include the .conf bit ).
$ sudo a2ensite default-ssl
$ sudo service apache2 restart
Visit https://MYDOMAIN on your browser. Firefox will warn you that the certificate is self-signed and, therefore, say it is invalid. You will have to add an exception.
Source:
*
*Most of it I got from 3dw1n_m0535;
*If you run into trouble, read the README file at /usr/share/doc/apache2/README.Debian.gz
A: Use OpenSSL (http://www.openssl.org/)
Here's a tutorial: http://novosial.org/openssl/self-signed/
Here is the good tutorial to start with: SSH localhost.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18034",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
} |
Q: Why go 64 bit OS? On these questions:
*
*Which Vista edition is best for a developer machine?
*Vista or XP for Dev Machine
People are recommending 64 bit, can you explain why? Is it just so you can have more than the 3GB of addressable RAM that 32 bit gives you?
And how does Visual Studio benefit from all this extra RAM?
I went from 64 bit XP back to 32 bit due to 90% of the software I was using only being 32 bit anyway and I had issues with drivers and some software with 64 bit.
A: Vista, as far as I know, has much better 64 bit support than XP. It is more well advertised than 64 bit XP, and more popular. Driver and software support should be much better for 64-bit Vista.
The 64-bit switch is in progress right now in the computing industry. You might as well switch. Microsoft made the serious leap to 64-bit already, and many have already followed suit. Those who haven't switched, will soon, most likely.
As for the technical benefits, there aren't many aside from the higher memory limits. Vista will certainly allow you to take advantage of the 4GB+ of RAM if you have it on 64-bit though.
A: A number of reasons.
*
*Yes, you're right it is so you can have more than 3 gig of ram
*More and more systems are going to be 64 bit soon so it makes sense to develop on what you're going to be running on
*Some bugs can only be observed when running in 64 bit mode
A: "There are some gotchas in terms of p/invoke calls not always working across 32/64, as well as Managed DirectX not working well under 64-bit, but on the whole I think its something people are going to be doing more as time goes by."
This is caused, in .NET, by having the AnyCPU flag set. AnyCPU on an x64 machine will run the process as an x64 process, which proceeds to explode when attempting to call/load a 32-bit dll. Since those libraries are 32-bit, you need to set the build to x86 to ensure the app will run as an x86 process; on an x64 machine it will then run in WoW.
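A quick way to confirm at runtime how an AnyCPU build actually loaded (a minimal sketch; IntPtr.Size is 8 in a 64-bit process and 4 in a 32-bit one):
// Prints which flavour of process this AnyCPU assembly became.
bool is64BitProcess = IntPtr.Size == 8;
Console.WriteLine(is64BitProcess ? "running as x64" : "running as x86 (WoW64)");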
Signed Drivers. No more "Unknown Device Driver" blue screens, drivers that cause issues are found out, and rightly blamed for their crashes.
Signed drivers also means the drivers are current. Manufacturers that used to get away with updating a driver once every 2-3 years had to get signed/certified. Which means the driver is relatively current and had to pass basic "is this total crap" test at Microsoft.
This "lack of driver support" I've always seen as a boon. Forcing manufacturer certification.
More address space. Others have mentioned that this allows more RAM, which is true. But it has more impact on memory management performance. It also means having 4 gigs RAM and a graphics card with 512MB on it will be fully used by the system. On a 32 bit OS the system has to decide, out of the limited addresses, what hardware gets what range, physical RAM loses.
Then there is always the possibility of using more than 4 gigs RAM, good for when you have lots of VMs
x64 Vista loads core OS processes/services, during boot, into random addresses. Giving some exploits a 1/256 chance of picking the right memory location, instead of 100% on a 32 machine.
No kernel patching. None. Nada. Zilch. It does mean some Sysinternals tools do not work; however, it means xyz spyware/virus can't maliciously apply the same techniques as Sysinternals to hide forever, intercept calls, etc. (this is what keeps out some anti-virus software... as well as viruses)
A: Another technical benefit, aside from the increased address space, is that 64bit apps always use DEP, so you are forced to fix those bugs and potential security holes.
A: 64-bit won't be mainstream before most programs are available in 64-bit versions. And who makes programs? Developers, developers, developers!
See my point? If developers don't make the shift, how are 64-bit programs going to become mainstream?
Other than that, there is of cource more reasons:
*
*Signed drivers
*More memory, as you mentioned
*You get the possibility to test your programs on 64-bit (obviously)
*It's the future. =)
A: I switched from 32 bit Vista to 64 bit and haven't looked back. I have only had a problem with one device (a multi-track firewire mixing board) - but everything else that has worked for 32-bit works for 64. Throw in the ability to add piles of cheap RAM, and I don't see any reason why anyone would stick with 32 if the processor supports it.
If you're really unsure, use Vista's much improved multi-boot functionality and install 32 bit XP and 64 bit Vista on the same machine on different partitions. I did, but to tell you the truth, I haven't gone back into XP for at least 9 months now.
A: Another advantage of 64 bit:
All the registers associated with the microprocessor are 64-bit. This enables high-precision computations and allows 64-bit arithmetic to be performed in fewer clock cycles compared to 32-bit microprocessors. In certain cases, like 64-bit multiplication, it is twice as fast.
A: XP 64-bit wasn't ready for prime time; there were no drivers for it. In Windows Vista 64-bit this isn't the case. So if you are looking to install Windows Vista, go 64-bit; if you are keeping XP, stay at 32-bit.
A: Bigger is always best? The RAM thing is the major advantage, and the increased address space. I guess as long as drivers aren't an issue, then why NOT 64bit?
A: People are recommending 64 bit, can you explain why? Is it just so you can have more than the 3GB of addressable RAM that 32 bit gives you?
This addressable RAM limit is not a problem for a regular user, but it is pretty critical in DB configuration, scientific computing, etc...
And how does Visual Studio benefit from all this extra RAM?
Does it? If you want to compile faster, you can gain up to 20% compilation time by compiling directly from a ramdisk partition.
I went from 64 bit XP back to 32 bit due to 90% of the software I was using only being 32 bit anyway and I had issues with drivers and some software with 64 bit.
Switching 64 bits for a regular dev station is probably useless.
A: Vista x64 has been a very pleasant experience for me. There are a couple of edge cases, but most software and drivers work fine with it at this point. The biggest practical reason I see to use it is that you can load up on RAM (say 6GB or more) and then dedicate lots of it to virtual machines and other apps that require lots of memory (like Photoshop). If you are only using Visual Studio and maybe a couple other apps day to day, then it might not be as beneficial, but I find myself 0ften running 10 to 20 apps at a time (seriously) and the extra RAM is critical.
A: DotNet rocks had a recent show all about the benefits and pitfalls of going 64-bit from a .Net developer perspective.
http://www.dotnetrocks.com/default.aspx?showNum=341
There are the obvious benefits of having access to more RAM in windows, as well as the obvious possible downside presented by unavailable drivers (which not only have to be 64-bit, but signed and certified as well).
Other points made are in that if you ever need to test anything you are developing under 64-bit, the only way you can do that is on a 64-bit OS. You can always create VM image to test under 32-bit from a 64-bit OS.
There are some gotchas in terms of p/invoke calls not always working across 32/64, as well as Managed DirectX not working well under 64-bit, but on the whole I think its something people are going to be doing more as time goes by.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18035",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Prevent WebBrowser control from swallowing exceptions I'm using the System.Windows.Forms.WebBrowser to make a view a-la Visual Studio Start Page. However, it seems the control is catching and handling all exceptions by silently sinking them! Needless to say, this is very unfortunate behaviour.
void webBrowserNavigating(object sender, WebBrowserNavigatingEventArgs e)
{
// WebBrowser.Navigating event handler
throw new Exception("OMG!");
}
The code above will cancel navigation and swallow the exception.
void webBrowserNavigating(object sender, WebBrowserNavigatingEventArgs e)
{
// WebBrowser.Navigating event handler
try
{
e.Cancel = true;
if (actions.ContainsKey(e.Url.ToString()))
{
actions[e.Url.ToString()].Invoke(e.Url, webBrowser.Document);
}
}
catch (Exception exception)
{
MessageBox.Show(exception.ToString());
}
}
So, what I do (above) is catch all exceptions and pop a box, this is better than silently failing but still clearly far from ideal. I'd like it to redirect the exception through the normal application failure path so that it ultimately becomes unhandled, or handled by the application from the root.
Is there any way to tell the WebBrowser control to stop sinking the exceptions and just forward them the natural and expected way? Or is there some hacky way to throw an exception through native boundaries?
A: I haven't seen the browser eat exceptions, unless you mean script errors. Script errors can be enabled via the browser.ScriptErrorsSuppressed property.
If you're talking about real exceptions, not just script errors, can you show us some code that reproduces the problem? We've used the browser extensively and haven't seen what you're describing.
Edit: the code sample wasn't there when I asked for a code sample.
A: My best guess as to why it happens: there is a native-managed-native boundary to cross. The native part doesn't forward the managed exceptions correctly, and there is not much that can be done.
I am still hoping for a better answer though.
A: 11 years late to the party here, but the following solution works for me.
In webBrowserNavigating, replace MessageBox.Show(exception.ToString()); with Dispatcher.BeginInvoke(() => { throw exception; });.
As soon as the webBrowserNavigating method completes and control returns to the windows event loop, the exception is thrown and handled by the normal mechanism.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18059",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: The best way of checking for -moz-border-radius support I wanted some of those spiffy rounded corners for a web project that I'm currently working on.
I thought I'd try to accomplish it using javascript and not CSS in an effort to keep the requests for image files to a minimum (yes, I know that it's possible to combine all required rounded corner shapes into one image) and I also wanted to be able to change the background color pretty much on the fly.
I already utilize jQuery so I looked at the excellent rounded corners plugin and it worked like a charm in every browser I tried. Being a developer however I noticed the opportunity to make it a bit more efficient. The script already includes code for detecting if the current browser supports webkit rounded corners (safari based browsers). If so it uses raw CSS instead of creating layers of divs.
I thought that it would be awesome if the same kind of check could be performed to see if the browser supports the Gecko-specific -moz-border-radius-* properties and if so utilize them.
The check for webkit support looks like this:
var webkitAvailable = false;
try {
webkitAvailable = (document.defaultView.getComputedStyle(this[0], null)['-webkit-border-radius'] != undefined);
}
catch(err) {}
That, however, did not work for -moz-border-radius so I started checking for alternatives.
My fallback solution is of course to use browser detection but that's far from recommended practice ofcourse.
My best solution yet is as follows.
var mozborderAvailable = false;
try {
var o = jQuery('<div>').css('-moz-border-radius', '1px');
mozborderAvailable = $(o).css('-moz-border-radius-topleft') == '1px';
o = null;
} catch(err) {}
It's based on the theory that Gecko "expands" the composite -moz-border-radius to the four sub-properties
*
*-moz-border-radius-topleft
*-moz-border-radius-topright
*-moz-border-radius-bottomleft
*-moz-border-radius-bottomright
Is there any javascript/CSS guru out there that have a better solution?
(The feature request for this page is at http://plugins.jquery.com/node/3619)
A: Why not use -moz-border-radius and -webkit-border-radius in the stylesheet? It's valid CSS and throwing an otherwise unused attribute would hurt less than having javascript do the legwork of figuring out if it should apply it or not.
Then, in the javascript you'd just check if the browser is IE (or Opera?) - if it is, it'll ignore the proprietary tags, and your javascript could do it's thing.
Maybe I'm missing something here...
A: I know this is an older question, but it shows up high in searches for testing border-radius support so I thought I'd throw this nugget in here.
Rob Glazebrook has a little snippet that extends the support object of jQuery to do a nice quick check for border-radius support (also moz and web-kit).
jQuery(function() {
    jQuery.support.borderRadius = false;
    jQuery.each(['BorderRadius','MozBorderRadius','WebkitBorderRadius','OBorderRadius','KhtmlBorderRadius'], function() {
        if (document.body.style[this] !== undefined) jQuery.support.borderRadius = true;
        return (!jQuery.support.borderRadius);
    });
});
Attribution
That way, if there isn't support for it you can fall back and use jQuery to implement a 2-way slider so that other browsers still have a similar visual experience.
A: How about this?
var mozborderAvailable = false;
try {
if (typeof(document.body.style.MozBorderRadius) !== "undefined") {
mozborderAvailable = true;
}
} catch(err) {}
I tested it in Firefox 3 (true) and false in: Safari, IE7, and Opera.
(Edit: better undefined test)
A: Apply CSS unconditionally and check element.style.MozBorderRadius in the script?
A: As you're already using jQuery you could use jQuery.browser utility to do some browser sniffing and then target your CSS / JavaScript accordingly.
A: The problem with this is that Firefox 2 does not use anti-aliasing for the borders. The script would need to detect Firefox 3 before it uses native rounded corners, as FF3 does use anti-aliasing.
A: I've developed the following method for detecting whether the browser supports rounded borders or not. I have yet to test it on IE (am on a Linux machine), but it works correctly in Webkit and Gecko browsers (i.e. Safari/Chrome and Firefox) as well as in Opera:
function checkBorders() {
    var div = document.createElement('div');
    div.setAttribute('style', '-moz-border-radius: 8px; -webkit-border-radius: 8px; border-radius: 8px;');
    // Look through the styles the browser actually accepted.
    for (var stylenr = 0; stylenr < div.style.length; stylenr++) {
        if (/border.*?-radius/i.test(div.style[stylenr])) {
            return true;
        }
    }
    return false;
}
If you wanted to test for Firefox 2 or 3, you should check for the Gecko rendering engine, not the actual browser. I can't find the precise release date for Gecko 1.9 (which is the version that supports anti-aliased rounded corners), but the Mozilla wiki says it was released in the first quarter of 2007, so we'll assume May just to be sure.
if ( /Gecko\/\d*/.test(navigator.userAgent) && parseInt(navigator.userAgent.match(/Gecko\/\d*/)[0].split('/')[1]) > 20070501 )
All in all, the combined function is this:
function checkBorders() {
    if (/Gecko\/\d*/.test(navigator.userAgent) && parseInt(navigator.userAgent.match(/Gecko\/\d*/)[0].split('/')[1]) > 20070501) {
        return true;
    } else {
        var div = document.createElement('div');
        div.setAttribute('style', '-moz-border-radius: 8px; -webkit-border-radius: 8px; border-radius: 8px;');
        for (var stylenr = 0; stylenr < div.style.length; stylenr++) {
            if (/border.*?-radius/i.test(div.style[stylenr])) {
                return true;
            }
        }
        return false;
    }
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Best traffic / performance / usage monitoring module? Are there any open source (or I guess commercial) packages that you can plug into your site for monitoring purposes? I'd like something that we can hook up to our ASP.NET site and use to provide reporting on things like:
*
*performance over time
*current load
*page traffic
*SQL performance
*CPU time monitoring
Ideally in c# :)
With some sexy graphs.
Edit: I'd also be happy with a package that I can feed statistics and views of data to, and it would analyse trends, spot abnormal behaviour (e.g. "no one has logged in for the last hour. is this Ok?", "high traffic levels detected", "low number of API calls detected") and generally be very useful indeed. Does such a thing exist?
At my last office we had a big screen which showed us loads and loads of performance counters over a couple of time ranges, and we could spot weird stuff happening, but the data was not stored and there was no way to report on it. It's a package for doing this that I'm after.
A: It should be noted that google analytics is not an accurate representation of web site usage. This is because the web beacon (web bug) used on the page does not always load for these reasons:
*
*Google Analytics servers are called by millions of pages every second and cannot always process the requests in a timely fashion.
*Users often browse away from a page before the full page has loaded, and thus there is not enough time to load Google's web beacon to record a hit.
*Google Analytics requires JavaScript, which can be disabled.
*Quite a few (but not a substantial amount of) people block google-analytics.com from their browsers, myself included.
The physical log files are the best 'real' representation of site usage as they record every request. Alternatively there are far better 'professional' packages, of which Omniture is my favourite, which have much better response times, alternative methods for recording actions and more functionality.
A: If you're after things like server data, would RRDTool be something you're after?
It's not really a webserver type stats program though, I have no idea how it would scale.
Edit:
I've also just found Splunk Swarm, if you're interested in something that looks "cool".
A: Google Analytics is free (up to 50,000 hits per month I think) and is easy to setup with just a little javascript snippet to insert into your header or footer and has great detailed reports, with some very nice graphs.
A: Google Analytics is quick to set up and provides more sexy graphs than you can shake a stick at.
http://www.google.com/analytics/
A: Not Invented here but it's on my todo list to setup.
http://awstats.sourceforge.net/
A: @Ian
Looks like they've raised the limit. Not very surprising, it is google after all ;)
This free version is limited to 5 million pageviews a month - however, users with an active Google AdWords account are given unlimited pageview tracking.
http://www.google.com/support/googleanalytics/bin/answer.py?hl=en&answer=55543
A: http://www.serverdensity.com/
A: One option is to use external monitoring tools, which will monitor the web performance from outside the firewall by simulating end user activities.
Catchpoint Systems has an interesting approach that requires very little coding and gives you the performance stats from outside the datacenter and from inside ASP.NET (like processing time, etc.)
http://www.catchpoint.com/products.html
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18080",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Validate decimal numbers in JavaScript - IsNumeric() What's the cleanest, most effective way to validate decimal numbers in JavaScript?
Bonus points for:
*
*Clarity. Solution should be clean and simple.
*Cross-platform.
Test cases:
01. IsNumeric('-1') => true
02. IsNumeric('-1.5') => true
03. IsNumeric('0') => true
04. IsNumeric('0.42') => true
05. IsNumeric('.42') => true
06. IsNumeric('99,999') => false
07. IsNumeric('0x89f') => false
08. IsNumeric('#abcdef') => false
09. IsNumeric('1.2.3') => false
10. IsNumeric('') => false
11. IsNumeric('blah') => false
A: This way seems to work well:
function IsNumeric(input){
var RE = /^-{0,1}\d*\.{0,1}\d+$/;
return (RE.test(input));
}
In one line:
const IsNumeric = (num) => /^-{0,1}\d*\.{0,1}\d+$/.test(num);
And to test it:
const IsNumeric = (num) => /^-{0,1}\d*\.{0,1}\d+$/.test(num);
function TestIsNumeric(){
var results = ''
results += (IsNumeric('-1')?"Pass":"Fail") + ": IsNumeric('-1') => true\n";
results += (IsNumeric('-1.5')?"Pass":"Fail") + ": IsNumeric('-1.5') => true\n";
results += (IsNumeric('0')?"Pass":"Fail") + ": IsNumeric('0') => true\n";
results += (IsNumeric('0.42')?"Pass":"Fail") + ": IsNumeric('0.42') => true\n";
results += (IsNumeric('.42')?"Pass":"Fail") + ": IsNumeric('.42') => true\n";
results += (!IsNumeric('99,999')?"Pass":"Fail") + ": IsNumeric('99,999') => false\n";
results += (!IsNumeric('0x89f')?"Pass":"Fail") + ": IsNumeric('0x89f') => false\n";
results += (!IsNumeric('#abcdef')?"Pass":"Fail") + ": IsNumeric('#abcdef') => false\n";
results += (!IsNumeric('1.2.3')?"Pass":"Fail") + ": IsNumeric('1.2.3') => false\n";
results += (!IsNumeric('')?"Pass":"Fail") + ": IsNumeric('') => false\n";
results += (!IsNumeric('blah')?"Pass":"Fail") + ": IsNumeric('blah') => false\n";
return results;
}
console.log(TestIsNumeric());
.as-console-wrapper { max-height: 100% !important; top: 0; }
I borrowed that regex from http://www.codetoad.com/javascript/isnumeric.asp. Explanation:
/^ match beginning of string
-{0,1} optional negative sign
\d* optional digits
\.{0,1} optional decimal point
\d+ at least one digit
$/ match end of string
A: To me, this is the best way:
isNumber : function(v){
return typeof v === 'number' && isFinite(v);
}
A: return (input - 0) == input && input.length > 0;
didn't work for me. When I put in an alert and tested, input.length was undefined. I think there is no property to check integer length. So what I did was
var temp = '' + input;
return (input - 0) == input && temp.length > 0;
It worked fine.
A: I realize the original question did not mention jQuery, but if you do use jQuery, you can do:
$.isNumeric(val)
Simple.
https://api.jquery.com/jQuery.isNumeric/ (as of jQuery 1.7)
A: If I'm not mistaken, this should match any valid JavaScript number value, excluding constants (Infinity, NaN) and the sign operators +/- (because they are not actually part of the number as far as I'm concerned; they are separate operators):
I needed this for a tokenizer, where sending the number to JavaScript for evaluation wasn't an option... It's definitely not the shortest possible regular expression, but I believe it catches all the finer subtleties of JavaScript's number syntax.
/^(?:(?:(?:[1-9]\d*|\d)\.\d*|(?:[1-9]\d*|\d)?\.\d+|(?:[1-9]\d*|\d))(?:[e]\d+)?|0[0-7]+|0x[0-9a-f]+)$/i
Valid numbers would include:
- 0
- 00
- 01
- 10
- 0e1
- 0e01
- .0
- 0.
- .0e1
- 0.e1
- 0.e00
- 0xf
- 0Xf
Invalid numbers would be
- 00e1
- 01e1
- 00.0
- 00x0
- .
- .e0
A: Only problem I had with @CMS's answer is the exclusion of NaN and Infinity, which are useful numbers for many situations. One way to check for NaN's is to check for numeric values that don't equal themselves, NaN != NaN! So there are really 3 tests you'd like to deal with ...
function isNumber(n) {
n = parseFloat(n);
return !isNaN(n) || n != n;
}
function isFiniteNumber(n) {
n = parseFloat(n);
return !isNaN(n) && isFinite(n);
}
function isComparableNumber(n) {
n = parseFloat(n);
return (n >=0 || n < 0);
}
isFiniteNumber('NaN')
false
isFiniteNumber('0xFF')
true
isNumber('NaN')
true
isNumber(1/0-1/0)
true
isComparableNumber('NaN')
false
isComparableNumber('Infinity')
true
My isComparableNumber is pretty close to another elegant answer, but handles hex and other string representations of numbers.
A: I think parseFloat function can do all the work here. The function below passes all the tests on this page including isNumeric(Infinity) == true:
function isNumeric(n) {
return parseFloat(n) == n;
}
A: A couple of tests to add:
IsNumeric('01.05') => false
IsNumeric('1.') => false
IsNumeric('.') => false
I came up with this:
function IsNumeric(input) {
return /^-?(0|[1-9]\d*|(?=\.))(\.\d+)?$/.test(input);
}
The solution covers:
*
*An optional negative sign at the beginning
*A single zero, or one or more digits not starting with 0, or nothing so long as a period follows
*A period that is followed by one or more digits
A: I'd like to add the following:
1. IsNumeric('0x89f') => true
2. IsNumeric('075') => true
Positive hex numbers start with 0x and negative hex numbers start with -0x.
Positive oct numbers start with 0 and negative oct numbers start with -0.
This one takes most of what has already been mentioned into consideration, but includes hex and octal numbers, negative scientific, Infinity and has removed decimal scientific (4e3.2 is not valid).
function IsNumeric(input){
var RE = /^-?(0|INF|(0[1-7][0-7]*)|(0x[0-9a-fA-F]+)|((0|[1-9][0-9]*|(?=[\.,]))([\.,][0-9]+)?([eE]-?\d+)?))$/;
return (RE.test(input));
}
A: Yahoo! UI uses this:
isNumber: function(o) {
return typeof o === 'number' && isFinite(o);
}
A: The accepted answer failed your test #7 and I guess it's because you changed your mind. So this is a response to the accepted answer, with which I had issues.
During some projects, I have needed to validate some data and be as certain as possible that it is a javascript numerical value that can be used in mathematical operations.
jQuery and some other javascript libraries already include such a function, usually called isNumeric. There is also a post on stackoverflow that has been widely accepted as the answer, the same general routine that the aforementioned libraries are using.
function isNumber(n) {
return !isNaN(parseFloat(n)) && isFinite(n);
}
First, the code above would return true if the argument was an array of length 1, and that single element was of a type deemed numeric by the above logic. In my opinion, if it's an array then it's not numeric.
To alleviate this problem, I added a check to discount arrays from the logic
function isNumber(n) {
return Object.prototype.toString.call(n) !== '[object Array]' &&!isNaN(parseFloat(n)) && isFinite(n);
}
Of course, you could also use Array.isArray, jquery $.isArray or prototype Object.isArray instead of Object.prototype.toString.call(n) !== '[object Array]'
My second issue was that Negative Hexadecimal integer literal strings ("-0xA" -> -10) were not being counted as numeric. However, Positive Hexadecimal integer literal strings ("0xA" -> 10) were treated as numeric.
I needed both to be valid numeric.
I then modified the logic to take this into account.
function isNumber(n) {
return Object.prototype.toString.call(n) !== '[object Array]' &&!isNaN(parseFloat(n)) && isFinite(n.toString().replace(/^-/, ''));
}
If you are worried about the creation of the regex each time the function is called then you could rewrite it within a closure, something like this
var isNumber = (function () {
var rx = /^-/;
return function (n) {
return Object.prototype.toString.call(n) !== '[object Array]' &&!isNaN(parseFloat(n)) && isFinite(n.toString().replace(rx, ''));
};
}());
I then took CMS's +30 test cases and cloned the testing on jsfiddle and added my extra test cases and my above-described solution.
It may not replace the widely accepted/used answer but if this is more of what you are expecting as results from your isNumeric function then hopefully this will be of some help.
EDIT: As pointed out by Bergi, there are other possible objects that could be considered numeric and it would be better to whitelist than to blacklist. With this in mind, I would add to the criteria.
I want my isNumeric function to consider only Numbers or Strings
With this in mind, it would be better to use
function isNumber(n) {
return (Object.prototype.toString.call(n) === '[object Number]' || Object.prototype.toString.call(n) === '[object String]') &&!isNaN(parseFloat(n)) && isFinite(n.toString().replace(/^-/, ''));
}
Test the solutions
var testHelper = function() {
var testSuite = function() {
test("Integer Literals", function() {
ok(isNumber("-10"), "Negative integer string");
ok(isNumber("0"), "Zero string");
ok(isNumber("5"), "Positive integer string");
ok(isNumber(-16), "Negative integer number");
ok(isNumber(0), "Zero integer number");
ok(isNumber(32), "Positive integer number");
ok(isNumber("040"), "Octal integer literal string");
ok(isNumber(0144), "Octal integer literal");
ok(isNumber("-040"), "Negative Octal integer literal string");
ok(isNumber(-0144), "Negative Octal integer literal");
ok(isNumber("0xFF"), "Hexadecimal integer literal string");
ok(isNumber(0xFFF), "Hexadecimal integer literal");
ok(isNumber("-0xFF"), "Negative Hexadecimal integer literal string");
ok(isNumber(-0xFFF), "Negative Hexadecimal integer literal");
});
test("Foating-Point Literals", function() {
ok(isNumber("-1.6"), "Negative floating point string");
ok(isNumber("4.536"), "Positive floating point string");
ok(isNumber(-2.6), "Negative floating point number");
ok(isNumber(3.1415), "Positive floating point number");
ok(isNumber(8e5), "Exponential notation");
ok(isNumber("123e-2"), "Exponential notation string");
});
test("Non-Numeric values", function() {
equals(isNumber(""), false, "Empty string");
equals(isNumber(" "), false, "Whitespace characters string");
equals(isNumber("\t\t"), false, "Tab characters string");
equals(isNumber("abcdefghijklm1234567890"), false, "Alphanumeric character string");
equals(isNumber("xabcdefx"), false, "Non-numeric character string");
equals(isNumber(true), false, "Boolean true literal");
equals(isNumber(false), false, "Boolean false literal");
equals(isNumber("bcfed5.2"), false, "Number with preceding non-numeric characters");
equals(isNumber("7.2acdgs"), false, "Number with trailling non-numeric characters");
equals(isNumber(undefined), false, "Undefined value");
equals(isNumber(null), false, "Null value");
equals(isNumber(NaN), false, "NaN value");
equals(isNumber(Infinity), false, "Infinity primitive");
equals(isNumber(Number.POSITIVE_INFINITY), false, "Positive Infinity");
equals(isNumber(Number.NEGATIVE_INFINITY), false, "Negative Infinity");
equals(isNumber(new Date(2009, 1, 1)), false, "Date object");
equals(isNumber(new Object()), false, "Empty object");
equals(isNumber(function() {}), false, "Instance of a function");
equals(isNumber([]), false, "Empty Array");
equals(isNumber(["-10"]), false, "Array Negative integer string");
equals(isNumber(["0"]), false, "Array Zero string");
equals(isNumber(["5"]), false, "Array Positive integer string");
equals(isNumber([-16]), false, "Array Negative integer number");
equals(isNumber([0]), false, "Array Zero integer number");
equals(isNumber([32]), false, "Array Positive integer number");
equals(isNumber(["040"]), false, "Array Octal integer literal string");
equals(isNumber([0144]), false, "Array Octal integer literal");
equals(isNumber(["-040"]), false, "Array Negative Octal integer literal string");
equals(isNumber([-0144]), false, "Array Negative Octal integer literal");
equals(isNumber(["0xFF"]), false, "Array Hexadecimal integer literal string");
equals(isNumber([0xFFF]), false, "Array Hexadecimal integer literal");
equals(isNumber(["-0xFF"]), false, "Array Negative Hexadecimal integer literal string");
equals(isNumber([-0xFFF]), false, "Array Negative Hexadecimal integer literal");
equals(isNumber([1, 2]), false, "Array with more than 1 Positive integer number");
equals(isNumber([-1, -2]), false, "Array with more than 1 Negative integer number");
});
}
var functionsToTest = [
function(n) {
return !isNaN(parseFloat(n)) && isFinite(n);
},
function(n) {
return !isNaN(n) && !isNaN(parseFloat(n));
},
function(n) {
return !isNaN((n));
},
function(n) {
return !isNaN(parseFloat(n));
},
function(n) {
return typeof(n) != "boolean" && !isNaN(n);
},
function(n) {
return parseFloat(n) === Number(n);
},
function(n) {
return parseInt(n) === Number(n);
},
function(n) {
return !isNaN(Number(String(n)));
},
function(n) {
return !isNaN(+('' + n));
},
function(n) {
return (+n) == n;
},
function(n) {
return n && /^-?\d+(\.\d+)?$/.test(n + '');
},
function(n) {
return isFinite(Number(String(n)));
},
function(n) {
return isFinite(String(n));
},
function(n) {
return !isNaN(n) && !isNaN(parseFloat(n)) && isFinite(n);
},
function(n) {
return parseFloat(n) == n;
},
function(n) {
return (n - 0) == n && n.length > 0;
},
function(n) {
return typeof n === 'number' && isFinite(n);
},
function(n) {
return !Array.isArray(n) && !isNaN(parseFloat(n)) && isFinite(n.toString().replace(/^-/, ''));
}
];
// Examines the functionsToTest array, extracts the return statement of each function
// and fills the toTest select element.
var fillToTestSelect = function() {
for (var i = 0; i < functionsToTest.length; i++) {
var f = functionsToTest[i].toString();
var option = /[\s\S]*return ([\s\S]*);/.exec(f)[1];
$("#toTest").append('<option value="' + i + '">' + (i + 1) + '. ' + option + '</option>');
}
}
var performTest = function(functionNumber) {
reset(); // Reset previous test
$("#tests").html(""); //Clean test results
isNumber = functionsToTest[functionNumber]; // Override the isNumber global function with the one to test
testSuite(); // Run the test
// Get test results
var totalFail = 0;
var totalPass = 0;
$("b.fail").each(function() {
totalFail += Number($(this).html());
});
$("b.pass").each(function() {
totalPass += Number($(this).html());
});
$("#testresult").html(totalFail + " of " + (totalFail + totalPass) + " test failed.");
$("#banner").attr("class", "").addClass(totalFail > 0 ? "fail" : "pass");
}
return {
performTest: performTest,
fillToTestSelect: fillToTestSelect,
testSuite: testSuite
};
}();
$(document).ready(function() {
testHelper.fillToTestSelect();
testHelper.performTest(0);
$("#toTest").change(function() {
testHelper.performTest($(this).children(":selected").val());
});
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js" type="text/javascript"></script>
<script src="https://rawgit.com/Xotic750/testrunner-old/master/testrunner.js" type="text/javascript"></script>
<link href="https://rawgit.com/Xotic750/testrunner-old/master/testrunner.css" rel="stylesheet" type="text/css">
<h1>isNumber Test Cases</h1>
<h2 id="banner" class="pass"></h2>
<h2 id="userAgent">Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.95 Safari/537.11</h2>
<div id="currentFunction"></div>
<div id="selectFunction">
<label for="toTest" style="font-weight:bold; font-size:Large;">Select function to test:</label>
<select id="toTest" name="toTest">
</select>
</div>
<div id="testCode"></div>
<ol id="tests">
<li class="pass">
<strong>Integer Literals <b style="color:black;">(0, 10, 10)</b></strong>
<ol style="display: none;">
<li class="pass">Negative integer string</li>
<li class="pass">Zero string</li>
<li class="pass">Positive integer string</li>
<li class="pass">Negative integer number</li>
<li class="pass">Zero integer number</li>
<li class="pass">Positive integer number</li>
<li class="pass">Octal integer literal string</li>
<li class="pass">Octal integer literal</li>
<li class="pass">Hexadecimal integer literal string</li>
<li class="pass">Hexadecimal integer literal</li>
</ol>
</li>
<li class="pass">
<strong>Floating-Point Literals <b style="color:black;">(0, 6, 6)</b></strong>
<ol style="display: none;">
<li class="pass">Negative floating point string</li>
<li class="pass">Positive floating point string</li>
<li class="pass">Negative floating point number</li>
<li class="pass">Positive floating point number</li>
<li class="pass">Exponential notation</li>
<li class="pass">Exponential notation string</li>
</ol>
</li>
<li class="pass">
<strong>Non-Numeric values <b style="color:black;">(0, 18, 18)</b></strong>
<ol style="display: none;">
<li class="pass">Empty string: false</li>
<li class="pass">Whitespace characters string: false</li>
<li class="pass">Tab characters string: false</li>
<li class="pass">Alphanumeric character string: false</li>
<li class="pass">Non-numeric character string: false</li>
<li class="pass">Boolean true literal: false</li>
<li class="pass">Boolean false literal: false</li>
<li class="pass">Number with preceding non-numeric characters: false</li>
<li class="pass">Number with trailling non-numeric characters: false</li>
<li class="pass">Undefined value: false</li>
<li class="pass">Null value: false</li>
<li class="pass">NaN value: false</li>
<li class="pass">Infinity primitive: false</li>
<li class="pass">Positive Infinity: false</li>
<li class="pass">Negative Infinity: false</li>
<li class="pass">Date object: false</li>
<li class="pass">Empty object: false</li>
<li class="pass">Instance of a function: false</li>
</ol>
</li>
</ol>
<div id="main">
This page contains tests for a set of isNumber functions. To see them, take a look at the source.
</div>
<div>
<p class="result">Tests completed in 0 milliseconds.
<br>0 tests of 0 failed.</p>
</div>
A: function IsNumeric(num) {
return (num >=0 || num < 0);
}
This works for 0x23 type numbers as well.
A: To check if a variable contains a valid number and not
just a String which looks like a number,
Number.isFinite(value) can be used.
This is part of the language since
ES2015
Examples:
Number.isFinite(Infinity) // false
Number.isFinite(NaN) // false
Number.isFinite(-Infinity) // false
Number.isFinite(0) // true
Number.isFinite(2e64) // true
Number.isFinite('0') // false
Number.isFinite(null) // false
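If you also want to accept numeric strings while using this approach, a conversion step can be added first. This is my own extension, not part of Number.isFinite itself, and the helper name is hypothetical:
function isNumericValue(value) {
    // Only numbers and strings qualify; this also rejects null, arrays, etc.
    if (typeof value !== 'number' && typeof value !== 'string') return false;
    // Guard against '' and whitespace-only strings, which Number() coerces to 0
    if (typeof value === 'string' && value.trim() === '') return false;
    return Number.isFinite(Number(value));
}
console.log(isNumericValue('0.42')); // true
console.log(isNumericValue('blah')); // false
console.log(isNumericValue('0x89f')); // true - note: Number() accepts hex strings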
A: An integer value can be verified by:
function isNumeric(value) {
var bool = isNaN(+value);
bool = bool || (value.indexOf('.') != -1);
bool = bool || (value.indexOf(",") != -1);
return !bool;
};
This way is easier and faster! All tests are checked!
A: Here's a slightly improved version (probably the fastest way out there) that I use instead of jQuery's exact variant; I really don't know why they don't use this one:
function isNumeric(val) {
return !isNaN(+val) && isFinite(val);
}
The downside of jQuery's version is that if you pass a string with leading numerics and trailing letters, like "123abc", parseFloat/parseInt will extract the numeric part and return 123, BUT the second guard, isFinite, will fail it anyway.
With the unary + operator it will die on the very first guard, since + yields NaN for such hybrids :)
A small performance cost, yet I think a solid semantic gain.
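To illustrate the difference described above, here are a few sample evaluations (my own examples):
console.log(parseFloat('123abc'));  // 123 - parses the leading numeric part
console.log(+'123abc');             // NaN - unary plus rejects the hybrid outright
console.log(isFinite('123abc'));    // false - so jQuery's second guard catches it anyway
console.log(!isNaN(+'123abc') && isFinite('123abc')); // false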
A: Yeah, the built-in isNaN(object) will be much faster than any regex parsing, because it's built-in and compiled, instead of interpreted on the fly.
Although the results are somewhat different to what you're looking for (try it):
// IS NUMERIC
document.write(!isNaN('-1') + "<br />"); // true
document.write(!isNaN('-1.5') + "<br />"); // true
document.write(!isNaN('0') + "<br />"); // true
document.write(!isNaN('0.42') + "<br />"); // true
document.write(!isNaN('.42') + "<br />"); // true
document.write(!isNaN('99,999') + "<br />"); // false
document.write(!isNaN('0x89f') + "<br />"); // true
document.write(!isNaN('#abcdef') + "<br />"); // false
document.write(!isNaN('1.2.3') + "<br />"); // false
document.write(!isNaN('') + "<br />"); // true
document.write(!isNaN('blah') + "<br />"); // false
A: Arrrgh! Don't listen to the regular expression answers. RegEx is icky for this, and I'm not talking just performance. It's so easy to make subtle, impossible to spot mistakes with your regular expression.
If you can't use isNaN(), this should work much better:
function IsNumeric(input)
{
return (input - 0) == input && (''+input).trim().length > 0;
}
Here's how it works:
The (input - 0) expression forces JavaScript to do type coercion on your input value; it must first be interpreted as a number for the subtraction operation. If that conversion to a number fails, the expression will result in NaN. This numeric result is then compared to the original value you passed in. Since the left hand side is now numeric, type coercion is again used. Now that the input from both sides was coerced to the same type from the same original value, you would think they should always be the same (always true). However, there's a special rule that says NaN is never equal to NaN, and so a value that can't be converted to a number (and only values that cannot be converted to numbers) will result in false.
The check on the length is for a special case involving empty strings. Also note that it falls down on your 0x89f test, but that's because in many environments that's an okay way to define a number literal. If you want to catch that specific scenario you could add an additional check. Even better, if that's your reason for not using isNaN() then just wrap your own function around isNaN() that can also do the additional check.
In summary, if you want to know if a value can be converted to a number, actually try to convert it to a number.
I went back and did some research for why a whitespace string did not have the expected output, and I think I get it now: an empty string is coerced to 0 rather than NaN. Simply trimming the string before the length check will handle this case.
Running the unit tests against the new code, it only fails on the infinity and boolean literals, and the only time that should be a problem is if you're generating code (really, who would type in a literal and check if it's numeric? You should know), and that would be some strange code to generate.
But, again, the only reason ever to use this is if for some reason you have to avoid isNaN().
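A quick demonstration of the coercion rules that explanation relies on (examples are mine):
console.log(NaN == NaN);             // false - NaN is never equal to anything, even itself
console.log(('blah' - 0) == 'blah'); // false - the subtraction yields NaN
console.log(('' - 0) == '');         // true - empty string coerces to 0, hence the length check
console.log(('42' - 0) == '42');     // true - both sides coerce to the number 42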
A: This should work. Some of the functions provided here are flawed; this should also be faster than any other function here.
function isNumeric(n)
{
var n2 = n;
n = parseFloat(n);
return (n!='NaN' && n2==n);
}
Explained:
It creates a copy of the value, converts it to a float, and then compares it with the original number; if it is still a number (whether integer or float) and matches the original number, that means it is indeed a number.
It works with numeric strings as well as plain numbers. Does not work with hexadecimal numbers.
Warning: use at your own risk, no guarantees.
A: My solution,
function isNumeric(input) {
var number = /^\-{0,1}(?:[0-9]+){0,1}(?:\.[0-9]+){0,1}$/i;
var regex = RegExp(number);
return regex.test(input) && input.length>0;
}
It appears to work in every situation, but I might be wrong.
A: I'm using simpler solution:
function isNumber(num) {
return parseFloat(num).toString() == num
}
A: None of the answers return false for empty strings, a fix for that...
function is_numeric(n)
{
return (n != '' && !isNaN(parseFloat(n)) && isFinite(n));
}
A: function isNumeric(n){
return Number(n).toString() === n;
}
If n is numeric Number(n) will return the numeric value and toString() will turn it back to a string. But if n isn't numeric Number(n) will return NaN so it won't match the original n
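Note that this round trip is strict about canonical form, so it diverges from some of the question's expected results. These checks are my own illustration:
console.log(Number('.42').toString() === '.42');   // false - '.42' canonicalizes to '0.42'
console.log(Number('0.42').toString() === '0.42'); // true
console.log(Number('1.0').toString() === '1.0');   // false - the canonical form is '1'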
A: Here's a dead-simple one (tested in Chrome, Firefox, and IE):
function isNumeric(x) {
return parseFloat(x) == x;
}
Test cases from question:
console.log('trues');
console.log(isNumeric('-1'));
console.log(isNumeric('-1.5'));
console.log(isNumeric('0'));
console.log(isNumeric('0.42'));
console.log(isNumeric('.42'));
console.log('falses');
console.log(isNumeric('99,999'));
console.log(isNumeric('0x89f'));
console.log(isNumeric('#abcdef'));
console.log(isNumeric('1.2.3'));
console.log(isNumeric(''));
console.log(isNumeric('blah'));
Some more test cases:
console.log('trues');
console.log(isNumeric(0));
console.log(isNumeric(-1));
console.log(isNumeric(-500));
console.log(isNumeric(15000));
console.log(isNumeric(0.35));
console.log(isNumeric(-10.35));
console.log(isNumeric(2.534e25));
console.log(isNumeric('2.534e25'));
console.log(isNumeric('52334'));
console.log(isNumeric('-234'));
console.log(isNumeric(Infinity));
console.log(isNumeric(-Infinity));
console.log(isNumeric('Infinity'));
console.log(isNumeric('-Infinity'));
console.log('falses');
console.log(isNumeric(NaN));
console.log(isNumeric({}));
console.log(isNumeric([]));
console.log(isNumeric(''));
console.log(isNumeric('one'));
console.log(isNumeric(true));
console.log(isNumeric(false));
console.log(isNumeric());
console.log(isNumeric(undefined));
console.log(isNumeric(null));
console.log(isNumeric('-234aa'));
Note that it considers infinity a number.
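If infinity should be rejected instead, a small variation (my own tweak, not part of the answer above) adds a finiteness guard:
function isFiniteNumeric(x) {
    // Same coercion trick, plus isFinite to reject Infinity and 'Infinity'
    return parseFloat(x) == x && isFinite(x);
}
console.log(isFiniteNumeric('2.534e25')); // true
console.log(isFiniteNumeric(Infinity));   // false
console.log(isFiniteNumeric('Infinity')); // false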
A: @Joel's answer is pretty close, but it will fail in the following cases:
// Whitespace strings:
IsNumeric(' ') == true;
IsNumeric('\t\t') == true;
IsNumeric('\n\r') == true;
// Number literals:
IsNumeric(-1) == false;
IsNumeric(0) == false;
IsNumeric(1.1) == false;
IsNumeric(8e5) == false;
Some time ago I had to implement an IsNumeric function, to find out if a variable contained a numeric value, regardless of its type, it could be a String containing a numeric value (I had to consider also exponential notation, etc.), a Number object, virtually anything could be passed to that function, I couldn't make any type assumptions, taking care of type coercion (eg. +true == 1; but true shouldn't be considered as "numeric").
I think it's worth sharing this set of +30 unit tests made against numerous function implementations, and also sharing the one that passes all my tests:
function isNumeric(n) {
return !isNaN(parseFloat(n)) && isFinite(n);
}
P.S. isNaN & isFinite have a confusing behavior due to forced conversion to number. In ES6, Number.isNaN & Number.isFinite would fix these issues. Keep that in mind when using them.
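For illustration, here is the difference the P.S. refers to (examples are mine):
console.log(isNaN('blah'));         // true - 'blah' is coerced to NaN first
console.log(Number.isNaN('blah'));  // false - no coercion; 'blah' is not the NaN value
console.log(isFinite('42'));        // true - '42' is coerced to 42
console.log(Number.isFinite('42')); // false - no coercion; '42' is a string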
Update :
Here's how jQuery does it now (2.2-stable):
isNumeric: function(obj) {
var realStringObj = obj && obj.toString();
return !jQuery.isArray(obj) && (realStringObj - parseFloat(realStringObj) + 1) >= 0;
}
Update :
Angular 4.3:
export function isNumeric(value: any): boolean {
return !isNaN(value - parseFloat(value));
}
A: Use the function isNaN. I believe if you test for !isNaN(yourstringhere) it works fine for any of these situations.
A: KnockoutJS built-in library validation functions.
By extending an observable, the field gets validated:
1) number
self.number = ko.observable(numberValue).extend({ number: true});
TestCase
numberValue = '0.0' --> true
numberValue = '0' --> true
numberValue = '25' --> true
numberValue = '-1' --> true
numberValue = '-3.5' --> true
numberValue = '11.112' --> true
numberValue = '0x89f' --> false
numberValue = '' --> false
numberValue = 'sfsd' --> false
numberValue = 'dg##$' --> false
2) digit
self.number = ko.observable(numberValue).extend({ digit: true});
TestCase
numberValue = '0' --> true
numberValue = '25' --> true
numberValue = '0.0' --> false
numberValue = '-1' --> false
numberValue = '-3.5' --> false
numberValue = '11.112' --> false
numberValue = '0x89f' --> false
numberValue = '' --> false
numberValue = 'sfsd' --> false
numberValue = 'dg##$' --> false
3) min and max
self.number = ko.observable(numberValue).extend({ min: 5}).extend({ max: 10});
This field accept value between 5 and 10 only
TestCase
numberValue = '5' --> true
numberValue = '6' --> true
numberValue = '6.5' --> true
numberValue = '9' --> true
numberValue = '11' --> false
numberValue = '0' --> false
numberValue = '' --> false
A: I have run the following below and it passes all the test cases...
It makes use of the different way in which parseFloat and Number handle their inputs...
function IsNumeric(_in) {
return (parseFloat(_in) === Number(_in) && !isNaN(Number(_in)));
}
A: I realize this has been answered many times, but the following is a decent candidate which can be useful in some scenarios.
It should be noted that it assumes '.42' is NOT a number and '4.' is NOT a number, so this should be taken into account.
function isDecimal(x) {
return '' + x === '' + +x;
}
function isInteger(x) {
return '' + x === '' + parseInt(x);
}
The isDecimal passes the following test:
function testIsNumber(f) {
return f('-1') && f('-1.5') && f('0') && f('0.42')
&& !f('.42') && !f('99,999') && !f('0x89f')
&& !f('#abcdef') && !f('1.2.3') && !f('') && !f('blah');
}
The idea here is that every number or integer has one "canonical" string representation, and every non-canonical representation should be rejected. So we cast to a number and back, and see if the result is the original string.
Whether these functions are useful for you depends on the use case. One feature is that distinct strings represent distinct numbers (if both pass the isNumber() test).
This is relevant e.g. for numbers as object property names.
var obj = {};
obj['4'] = 'canonical 4';
obj['04'] = 'alias of 4';
obj[4]; // prints 'canonical 4' to the console.
A: If you need to validate a special set of decimals y
you can use this simple javascript:
http://codesheet.org/codesheet/x1kI7hAD
<input type="text" name="date" value="" pattern="[0-9]){1,2}(\.){1}([0-9]){2}" maxlength="6" placeholder="od npr.: 16.06" onchange="date(this);" />
The Javascript:
function date(inputField) {
var isValid = /^([0-9]){1,2}(\.){1}([0-9]){2}$/.test(inputField.value);
if (isValid) {
inputField.style.backgroundColor = '#bfa';
} else {
inputField.style.backgroundColor = '#fba';
}
return isValid;
}
A: isNumeric=(el)=>{return Boolean(parseFloat(el)) && isFinite(el)}
Nothing very different, but we can use the Boolean constructor.
A: I think my code is perfect ...
/**
* @param {string} s
* @return {boolean}
*/
var isNumber = function(s) {
return s.trim()!=="" && !isNaN(Number(s));
};
A: You can minimize this function in a lot of ways, and you can also implement it with a custom regex for negative values or custom chars:
$('.number').on('input',function(){
var n=$(this).val().replace(/ /g,'').replace(/\D/g,'');
if (!$.isNumeric(n))
$(this).val(n.slice(0, -1))
else
$(this).val(n)
});
A: No need to use extra lib.
const IsNumeric = (...numbers) => {
return numbers.reduce((pre, cur) => pre && !!(cur === 0 || +cur), true);
};
Test
> IsNumeric(1)
true
> IsNumeric(1,2,3)
true
> IsNumeric(1,2,3,0)
true
> IsNumeric(1,2,3,0,'')
false
> IsNumeric(1,2,3,0,'2')
true
> IsNumeric(1,2,3,0,'200')
true
> IsNumeric(1,2,3,0,'-200')
true
> IsNumeric(1,2,3,0,'-200','.32')
true
A: A simple and clean solution by leveraging the language's dynamic type checking:
function IsNumeric (string) {
if(string === ' '.repeat(string.length)){
return false
}
return string - 0 === string * 1
}
If you don't care about whitespace, you can remove that "if".
see test cases below
function IsNumeric (string) {
if(string === ' '.repeat(string.length)){
return false
}
return string - 0 === string * 1
}
console.log('-1' + ' → ' + IsNumeric('-1'))
console.log('-1.5' + ' → ' + IsNumeric('-1.5'))
console.log('0' + ' → ' + IsNumeric('0'))
console.log('0.42' + ' → ' + IsNumeric('0.42'))
console.log('.42' + ' → ' + IsNumeric('.42'))
console.log('99,999' + ' → ' + IsNumeric('99,999'))
console.log('0x89f' + ' → ' + IsNumeric('0x89f'))
console.log('#abcdef' + ' → ' + IsNumeric('#abcdef'))
console.log('1.2.3' + ' → ' + IsNumeric('1.2.3'))
console.log('' + ' → ' + IsNumeric(''))
console.log('33 ' + ' → ' + IsNumeric('33 '))
A: Since jQuery 1.7, you can use jQuery.isNumeric():
$.isNumeric('-1'); // true
$.isNumeric('-1.5'); // true
$.isNumeric('0'); // true
$.isNumeric('0.42'); // true
$.isNumeric('.42'); // true
$.isNumeric('0x89f'); // true (valid hexa number)
$.isNumeric('99,999'); // false
$.isNumeric('#abcdef'); // false
$.isNumeric('1.2.3'); // false
$.isNumeric(''); // false
$.isNumeric('blah'); // false
Just note that unlike what you said, 0x89f is a valid number (hexa)
A: It can be done without RegExp as
function IsNumeric(data){
return parseFloat(data)==data;
}
A: @CMS' answer: Your snippet failed on whitespace cases on my machine using nodejs. So I combined it with
@joel's answer to the following:
is_float = function(v) {
return !isNaN(v) && isFinite(v) &&
(typeof(v) == 'number' || v.replace(/^\s+|\s+$/g, '').length > 0);
}
I unittested it with those cases that are floats:
var t = [
0,
1.2123,
'0',
'2123.4',
-1,
'-1',
-123.423,
'-123.432',
07,
0xad,
'07',
'0xad'
];
and those cases that are no floats (including empty whitespaces and objects / arrays):
var t = [
'hallo',
[],
{},
'jklsd0',
'',
"\t",
"\n",
' '
];
Everything works as expected here. Maybe this helps.
Full source code for this can be found here.
A: The following seems to work fine for many cases:
function isNumeric(num) {
return (num > 0 || num === 0 || num === '0' || num < 0) && num !== true && isFinite(num);
}
This is built on top of this answer (which is for this answer too):
https://stackoverflow.com/a/1561597/1985601
A: function isNumber(n) {
return (n===n+''||n===n-0) && n*0==0 && /\S/.test(n);
}
Explanations:
(n===n-0||n===n+'') verifies if n is a number or a string (discards arrays, boolean, date, null, ...). You can replace (n===n-0||n===n+'') by n!==undefined && n!==null && (n.constructor===Number||n.constructor===String): significantly faster but less concise.
n*0==0 verifies if n is a finite number as isFinite(n) does. If you need to check strings that represent negative hexadecimal, just replace n*0==0 by something like n.toString().replace(/^\s*-/,'')*0==0.
It costs a little of course, so if you don't need it, don't use it.
/\S/.test(n) discards empty strings or strings, that contain only white-spaces (necessary since isFinite(n) or n*0==0 return a false positive in this case). You can reduce the number of call to .test(n) by using (n!=0||/0/.test(n)) instead of /\S/.test(n), or you can use a slightly faster but less concise test such as (n!=0||(n+'').indexOf('0')>=0): tiny improvement.
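A few sample calls showing the three guards in action (test values are my own):
function isNumber(n) {
    return (n === n + '' || n === n - 0) && n * 0 == 0 && /\S/.test(n);
}
console.log(isNumber('-1.5'));   // true
console.log(isNumber(8e5));      // true
console.log(isNumber('   '));    // false - rejected by /\S/.test(n)
console.log(isNumber([5]));      // false - rejected by the number-or-string guard
console.log(isNumber(Infinity)); // false - Infinity * 0 is NaN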
A: One can use a type-check library like https://github.com/arasatasaygin/is.js or just extract a check snippet from there (https://github.com/arasatasaygin/is.js/blob/master/is.js#L131):
is.nan = function(value) { // NaN is number :)
return value !== value;
};
// is a given value number?
is.number = function(value) {
return !is.nan(value) && Object.prototype.toString.call(value) === '[object Number]';
};
In general if you need it to validate parameter types (on entry point of function call), you can go with JSDOC-compliant contracts (https://www.npmjs.com/package/bycontract):
/**
* This is JSDOC syntax
* @param {number|string} sum
* @param {Object.<string, string>} payload
* @param {function} cb
*/
function foo( sum, payload, cb ) {
// Test if the contract is respected at entry point
byContract( arguments, [ "number|string", "Object.<string, string>", "function" ] );
}
// Test it
foo( 100, { foo: "foo" }, function(){}); // ok
foo( 100, { foo: 100 }, function(){}); // exception
A: Best way to do this is like this:
function isThisActuallyANumber(data){
return ( typeof data === "number" && !isNaN(data) );
}
A: I found a simple solution; probably not the best, but it's working fine :)
What I do is parse the string to an int and check whether the string length of the parsed result is the same as the length of the original string. Logically, if the sizes match, the string was fully parsed to an int, and that is only possible if the string is "made" only of digits.
var val=1+$(e).val()+'';
var n=parseInt(val)+'';
if(val.length == n.length )alert('Is int');
You can easily put that code in function and instead of alert use return true if int.
Remember, if the string you are checking contains a dot or a comma, the result is still false because you are parsing to an int.
Note: 1 is prepended to e.val so a leading zero won't be removed.
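Wrapped into a function as suggested, with a plain string parameter standing in for $(e).val() (a sketch of the same idea; the function name is mine):
function isIntString(str) {
    var val = 1 + str + '';          // prepend 1 so a leading zero isn't dropped
    var n = parseInt(val, 10) + '';  // parse, then stringify again
    return val.length == n.length;   // equal lengths => every character was a digit
}
console.log(isIntString('042'));  // true
console.log(isIntString('4.2'));  // false - the dot stops parseInt
console.log(isIntString('blah')); // false
// Note: as with the original, negative numbers come out false ('-5' becomes '1-5')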
A: With regex we can cover all the cases ask in the question. Here it is:
isNumeric for all integers and decimals:
const isNumeric = num => /^-?[0-9]+(?:\.[0-9]+)?$/.test(num+'');
isInteger for just integers:
const isInteger = num => /^-?[0-9]+$/.test(num+'');
A: Need to check for the null/undefined condition and remove commas (for the US number format) if typeof n === 'string'.
function isNumeric(n)
{
if(n === null || typeof n === 'undefined')
return false;
if(typeof n === 'string')
n = n.split(',').join('');
return !isNaN(parseFloat(n)) && isFinite(n);
}
https://jsfiddle.net/NickU/nyzeot03/3/
A: If your preference is to have your numeric function predicates be implicitly strict (e.g., no parsing of strings), then this should do the trick.
function isNumeric(n, parse) {
var t = typeof(n);
if (parse){
if (t !== 'number' && t !=='string') return false;
return !isNaN(parseFloat(n)) && isFinite(n);
}else{
if (t !== 'number') return false;
return !isNaN(n) && isFinite(n) && !_.isString(n);
}
}
function isInteger(n, parse) {
return isNumeric(n, parse) && n % 1 === 0;
}
function isFloat(n, parse) {
return isNumeric(n, parse) && n % 1 !== 0;
}
If you want the code to parse strings, then just pass the true in the parse parameter.
This is a modification of underscore-contrib's approach, which is implicitly loose and tries parsing strings, and even returns true for isNumeric([1]), which can be a real trap for people. My approach above will also be faster as it only calls parseFloat() when parse = true.
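Example calls showing the effect of the parse flag (my own examples; note the strict branch above relies on underscore's _.isString):
isNumeric(5);          // true - a real number
isNumeric('5');        // false - strict mode rejects strings
isNumeric('5', true);  // true - parse mode accepts numeric strings
isInteger(5.5);        // false
isFloat(5.5);          // true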
A: The following may work as well.
function isNumeric(v) {
return v.length > 0 && !isNaN(v) && v.search(/[A-Z]|[#]/ig) == -1;
};
A: Well, I'm using this one I made...
It's been working so far:
function checkNumber(value) {
if ( value % 1 == 0 )
return true;
else
return false;
}
If you spot any problem with it, tell me, please.
Since any whole number should be divisible by one with nothing left over, I figured I could just use the modulo operator; if you try it on a string that isn't a number, the result is NaN, which fails the check. So.
A: Here I've collected the "good ones" from this page and put them into a simple test pattern for you to evaluate on your own.
For newbies, the console.log is a built in function (available in all modern browsers) that lets you output results to the JavaScript console (dig around, you'll find it) rather than having to output to your HTML page.
var isNumeric = function(val){
// --------------------------
// Recommended
// --------------------------
// jQuery - works rather well
// See CMS's unit test also: http://dl.getdropbox.com/u/35146/js/tests/isNumber.html
return !isNaN(parseFloat(val)) && isFinite(val);
// Aquatic - good and fast, fails the "0x89f" test, but that test is questionable.
//return parseFloat(val)==val;
// --------------------------
// Other quirky options
// --------------------------
// Fails on "", null, newline, tab negative.
//return !isNaN(val);
// user532188 - fails on "0x89f"
//var n2 = val;
//val = parseFloat(val);
//return (val!='NaN' && n2==val);
// Rafael - fails on negative + decimal numbers, may be good for isInt()?
// return ( val % 1 == 0 ) ? true : false;
// pottedmeat - good, but fails on stringy numbers, which may be a good thing for some folks?
//return /^-?(0|[1-9]\d*|(?=\.))(\.\d+)?$/.test(val);
// Haren - passes all
// borrowed from http://www.codetoad.com/javascript/isnumeric.asp
//var RE = /^-{0,1}\d*\.{0,1}\d+$/;
//return RE.test(val);
// YUI - good for strict adherance to number type. Doesn't let stringy numbers through.
//return typeof val === 'number' && isFinite(val);
// user189277 - fails on "" and "\n"
//return ( val >=0 || val < 0);
}
var tests = [0, 1, "0", 0x0, 0x000, "0000", "0x89f", 8e5, 0x23, -0, 0.0, "1.0", 1.0, -1.5, 0.42, '075', "01", '-01', "0.", ".0", "a", "a2", true, false, "#000", '1.2.3', '#abcdef', '', "", "\n", "\t", '-', null, undefined];
for (var i=0; i<tests.length; i++){
console.log( "test " + i + ": " + tests[i] + " \t " + isNumeric(tests[i]) );
}
A: Regarding @Zoltan Lengyel's 'other locales' comment (Apr 26 at 2:14) on @CMS's answer (Dec 2 '09 at 5:36):
I would recommend testing for typeof (n) === 'string':
function isNumber(n) {
if (typeof (n) === 'string') {
n = n.replace(/,/, ".");
}
return !isNaN(parseFloat(n)) && isFinite(n);
}
This extends Zoltan's recommendation to not only be able to test "localized numbers" like isNumber('12,50') but also "pure" numbers like isNumber(2011).
A: I use this way to check that a variable is numeric:
v * 1 == v
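Be aware that loose equality lets a couple of edge cases through with this check (examples are mine):
console.log('5' * 1 == '5');       // true
console.log('' * 1 == '');         // true - both sides coerce to 0
console.log(' ' * 1 == ' ');       // true - whitespace also coerces to 0
console.log('blah' * 1 == 'blah'); // false - NaN is never equal to anything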
A: function isNumeric(n) {
var isNumber = true;
$.each(n.replace(/ /g,'').toString(), function(i, v){
if(v!=',' && v!='.' && v!='-'){
if(isNaN(v)){
isNumber = false;
return false;
}
}
});
return isNumber;
}
isNumeric(-3,4567.89); // true <br>
isNumeric(3,4567.89); // true <br>
isNumeric("-3,4567.89"); // true <br>
isNumeric(3d,4567.89); // false
A: $('.rsval').bind('keypress', function(e){
var asciiCodeOfNumbers = [46, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57];
var keynum = (!window.event) ? e.which : e.keyCode;
var splitn = this.value.split(".");
var decimal = splitn.length;
var precision = splitn[1];
if(decimal == 2 && precision.length >= 2 ) { console.log(precision , 'e'); e.preventDefault(); }
if( keynum == 46 ){
if(decimal > 2) { e.preventDefault(); }
}
if ($.inArray(keynum, asciiCodeOfNumbers) == -1)
e.preventDefault();
});
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18082",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2543"
} |
Q: Modifying Cruise Control.NET We are investigating using CruiseControl.NET both as a Continuous Integration build provider and as a way to automate the first part of our deployment process.
Has anyone modified CruiseControl.NET's dashboard to add custom logins and user roles (i.e., separating out access to forcing a build to only certain individuals on a per-project basis)?
The dashboard is a .NET App, but I believe it uses the nVelocity view engine instead of web forms, which I don't have experience with.
Can you mix nVelocity and WebForms, or do I need to spend a day learning something new =)
A: @Keith:
We are leveraging CC.NET to both run a CI build, as well as being able to use the Force Build feature to do a Build + Deploy. That is why we want hands off the dashboard.
I found this morning that I was able to place CCNET in a virtual directory within another web app. This allowed me to set up Forms Authentication and let the root app manage that. Problem solved.
A: Why do you need to? Do you really need to limit users in this way on an integration server? I think that's why CC.Net doesn't have that sort of support built in.
You can always see who forced a build, and control it that way.
I find that continuous integration works best with regular builds and regular unit test runs (our rather large C# app + test run takes 25 mins and checks hourly), so for me forcing a build is rarely an issue.
If you want some users to have some kind of report-only access you could limit them so that they can't access the CC.Net web application at all.
All the results (MSBuild, NCover, NUnit, FxCop, etc) are in XML, so you can build relatively simple report pages out of XSLT.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18093",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: In C#, do you need to call the base constructor? In C#, if I have an inherited class with a default constructor, do I have to explicitly call the base class' constructor or will it be implicitly called?
class BaseClass
{
public BaseClass()
{
// ... some code
}
}
class MyClass : BaseClass
{
public MyClass() // Do I need to put ": base()" here or is it implied?
{
// ... some code
}
}
A: It's implied for base parameterless constructors, but it is needed for defaults in the current class:
public class BaseClass {
protected string X;
public BaseClass() {
this.X = "Foo";
}
}
public class MyClass : BaseClass
{
public MyClass()
// no ref to base needed
{
// initialise stuff
this.X = "bar";
}
public MyClass(int param1, string param2)
:this() // This is needed to hit the parameterless ..ctor
{
// this.X will be "bar"
}
public MyClass(string param1, int param2)
// :base() // can be implied
{
// this.X will be "foo"
}
}
A: It is implied.
A: You do not need to explicitly call the base constructor, it will be implicitly called.
Extend your example a little and create a Console Application and you can verify this behaviour for yourself:
using System;
namespace ConsoleApplication1
{
class Program
{
static void Main(string[] args)
{
MyClass foo = new MyClass();
Console.ReadLine();
}
}
class BaseClass
{
public BaseClass()
{
Console.WriteLine("BaseClass constructor called.");
}
}
class MyClass : BaseClass
{
public MyClass()
{
Console.WriteLine("MyClass constructor called.");
}
}
}
A: A derived class is built upon the base class. If you think about it, the base object has to be instantiated in memory before the derived class can be appended to it. So the base object will be created on the way to creating the derived object. So no, you do not need to call the base constructor yourself.
A: It is implied, provided the base constructor is parameterless. If the base class only has constructors that take values, you must call one explicitly; see the code below for an example:
public class SuperClassEmptyCtor
{
public SuperClassEmptyCtor()
{
// Default Ctor
}
}
public class SubClassA : SuperClassEmptyCtor
{
// No Ctor's this is fine since we have
// a default (empty ctor in the base)
}
public class SuperClassCtor
{
public SuperClassCtor(string value)
{
// Default Ctor
}
}
public class SubClassB : SuperClassCtor
{
// This fails because we need to satisfy
// the ctor for the base class.
}
public class SubClassC : SuperClassCtor
{
public SubClassC(string value) : base(value)
{
// make it easy and pipe the params
// straight to the base!
}
}
A: AFAIK, you only need to call the base constructor if you need to pass down any values to it.
A: You don't need to call the base constructor explicitly; it will be implicitly called. But sometimes you need to pass parameters to the constructor, in which case you can do something like:
using System;
namespace StackOverflow.Examples
{
class Program
{
static void Main(string[] args)
{
NewClass foo = new NewClass("parameter1","parameter2");
Console.WriteLine(foo.GetUpperParameter());
Console.ReadKey();
}
}
interface IClass
{
string GetUpperParameter();
}
class BaseClass : IClass
{
private string parameter;
public BaseClass (string someParameter)
{
this.parameter = someParameter;
}
public string GetUpperParameter()
{
return this.parameter.ToUpper();
}
}
class NewClass : IClass
{
private BaseClass internalClass;
private string newParameter;
public NewClass (string someParameter, string newParameter)
{
this.internalClass = new BaseClass(someParameter);
this.newParameter = newParameter;
}
public string GetUpperParameter()
{
return this.internalClass.GetUpperParameter() + this.newParameter.ToUpper();
}
}
}
Note: If someone knows a better solution, please tell me.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18097",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "55"
} |
Q: How to keep track of the references to an object? In a world where manual memory allocation and pointers still rule (Borland Delphi) I need a general solution for what I think is a general problem:
At a given moment an object can be referenced from multiple places (lists, other objects, ...). Is there a good way to keep track of all these references so that I can update them when the object is destroyed?
A: If you want to notify others of changes you should implement the "Observer Pattern". Delphi has already done that for you for TComponent descendants. You can call the TComponent.FreeNotification method and have your object be notified when the other component gets destroyed. It does that by calling the Notification method. You can remove yourself from the notification list by calling TComponent.RemoveFreeNotification. Also see this page.
Most Garbage Collectors do not let you get a list of references, so they won't help in this case. Delphi can do reference counting if you would use interfaces, but then again you need to keep track of the references yourself.
A: I can't quite figure out why you'd want to do this. Surely you would just check that a reference is not Nil before using it?
Anyway, two possible solutions I would consider are:
*
*Have objects manage their own reference counts.
*Create a reference counting manager class.
I would probably add AddRef() and ReleaseRef() functions to either the manager or the reference-aware class. You can then use these to check how many references exist at any point. COM does it this way.
The reference-aware class would manage only its own reference count. The manager could use a Map to associate pointers with an integer for counting.
A: Are you trying to keep track of who's referencing an object so you can clear those references when the object is destroyed, or are you trying to keep track of when it's safe to destroy the object?
If the latter then it sounds like you're looking for a garbage collector. I've never dealt with Delphi so I don't know if there are GCs for it you can use, but I'd be surprised if there weren't.
If the former then a GC probably wouldn't help. If Delphi supports OOP/inheritence (I honestly don't know if it does) you could do something like this (pseudocode):
// Anything that will use one of your tracked objects implements this interface
interface ITrackedObjectUser {
public void objectDestroyed(TrackedObject o);
}
// All objects you want to track extends this class
class TrackedObject {
private List<ITrackedObjectUser> users;
public void registerRef(ITrackedObjectUser u) {
users.add(u);
}
public void destroy() {
foreach(ITrackedObjectUser u in users) {
u.objectDestroyed(this);
}
}
}
Basically, whenever you add one of your tracked objects to a collection that collection would register itself with that object. When the object is being destroyed (I figure you'd call destroy() in the object's destructor) then the object signals the collection that it's being destroyed so the collection can do whatever it needs to.
Unfortunately, this isn't really a good solution if you want to use built-in collections. You'd have to write your own collection objects (they could just wrap built-in ones though). And it would require you to make sure you're registering everywhere you want to track the object. It's not what I would consider a "happy" solution, though for small projects it probably wouldn't be too bad. I'm mainly hoping this idea will help spawn other ideas. :)
A: Is there a specific reason you want this? Are you running into problems with rogue pointers, or are you thinking it might be a problem one day?
IMHO it will not be a problem if you design your application right, and using the appropriate patterns really helps you.
Some info about patters:
http://delphi.about.com/od/oopindelphi/a/aa010201a.htm
http://www.obsof.com/delphi_tips/pattern.html
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18119",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: ADO.NET Entity Framework vs NHibernate So the ADO.NET Entity Framework has gotten a little bit of bad press (in the form of blog entries and a petition) but I don't want to rush to judgement. I'm limited in time for experimentation but I was wondering has anyone worked with it yet with more empirical feedback?
Finally, what are thoughts on using NHibernate which has been around for a long time and may be more mature than the ADO.NET Entity Framework.
A: It has been 2 years since the original post. From what I understand, ADO.NET Entity Framework has matured with .NET 4. Does anyone have any new feedback on this topic?
Here's a link to the improvements added to EF since first release in 2008
http://blogs.msdn.com/b/adonet/archive/2009/05/11/update-on-the-entity-framework-in-net-4-and-visual-studio-2010.aspx
Update: I found this thread on stack overflow that does a nice job of discussing the updated EF:
Entity Framework 4 vs NHibernate
A: Microsoft have all but admitted that the ADO.Net Entity Framework isn't an ORM (I can't find a reference currently). So if you think of the Entity Framework as a query engine then apparently it is really good at what it does. For a complete ORM solution you might want to look elsewhere however.
The following blog post seems to bear out this difference:
http://blogs.msdn.com/dsimmons/archive/2008/05/17/why-use-the-entity-framework.aspx
A: I've used SubSonic, LinqToSql, and LinqToEntities. Now I'm trying NHibernate. For now, I like NHibernate (probably because I haven't run into enough problems yet). The worst of them all - LinqToEntities (that's only my opinion, of course).
A: NHibernate may be more mature. That does not necessarily mean it is a "better" solution. Having used it at my job for some time, I would personally prefer to use almost anything than NHibernate (even straight SQL, if migration were remotely feasible). The number of error messages thrown by NHibernate that don't mean anything (or that do mean something but should never occur) is absolutely staggering, as are some of its default behaviours (such as flushing the session once for each object returned in a Find).
Personally, when I have a choice, I use LINQ to SQL for all database work.
A: If zero configuration is main advantage of SubSonic you can look at Fluent nHibernate or Entity Framework Code-First
UPDATE from Răzvan Panda: Fluent NHibernate has been mostly made obsolete by inclusion in NHibernate itself of mapping by code, see: notherdev.blogspot.ie/2012/02/
A: I do have a problem with SubSonic. SubSonic choked to death on tables with the same name but different schemas. I don't want to discuss the best practices of building a database, because I did not make the call to do so. (When it comes to raising my voice to make a point versus keeping my job, I'd rather keep my job. :))
A: I am coming around to liking Entity. It takes a while to figure out what all of its error messages mean but once you get used to it it really does a great job. The biggest drawback it has right now is no real support for going disconnected.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18132",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34"
} |
Q: cURL adding whitespace to post content? I am attempting to POST against a vendor's server using PHP 5.2 with cURL. I'm reading in an XML document to post against their server and then reading in a response:
$request = trim(file_get_contents('test.xml'));
$curlHandle = curl_init($servletURL);
curl_setopt($curlHandle, CURLOPT_POST, TRUE);
curl_setopt($curlHandle, CURLOPT_POSTFIELDS, array('XML'=>$request));
curl_setopt($curlHandle, CURLOPT_RETURNTRANSFER, TRUE);
curl_setopt($curlHandle, CURLOPT_HEADER, FALSE); # Have also tried leaving this out
$response = curl_exec($curlHandle);
That code, in an of itself, works OK, but the other server returns a response from it's XML parser stating:
Content not allowed in prolog
I looked that error up and this is normally caused by whitespace before the XML, but I made sure that the XML file itself has no whitespace and the trim() should clear that up anyway. I did a TCPDump on the connection while I ran the code and this is what is sent out:
POST {serverURL} HTTP/1.1
Host: {ip of server}:8080
Accept: */*
Content-Length: 921
Expect: 100-continue
Content-Type: multipart/form-data; boundary=---------------------------01e7cda3896f
---------------------------01e7cda3896f
Content-Disposition: form-data; name="XML"
[SNIP - the XML was displayed]
---------------------------01e7cda3896f--
Before and after the [SNIP] line there is visible whitespace when I replay the session in Ethereal. Is this what is causing the problem and, if so, how can I remove it, or am I looking too far and this may be an issue with the server I'm posting against?
A: Not an answer, but I find the whole fopen/fread/fclose thing very dull to peruse when looking at code.
You can replace:
$file = 'test.xml';
$fileHandle = fopen($file, 'r');
$request = fread($fileHandle, filesize($file));
fclose($fileHandle);
$request = trim($request);
With:
$request = trim(file_get_contents('test.xml'));
But anyway - to your question; if those are the headers that are being sent, then it shouldn't be a problem with the remote server. Try changing the contents of your xml file and using var_dump() to check the exact output (including the string length, so you can look for missing things)
Hope that helps
A: It turns out it's an encoding issue. The app apparently needs the XML in www-form-urlencoded instead of form-data so I had to change:
# This sets the encoding to multipart/form-data
curl_setopt($curlHandle, CURLOPT_POSTFIELDS, array('XML'=>$request));
to
# This sets it to application/x-www-form-urlencoded
curl_setopt($curlHandle, CURLOPT_POSTFIELDS, 'XML=' . urlencode($request));
A: I did a wc -m test.xml and came back with 743 characters in the XML file and the var_dump on $request comes back with 742 characters so something is getting stripped with trim() (I assume).
I did a:
print "=====" . $request . "=====";
and the start and end of the XML butts right up against the ===== with no white space.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18166",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: What's the ASP.NET Connection String Format for a Linked Server? I've got a database server that I am unable to connect to using the credentials I've been provided. However, on the staging version of the same server, there's a linked server that points to the production database. Both the staging server and the linked server have the same schema.
I've been reassured that I should expect to be able to connect to the live server before we go live. Unfortunately, I've reached a point in my development where I need more than the token sample records that are currently in the staging database. So, I was hoping to connect to the linked server.
Thus far in my development against this schema has been against the staging server itself, using Subsonic objects. That all works fine.
I can connect via SQL Server Management Studio to that linked server and execute my queries directly. I can also execute 'manual" queries in C# against the linked server by having my connection string hook up to the staging server and running my queries as
SELECT * FROM OpenQuery([LINKEDSERVER],'QUERY')
However, the Subsonic objects are what's enabling me to bring this project in on time and under budget, so I'm not looking to do straight queries in my code.
What I'm looking for is whether there's a way to state the connection string to the linked server. I've looked at lots of forum entries, etc. on the topic and most of the answers seem to completely gloss over the "linked server" portion of the question, focusing on basic connection string syntax.
A: I don't believe that you can access a linked server directly from an application without the OpenQuery syntax. Depending on the complexity of your schema, it might make sense to write a routine or sproc to populate your staging database with data from your live database.
You might also consider looking at Redgates SQL Data Generator or any other data gen tool. Redgates is pretty easy to use.
One other idea - can you get a backup of the live database that you can install in development to do your testing? If its just data for development and testing that you seek, you probably want to stay away from connecting to your production database at all.
A: Create testing stored procedures on server B that reference the data on server A via the linked server. e.g. if your regular sproc references a table on Server B say:
databaseA.dbo.tableName
then use the linked servername to reference the same database/table on server A:
linkedServerName.databaseA.dbo.tableName
If server A is identical in its database/table/column names than you will be able to do this by some quick find/replace work.
A: Creating a linked server from .NET doesn't make any sense, since a linked server is nothing but a connection from one SQL Server to another server (SQL, file, Excel, Sybase, etc.); in essence it is just a connection string (you can impersonate and do some other stuff when creating a linked server).
A: One way is to create two connection strings and access the appropriate database when required.
A second option is to create a connection for Database A only and create a linked server for Database B in that database.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18167",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Copying Files over an Intermittent Network Connection I am looking for a robust way to copy files over a Windows network share that is tolerant of intermittent connectivity. The application is often used on wireless, mobile workstations in large hospitals, and I'm assuming connectivity can be lost either momentarily or for several minutes at a time. The files involved are typically about 200KB - 500KB in size. The application is written in VB6 (ugh), but we frequently end up using Windows DLL calls.
Thanks!
A: Try using BITS (Background Intelligent Transfer Service). It's the infrastructure that Windows Update uses, is accessible via the Win32 API, and is built specifically to address this.
It's usually used for application updates, but should work well in any file moving situation.
http://www.codeproject.com/KB/IP/bitsman.aspx
A: I'm unclear as to what your actual problem is, so I'll throw out a few thoughts.
*
*Do you want restartable copies (with such small file sizes, that doesn't seem like it'd be that big of a deal)? If so, look at CopyFileEx with COPY_FILE_RESTARTABLE
*Do you want verifiable copies? Sounds like you already have that by verifying hashes.
*Do you want better performance? It's going to be tough, as it sounds like you can't run anything on the server. Otherwise, TransmitFile may help.
*Do you just want a fire and forget operation? I suppose shelling out to robocopy, or TeraCopy or something would work - but it seems a bit hacky to me.
*Do you want to know when the network comes back? IsNetworkAlive has your answer.
Based on what I know so far, I think the following pseudo-code would be my approach:
sourceFile = Compress("*.*");
destFile = "X:\files.zip";
int copyFlags = COPY_FILE_FAIL_IF_EXISTS | COPY_FILE_RESTARTABLE;
while (CopyFileEx(sourceFile, destFile, null, null, false, copyFlags) == 0) {
    do {
        // optionally, increment a failed counter to break out at some point
        Sleep(1000);
    } while (!IsNetworkAlive(NETWORK_ALIVE_LAN));
}
Compressing the files first saves you the tracking of which files you've successfully copied, and which you need to restart. It should also make the copy go faster (smaller total file size, and larger single file size), at the expense of some CPU power on both sides. A simple batch file can decompress it on the server side.
A: I agree with Robocopy as a solution... that's why the utility is called "Robust File Copy".
I've used Robocopy for this with excellent results. By default, it will retry every 30 seconds until the file gets across.
And by default, a million retries. That should be plenty for your intermittent connection.
It also does restartable transfers, and you can even throttle transfers with a gap between packets (the /IPG switch), assuming you don't want to use all the bandwidth while other programs are using the same connection.
A: How about simply sending a hash after or before you send the file, and comparing that with the file you received? That should at least make sure you have a correct file.
If you want to go all out you could do the same process, but for small parts of the file. Then when you have all pieces, join them on the receiving end.
A: You could use Microsoft SyncToy (free).
http://www.microsoft.com/Downloads/details.aspx?familyid=C26EFA36-98E0-4EE9-A7C5-98D0592D8C52&displaylang=en
A: Hm, it seems rsync does it, and does not need the server/daemon/install I thought it did - just $ rsync src dst.
A: SMS (Systems Management Server), if it's available, works.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18172",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: Best way to learn SQL Server So I'm getting a new job working with databases (Microsoft SQL Server to be precise). I know nothing about SQL much less SQL Server. They said they'd train me, but I want to take some initiative to learn about it on my own to be ahead. Where's the best place to start (tutorials, books, etc)? I want to learn more about the SQL language moreso than any of the fancy point and click stuff.
A: If you're planning on coding against a SQL database using .NET, skip ADO and go directly to LINQ. You will NOT miss anything.
Oh, also, Joe Celko. If you see his name on an article or a book about SQL, read it.
A: This can be broad but here are some responsibilities that could get thrown at you in a brain dump format.
On the DBA end
*
*Backups
*Indexes
*Triggers
*Security per table/database, creating users, etc.
*ODBC in your Windows Control Panel
*know your normal forms
*the difference between a data warehouse (for reporting)
*and a transactional database for most everything else (especially reporting in most environments)
On the Programming end
*
*Reporting (Run for the hills)
*Stored procedures
*Star and snowflake schemas
*ADO, ODBC
*CRUD apps (Create Read Update Delete)
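To make that last item concrete, the four CRUD operations boil down to four SQL statements. Here's a minimal sketch against a made-up table:

CREATE TABLE Person (ID int PRIMARY KEY, Name varchar(50));

INSERT INTO Person (ID, Name) VALUES (1, 'Alice'); -- Create
SELECT ID, Name FROM Person WHERE ID = 1;          -- Read
UPDATE Person SET Name = 'Bob' WHERE ID = 1;       -- Update
DELETE FROM Person WHERE ID = 1;                   -- Delete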
A: SQL:
http://www.google.com/search?hl=en&q=introduction+to+sql ->
http://www.w3schools.com/SQL/sql_intro.asp
MSSQL:
http://www.google.com/search?hl=en&safe=off&q=introduction+to+ms+sql -> http://www.intermedia.net/support/SQL/sqltut.asp
The best way to learn? Write a lot of queries and read up on the Entity-relationship model
A: SQL Server Books Online would be a good place for reference.
A: SQL Server Central is a very good resource of information on MS SQL
A: I always use the SQL Server 7.0 documentation available on ddart.net.
A: Yikes...first I'd say "Best of luck to ya!"
Then secondly if you are really serious that you have no experiences with SQL I'd say find one of the SAMS "Teach Yourself SQL in 34 nanoseconds" books. Normally I'd never recommend a SAMS book, but if you are the stalwart type to accept a job you know nothing about then...what the heck.
A: One great way to learn how to lay out your database tables and columns is to use the EDMX designer in Visual Studio 2010. You can create the entities you want, define associations between them, define inheritance relationships, and then let it figure out what tables you need and how to model the relationships between those tables. Take a look at the SQL tables it creates for you and the Foreign Key (FK) relationships.
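If you want to see the kind of T-SQL such a tool emits for an association, a hand-written equivalent of a simple one-to-many relationship looks roughly like this (the table names are invented for illustration):

CREATE TABLE Author (
    ID int PRIMARY KEY,
    Name varchar(50) NOT NULL
);

CREATE TABLE Book (
    ID int PRIMARY KEY,
    AuthorID int NOT NULL,
    Title varchar(100) NOT NULL,
    CONSTRAINT FK_Book_Author FOREIGN KEY (AuthorID) REFERENCES Author (ID)
);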
A: http://sqlzoo.net is a great, interactive place to start.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18194",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How do you test the usability of your user interfaces How do you test the usability of the user interfaces of your applications - be they web or desktop? Do you just throw it all together and then tweak it based on user experience once the application is live? Or do you pass it to a specific usability team for testing prior to release?
We are a small software house, but I am interested in the best practices of how to measure usability.
Any help appreciated.
A: I like Paul Buchheit's answer on this from Startup School. The short version of what he said: listen to your users. Listening does not mean obeying your users. Take in the data, filter out all the bad advice, and iteratively clean up the site. Lather, rinse, repeat.
If you are a small shop you probably don't have a team of QA or Usability people or whatever to go through the site. Your users are going to be the ones that actually use the site though. Their feedback can be invaluable.
If something is too hard for one of your users to use or too complex to understand why they should use it, then it might be the same way for 1000 other users. Find a simpler way of accomplishing the same thing.
Once you have gathered all of this feedback and have a list of things to do, do the simplest ones first. That way you have forward moving usability progress.
A: What I like to do is give someone an install package, ask them to perform a number of tasks related to how the application works, and watch.
Hardest part is to keep your mouth shut.
A: Some of the best advice on usability testing is available on Jakob Nielsen's Website http://www.useit.com. He advocates what Will mentioned - ask users to perform various tasks on your website or web application and then sit back to see what they do.
Do not interrupt the users by asking questions or guiding them. Just observe them and document their flow. You can also get hardware and software to do eye-tracking and understand what captures the attention of the users.
However, usability should not start from the testing phase. You must have some general idea of what users generally like and do not like when you do development. There are many websites and books outlining generally accepted usability standards and principles.
A: Normally, we test the usability of new interfaces by asking a small selection of users to try out a beta version.
We give a small amount of instruction as to what the new features/screens are supposed to do and let them dive straight into it. It's very interesting to see where they are looking and clicking. We never demo the new features - we only talk about what it does.
If the UI changes are minimal then they go live and we gather feedback from real users. It's only when we are making big changes that we go through usability tests on beta.
When developing new screens it usually helps a hell of a lot to get a colleague sat in front of the UI and ask them what it does. Which areas do they click on? Where are they looking first? What sections are drawing their attention? etc.
A: I agree with Adam; using a very computer illiterate person is very helpful. However, what I've run into before with that is the program I want them to try out just isn't "up their alley" as far as something they would ever want to do.
A good way to start is with a paper prototype. Have specific tasks that you want your "user" to perform and have them do it. For more on paper prototyping, start here.
A: I frequently take any new interface I'm working on to one of our technical support people. They've heard every complaint about interfaces that you could ever imagine, so if anyone is going to think up potential problems, they will.
Also, and I'm not kidding about this, I often take the least computer literate person I know (your mother is often a good choice...but they have to have used a computer before, otherwise it's going to be pointless) and let them loose on the interface with no instruction. If they can't figure out where things are intuitively, then your GUI likely needs work. Remember, Don't make them think! (yes, I know this is for web design, but it applies)
A: There are many ways to test the usability of a system. Please check any available literature you can find. I just want to insist that usability testing is not as hard as you or anyone might think. In a famous paper called "A mathematical model of the finding of usability problems" in INTERACT'93 and CHI'93, J. Nielsen and T. K. Landauer showed that only five users are enough to find most problems in a small system.
If you have no way to read this paper, try this article in the author's website:
http://www.useit.com/alertbox/20000319.html
A: Z'been a while since this question was last active but here goes anyways.
From experience:
*
*Always use objectively measurable criteria to decide if usability is better or not (time to accomplish carefully selected tasks, inactive time, KLM-type metrics); here a key/mouse logger can be a precious ally
*Never go too far ahead before consulting and measuring again with your client (do not cage yourself in with the paper prototype and emerge with the finished product... that just never works)
*read, read, read, try, evolve
*Keep things simple and always remember the task at hand (why the user needs the interface)
*test, test and test again...
*Always get to the bottom of the user's requests. Although the check box the user requests in this particular place may be the best thing to do, it almost always hides a more fundamental flaw
*the system user (the one using it... as opposed to the one paying for it) is your best ally, keep him/her on your side
Never be afraid of refactoring your design and evolving your system. Evolve your metrics and measurements as well; however, be careful in doing so not to break measurement continuity, as it is the best token of objective progress in a VERY subjective world.
recommended reading (other than previously proposed):
*
*Handbook of Usability Testing by Jeff Rubin. A bit extreme, but we toyed with an agile version of his approach and found that if we spent 30 minutes a week with users we would get a LOT of useful feedback while not getting swamped with too much info.
*keep a close watch on the Shneidermans and Nielsens of this world and others that may arise
A: As usability inspection goes, there are several viable methods. They require different amounts of resources in regards to people, analysis, and equipment.
The most common, and easiest to perform is called
Heuristic Evaluation
You basically walk through each screen to check if it conforms to the heuristics set by you, or your customer.
Check this article by Nielsen
Cognitive walkthrough
This method requires you to ask the user to complete steps in the application. You prepare steps for the user to complete. Issues that arise during this walkthrough are taken into consideration when finishing the application.
Check this paper for details.
Think Aloud Analysis
I have used this method mostly in the early stages of prototyping. I let the user talk freely about the system while it is being used. Ask questions about use, design, etc. You can get a really nice view of the general feelings about the system, and what features are lacking.
Check this paper for details.
Interaction analysis
This is a trickier one. I have only used the data-gathering techniques proposed by this one. This technique takes into account context, activities, body language, etc. Interaction analysis is commonly focused on research, not so much on commercial evaluations.
This link takes you to the article.
Keep in mind that these methods take practice to perfect. I would start with HE, continue to CW and THA. And only use Interaction Analysis if you have lots of resources and time.
A: There are a number of methods to test or evaluate the usability of an application, broken down into qualitative and quantitative methods and based on when you are planning to test.
Further, they are categorized based on whether users are involved or experts do the testing.
To name a few methods,
*
*Expert Reviews - user interface or usability experts rate the usability of an interface based on decided heuristics and principles
*Formative usability testing - task flows are taken and users are provided with tasks to be completed. Qualitative feedback is collected based on what the users feel the pain points are during the testing. This form of testing is done during design to provide feedback into the design of the application.
*Summative usability testing - task flows are taken and users are provided with tasks to be completed. The application's performance on efficiency, effectiveness, and satisfaction is measured based on users' completion of tasks.
The important difference is whether you engage a user or an expert to tell you the difference in usability, and furthermore when you do the evaluation - at the end of the project or during the design phases.
A: I'm a strong believer in what I call 3-martini usability testing. When designing a system, imagine that the person who will be using it has just had 3 martinis.
Before handing over the system to colleagues (other programmers, quality assurance, tech support) or usability testers, an informal test with a couple of friends and a bottle of vodka (outside of work, of course) can often prove instructive.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18197",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Query a union table with fields as columns I'm not quite sure if this is possible, or falls into the category of pivot tables, but I figured I'd go to the pros to see.
I have three basic tables: Card, Property, and CardProperty. Since cards do not all have the same properties, and often have multiple values for the same property, I decided to use the union table approach to store data instead of having a really big column structure in my card table.
The property table is a basic keyword/value type table. So you have the keyword ATK and the value assigned to it. There is another property called SpecialType which a card can have multiple values for, such as "Synchro" and "DARK".
What I'd like to do is create a view or stored procedure that gives me the Card Id, Card Name, and all the property keywords assigned to the card as columns and their values in the ResultSet for a card specified. So ideally I'd have a result set like:
ID NAME SPECIALTYPE
1 Red Dragon Archfiend Synchro
1 Red Dragon Archfiend DARK
1 Red Dragon Archfiend Effect
and I could tally my results that way.
I guess even slicker would be to simply concatenate the properties together based on their keyword, so I could generate a ResultSet like:
1 Red Dragon Archfiend Synchro/DARK/Effect
..but I don't know if that's feasible.
Help me stackoverflow Kenobi! You're my only hope.
A: Is this for SQL Server?
If yes then
Concatenate Values From Multiple Rows Into One Column (2000)
Concatenate Values From Multiple Rows Into One Column Ordered (2005+)
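The 2005+ technique in the second link boils down to FOR XML PATH. Applied to the question's schema it would look something like this - the column names are guesses from the description, so treat it as a sketch:

SELECT c.ID, c.Name,
       STUFF((SELECT '/' + p.Value
              FROM CardProperty cp
              JOIN Property p ON p.ID = cp.PropertyID
              WHERE cp.CardID = c.ID
                AND p.Keyword = 'SpecialType'
              FOR XML PATH('')), 1, 1, '') AS SpecialTypes
FROM Card c

That yields the "Synchro/DARK/Effect" style output, one row per card.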
A: Related, but values are kept in separate columns and you have to know your "special types" ahead of time: SQL query to compare product sales by month
Otherwise I would do this with a cursor in a stored procedure, or perform the transformation in the business or presentation layer.
A stab at the SQL if you know all the cases:
Select
    ID, NAME
    ,Synchro + DARK + Effect -- add some substring logic to trim any trailing /'s
from
    (select
        ID
        ,NAME
        --may need to replace max() with min().
        ,MAX(CASE SPECIALTYPE WHEN 'Synchro' THEN SPECIALTYPE + '/' ELSE '' END) Synchro
        ,MAX(CASE SPECIALTYPE WHEN 'DARK' THEN SPECIALTYPE + '/' ELSE '' END) DARK
        ,MAX(CASE SPECIALTYPE WHEN 'Effect' THEN SPECIALTYPE ELSE '' END) Effect
    from
        table
    group by
        ID
        ,NAME) sub1
A: Don't collapse related records by concatenation for storage in your database. It's not exactly best practice.
What you're describing is a pivot table. Pivot tables are hard. I'd suggest avoiding them if at all possible.
Why not just read in your related rows and process them in memory? It doesn't sound like you're going to spend too many milliseconds doing this...
A: One option is to have Properties have a PropertyType, so:
table cards
integer ID | string name | ... (other properties common to all Cards)
table property_types
integer ID | string name | string format | ... (possibly validations)
table properties
integer ID | integer property_type_id | string name | string value
foreign key property_type_id references property_types.ID
table cards_properties
integer ID | integer card_id | integer property_id
foreign key card_id references cards.ID
foreign key property_id references properties.ID
That way, when you want to set a new property value, you can validate it by its type. One type could be "SpecialType" with an enumeration of values.
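Rendered as T-SQL, that layout might look like this (a sketch; names follow the pseudo-schema above):

CREATE TABLE cards (
    ID int PRIMARY KEY,
    name varchar(100) NOT NULL
);

CREATE TABLE property_types (
    ID int PRIMARY KEY,
    name varchar(50) NOT NULL,
    format varchar(50) NULL
);

CREATE TABLE properties (
    ID int PRIMARY KEY,
    property_type_id int NOT NULL REFERENCES property_types (ID),
    name varchar(50) NOT NULL,
    value varchar(255) NULL
);

CREATE TABLE cards_properties (
    ID int PRIMARY KEY,
    card_id int NOT NULL REFERENCES cards (ID),
    property_id int NOT NULL REFERENCES properties (ID)
);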
A: I do have a type/format for my properties table; that way I know how to cast/evaluate when I'm dealing with an integer value. I wasn't sure if it was pertinent to this issue or not.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18216",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: SQL Server 2005 insert trigger not inserting enough records I have a table in a SQL Server 2005 database with a trigger that is supposed to add a record to a different table whenever a new record is inserted. It seems to work fine, but if I execute an Insert Into on the master table that uses a subquery as the source of the values, the trigger only inserts one record in the other table, even though multiple records were added to the master. I want the trigger to fire for each new record added to the master table. Is that possible in 2005?
The insert I'm doing is:
INSERT INTO [tblMenuItems] ([ID], [MenuID], [SortOrder], [ItemReference], [MenuReference], [ConcurrencyID]) SELECT [ID], [MenuID], [SortOrder], [ItemReference], [MenuReference], [ConcurrencyID] FROM [IVEEtblMenuItems]
Here is what the trigger looks like:
CREATE TRIGGER [dbo].[tblMenuItemInsertSecurity] ON [dbo].[tblMenuItems]
FOR INSERT
AS
Declare @iRoleID int
Declare @iMenuItemID int
Select @iMenuItemID = [ID] from Inserted
DECLARE tblUserRoles CURSOR FAST_FORWARD FOR SELECT [ID] from tblUserRoles
OPEN tblUserRoles
FETCH NEXT FROM tblUserRoles INTO @iRoleID
WHILE (@@FETCH_STATUS = 0)
BEGIN
INSERT INTO tblRestrictedMenuItems(
[RoleID],
[MenuItemID],
[RestrictLevel])
VALUES(
@iRoleID,
@iMenuItemID,
1)
FETCH NEXT FROM tblUserRoles INTO @iRoleID
END
CLOSE tblUserRoles
Deallocate tblUserRoles
A: Your trigger is only using the first row from 'Inserted'. This is a common misunderstanding when dealing with SQL triggers for the first time. The trigger fires once per statement, not once per row.
For example if you do the following:-
update products
set title = 'geoff de geoff'
this would update all the products, but a trigger on the product table would only fire once.
The Inserted 'table' you get in the trigger contains all the rows. You must either loop through Inserted with a cursor or, better, join Inserted to the table you are updating.
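For the trigger in the question, the set-based version replaces the entire cursor with a single INSERT...SELECT that cross joins inserted against tblUserRoles - a sketch:

CREATE TRIGGER [dbo].[tblMenuItemInsertSecurity] ON [dbo].[tblMenuItems]
FOR INSERT
AS
    -- One row per (new menu item, role) pair - handles multi-row inserts correctly.
    INSERT INTO tblRestrictedMenuItems ([RoleID], [MenuItemID], [RestrictLevel])
    SELECT r.[ID], i.[ID], 1
    FROM inserted i
    CROSS JOIN tblUserRoles r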
A: Please look up multi-row considerations for triggers.
What is with the cursor inside a trigger? Learn to program set-based; cursors are evil in T-SQL and should only be used for maintenance work like defragmenting indexes or updating stats on a bunch of tables.
A: The trigger only fires once for each INSERT statement executed - not once for each record inserted.
In your trigger you can access the 'virtual' table called inserted for details of the records inserted.
ie:
SELECT COUNT(*) FROM inserted
Will return the number of inserted records.
A: I just want to second @Gordon Bell on his answer...
"Catch" the values the very moment they are being inserted. You do not really need the cursor in this situation (or maybe you have a reason?).
A simple TRIGGER might be all you need:
http://dbalink.wordpress.com/2008/06/20/how-to-sql-server-trigger-101/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18223",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: What's the best way to securely publish a site post-build? So, in your experience, what's the best way? Is there a secure way that's also scriptable/triggerable in a build automation tool?
Edit: I should mention this is Windows/.NET and I'll be deploying to IIS 6.
A: For some projects I use Capistrano to push out to live. It is built on top of ruby and makes deploy script writing super easy and uses ssh.
On other projects I have a tiny deploy app that uses bash to do an svn export to a temporary directory and then rsync it over to the live server. You can make rsync use ssh.
I greatly prefer the Capistrano method, even if your project isn't in ruby/rails.
A: This seems like the sort of thing that could be done easily with SFTP. Take a look at PuTTY (psftp and pscp) or WinSCP for Windows, or rsync and OpenSSH for Unixes.
A: Make a copy of your live site directory, use rsync to update that copy with your latest version, then rename the live and updated directories so that the updated version is now live.
In bash:
#!/bin/bash
set -e
cp -R /var/livesite /var/newversion
rsync -a user@devserver:/var/readytogolive/ /var/newversion/
mv /var/livesite /var/oldlivesite
mv /var/newversion /var/livesite
Voila!
Edit: @Ted Percival - That's a good idea. I didn't even know about "set -e". Updated script. Edit: updated again at Ted's suggestion (although I think it would still work if somehow the cp command failed, and if cp fails you probably have more serious problems.)
A: @Neall, I'd add a set -e on the second line, because you don't want the live site being replaced if the rsync fails for any reason. set -e causes the script to exit if any of its commands fail.
Edit: The set -e should be the first thing in the script, right after #!/bin/bash.
A: I'll second the recommendation for Capistrano, though if you're looking for a GUI-based solution you could try the Webistrano front end. Clean, ssh-based, sane deployment and rollback semantics and easy scripting and extensibility via ruby.
A: You could always write a small client/server app that encrypts at the source, pushes the files, and then decrypts at the destination. That's a little bit of work, but probably a trivial amount. And it's scriptable as long as your automation tool supports executing something in the file system (which I think all do).
The only downside is that you may not be able to get meaningful error messages on failure in your integration environment without a bit more work on your part (though depending on your setup, this could be as simple as sending error messages to stdout).
A: Hm, around here we use a staging "server" for testing purposes on the live environment (actually, it's an Apache virtual host on the production server) and Araxis Merge (a really smart line-by-line file comparison tool) to sync development and staging.
Once it's tested, just replace the files on the production webroot :)
/mp
A: On a freelance job I did, we set up three separate environments.
*
*A Dev server that ran continuous builds using CruiseControl. Any check-in would trigger a build. QA testing was done here.
*A Test server, on which user acceptance testing was done.
*Production.
The workflow was as follows:
*
*Developer checks in changes to SourceControl.
*CruiseControl builds and deploys the build to Dev.
*Dev is QA'ed
*After passing QA, a robocopy script is run that deploys the Dev build to Test.
*Test is UAT'ed
*After Test passes, a robocopy script is run that deploys Test to PRD.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18224",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Creation Date of Compiled Executable (VC++ 2005) The creation date of an executable linked in VS2005 is not set to the real creation-date of the .exe file. Only a complete re-build will set the current date, a re-link will not do it. Obviously the file is set to some date, which is taken from one of the project-files.
So: is there a way to force the linker to set the creation-date to the real link-date?
A: Delete the executable as part of a pre-link event.
Edit:
Hah, I forgot about Explorer resetting the creation date if you name a file exactly the same as a file that was recently deleted.
Why are you keying off the creation date anyway?
A: A complete rebuild will delete that file forcing the linker to create it, hence the reason it gets a new creation date. You could try disabling incremental linking under project properties (Linker | General). If that doesn't do it you could add a build event to delete the exe file and force it to create a new file each time. Both of these things could increase your build time.
A: Deleting the executable doesn't do the job. That's the problem. Also, I could not identify any project file whose datetime was the same as the later-linked executable. That lets me conclude that the 'creation date' is information taken from within some project file.
The project has 400,000 lines, so a full build is not an option.
A: What about using something like DirDate (or writing a little utility yourself) to set the creation date, and calling it from the post-build step?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18250",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Unit testing kernel drivers I'm looking for a testing framework for the Windows kernel environment. So far, I've found cfix. Has any one tried it? Are there alternatives?
A: Being the author of cfix, I might be a little biased here -- but as a matter of fact, I am currently not aware of any other unit-testing framework for NT kernel mode.
If you should experience any problems with cfix, feel free to contact me.
A: Microsoft Static Driver Verifier is described as "a compile-time tool that explores code paths in a device driver by symbolically executing the source code. SDV is a unit-testing tool for Microsoft Windows device drivers based on the Windows Driver Model (WDM)."
Is that what you're looking for?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18253",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Getting stack traces on Unix systems, automatically What methods are there for automatically getting a stack trace on Unix systems? I don't mean just getting a core file or attaching interactively with GDB, but having a SIGSEGV handler that dumps a backtrace to a text file.
Bonus points for the following optional features:
*
*Extra information gathering at crash time (eg. config files).
*Email a crash info bundle to the developers.
*Ability to add this in a dlopened shared library
*Not requiring a GUI
A: If you are on systems with the BSD backtrace functionality available (Linux, OS X 10.5, BSD of course), you can do this programmatically in your signal handler.
For example (backtrace code derived from IBM example):
#include <execinfo.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
void sig_handler(int sig)
{
void * array[25];
int nSize = backtrace(array, 25);
char ** symbols = backtrace_symbols(array, nSize);
for (int i = 0; i < nSize; i++)
{
puts(symbols[i]);
}
free(symbols);
signal(sig, &sig_handler);
}
void h()
{
kill(0, SIGSEGV);
}
void g()
{
h();
}
void f()
{
g();
}
int main(int argc, char ** argv)
{
signal(SIGSEGV, &sig_handler);
f();
}
Output:
0 a.out 0x00001f2d sig_handler + 35
1 libSystem.B.dylib 0x95f8f09b _sigtramp + 43
2 ??? 0xffffffff 0x0 + 4294967295
3 a.out 0x00001fb1 h + 26
4 a.out 0x00001fbe g + 11
5 a.out 0x00001fcb f + 11
6 a.out 0x00001ff5 main + 40
7 a.out 0x00001ede start + 54
This doesn't get bonus points for the optional features (except not requiring a GUI), however, it does have the advantage of being very simple, and not requiring any additional libraries or programs.
A: Here is an example of how to get some more info using a demangler. As you can see, this one also logs the stack trace to a file.
#include <iostream>
#include <sstream>
#include <string>
#include <fstream>
#include <cstdlib>
#include <execinfo.h>
#include <signal.h>
#include <cxxabi.h>
void sig_handler(int sig)
{
std::stringstream stream;
void * array[25];
int nSize = backtrace(array, 25);
char ** symbols = backtrace_symbols(array, nSize);
for (int i = 0; i < nSize; i++) {
int status;
char *realname;
std::string current = symbols[i];
size_t start = current.find("(");
size_t end = current.find("+");
realname = NULL;
if (start != std::string::npos && end != std::string::npos) {
std::string symbol = current.substr(start+1, end-start-1);
realname = abi::__cxa_demangle(symbol.c_str(), 0, 0, &status);
}
if (realname != NULL)
stream << realname << std::endl;
else
stream << symbols[i] << std::endl;
free(realname);
}
free(symbols);
std::cerr << stream.str();
std::ofstream file("/tmp/error.log");
if (file.is_open()) {
if (file.good())
file << stream.str();
file.close();
}
signal(sig, &sig_handler);
}
A: Derek's solution is probably the best, but here's an alternative anyway:
Recent Linux kernel versions allow you to pipe core dumps to a script or program. You could write a script to catch the core dump, collect any extra information you need and mail everything back.
This is a global setting though, so it'd apply to any crashing program on the system. It will also require root rights to set up.
It can be configured through the /proc/sys/kernel/core_pattern file. Set that to something like ' | /home/myuser/bin/my-core-handler-script'.
The Ubuntu people use this feature as well.
A: FYI,
the suggested solution (using backtrace_symbols in a signal handler) is dangerously broken. DO NOT USE IT -
Yes, backtrace and backtrace_symbols will produce a backtrace and translate it to symbolic names, however:
*
*backtrace_symbols allocates memory using malloc and you use free to free it - If you're crashing because of memory corruption your malloc arena is very likely to be corrupt and cause a double fault.
*malloc and free protect the malloc arena with a lock internally. You might have faulted in the middle of a malloc/free with the lock taken, which will cause these functions, or anything that calls them, to deadlock.
*You use puts which uses the standard stream, which is also protected by a lock. If you faulted in the middle of a printf you once again have a deadlock.
*On 32-bit platforms (e.g. your normal PC of 2 years ago), the kernel will plant a return address to an internal glibc function instead of your faulting function in your stack, so the single most important piece of information you are interested in - in which function did the program fault - will actually be corrupted on those platforms.
So, the code in the example is the worst kind of wrong - it LOOKS like it's working, but it will really fail you in unexpected ways in production.
BTW, interested in doing it right? check this out.
Cheers,
Gilad.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18265",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Recommend a tool to manage Extended Properties in SQL Server 2005 Server Management Studio tends to be a bit unintuitive when it comes to managing Extended Properties, so can anyone recommend a decent tool that improves the situation?
One thing I would like to do is to have templates that I can apply to objects, thus standardising the nomenclature and content of the properties applied to objects.
A: Take a look at Data Dictionary Creator, an open source tool I wrote to make it easier to edit extended properties. It includes the ability to export the information in a variety of formats, as well.
http://www.codeplex.com/datadictionary
A: You might also think about having a nice re-runnable script that lets you maintain the extended properties. The system stored procedures for doing this work well, but they are a pain, so I wrap them with my own stored procedure so I can more easily deal with them.
For example, below is a stored procedure targeted at column level extended properties that a) checks to see if the extended property already exists, and b) if so drops it, and c) then adds it.
This lets me maintain a clean, re-runnable script (which is critical for automated build processes) of simple one-liners to add the extended properties (column-level only - you'd need to modify this one or write a similar one for other object types).
Here is the sproc:
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[snap_xpColumn_addUpdate]') AND type in (N'P', N'PC'))
DROP PROCEDURE [dbo].snap_xpColumn_addUpdate
GO
CREATE PROCEDURE [dbo].[snap_xpColumn_addUpdate]
@TableName NVARCHAR(255),
@ColumnName NVARCHAR(255),
@ExtPropName NVARCHAR(255),
@ExtPropValue NVARCHAR(255),
@SchemaOwner NVARCHAR(255) = 'dbo'
AS
IF EXISTS(SELECT * FROM ::fn_listextendedproperty(@ExtPropName,'SCHEMA',@SchemaOwner,
'TABLE',@TableName,'COLUMN',@ColumnName))
BEGIN
-- drop it
EXEC sys.sp_dropextendedproperty @name=@ExtPropName,
@level0type=N'SCHEMA',
@level0name=@SchemaOwner,
@level1type=N'TABLE',
@level1name=@TableName,
@level2type=N'COLUMN',
@level2name=@ColumnName
END
-- add it
EXEC sys.sp_addextendedproperty @name=@ExtPropName,
@value=@ExtPropValue,
@level0type=N'SCHEMA',
@level0name=@SchemaOwner,
@level1type=N'TABLE',
@level1name=@TableName,
@level2type=N'COLUMN',
@level2name=@ColumnName
GO
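With the wrapper in place, the actual maintenance script becomes a list of re-runnable one-liners like this (the table, column, and description are made up for illustration):

EXEC dbo.snap_xpColumn_addUpdate
    @TableName = 'Customer',
    @ColumnName = 'Email',
    @ExtPropName = 'MS_Description',
    @ExtPropValue = 'Primary contact e-mail address';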
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18272",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Best Way to Begin Learning Web Application Design I'm a long time hobbyist programmer interested in getting into web application development. I have a fair amount of personal experience with various non-web languages, but have never really branched over to web applications.
I don't usually have any issues learning new languages or technologies, so I'm not worried about which is the "best" language or web stack to work with. Instead, I'd like to know of any recommended resources (books, articles, web sites, maybe even college courses) that discuss web application design: managing and optimizing server interaction, security concerns, scalability, and other topics that fall under design rather than implementation.
What would you recommend for a Standalone Application Developer wanting to branch out into Web Development?
A: A lot of languages have web counterparts. JSP for Java, Rails for Ruby, Django for Python, etc. That might be a lead.
If you want to go for the platform with arguably the biggest user base (and with that, the biggest pile of tutorials and examples), go for PHP.
I strongly advise on looking into various frameworks though. For every web-oriented language there's bound to be a lot of resources that take away the trouble of writing all the low-level plumbing code, so you can focus on the stuff that matters. Personally I almost exclusively use .NET, but I've heard about a bunch of nice PHP frameworks, like the Zend platform and CakePHP (for MVC development).
If you intend to also use javascript in your applications to give that nice web 2.0 feel to your applications, please, use a library that hides the messy browser details. You'll go nuts if you try to do all the cross-browser scripts yourself. Some good ones are Prototype and jQuery.
A: There is a wide variety of web application languages you could get into. The ones I have most experience with (and therefore will be talking about here) are PHP, eRuby and Ruby on Rails. All of these have good tutorials available on the internet - I'll link to some of them below.
Which to choose depends on exactly what you're looking to do. Using PHP and eRuby you have to do most things yourself - whereas Ruby on Rails will do lots of stuff for you (useful, but can also be dangerous if you don't know what you're doing). Ruby on Rails is good for doing database-related things - for example the standard CRUD (Create, Read, Update, Delete) application. The standard kind of app Ruby on Rails (often abbreviated to RoR) tutorials teach you is a blog application (Create entries, Read entries, Update entries, Delete entries) or an Address Book application. It is possible to do many of these sorts of applications almost in one line of code - using RoR's 'scaffold' function.
PHP and eRuby make you do more of the work yourself - but this can be better in some situations. PHP is more well known and used than eRuby, but I like the Ruby language so I tend to like using eRuby. These are both good for doing simple applications (like contact forms on websites) or more complex applications (phpBB, a piece of forum software, is written in PHP).
As for which one to choose - I'd have a play with them and see what you think. Try running through the first few bits of a tutorial with each and see how whether you like it or not.
Here come the links to various tutorials:
PHP
*
*PHP 101
*PHP Intro from W3Schools
eRuby
*
*Beginning eRuby - not great, but shows you how you can embed it in HTML
*Try Ruby in your Browser - helps you learn Ruby which you need to know for eRuby
Ruby on Rails
*
*Rolling with Ruby on Rails - the latest 'revisited' version for the latest version of RoR
*Rolling with Ruby on Rails part 2
There are a few tutorials to get you started. Some of these take you through installing the necessary software (webserver and anything else needed - eg. php or ruby) and some don't. A good way to get Apache (webserver), MySQL (db) and PHP installed on windows is to use XAMPP. If you're on linux then apache, mysql and php will be in your package repositories and there may be distro specific guides to setting them up.
A: Eloquent JavaScript and AppJet offer great tutorials that allow you to follow along while you learn.
Once you cover all the basics, Ajaxian should answer many of the questions you have about application design, etc. Not only do they post many excellent articles on these topics, but you should explore many of the sites they link to, as these sites usually also provide a wealth of info.
When it comes to server interactions, know your options. Ajax isn't all there is. Research technologies like Comet and JSON-RPC, as well as looking at various server-side frameworks that provide easy access to JavaScript such as DWR, Jayrock, or any tool that exposes your functions to JavaScript using whatever language you choose to use on the server.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18284",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: How Do You Secure database.yml? Within Ruby on Rails applications database.yml is a plain text file that stores database credentials.
When I deploy my Rails applications I have an after deploy callback in my Capistrano
recipe that creates a symbolic link within the application's /config directory to the database.yml file. The file itself is stored in a separate directory that's outside the standard Capistrano /releases directory structure. I chmod 400 the file so it's only readable by the user who created it.
*
*Is this sufficient to lock it down? If not, what else do you do?
*Is anyone encrypting their database.yml files?
A: Better late than never, I am posting my answer as the question still remains relevant. For Rails 5.2+, it is possible to secure any sensitive information using an encrypted file credentials.yml.enc.
Rails stores secrets in config/credentials.yml.enc, which is encrypted and hence cannot be edited directly. We can edit the credentials by running the following command:
$ EDITOR=nano rails credentials:edit
secret_key_base: 3b7cd727ee24e8444053437c36cc66c3
production_dbpwd: my-secret-password
Now, these secrets can be accessed using Rails.application.credentials.
So your database.yml will look like this:
production:
adapter: mysql
database: my_db
username: db_user
password: <%= Rails.application.credentials.production_dbpwd %>
You can read more about this here
A: The way I have tackled this is to put the database password in a file with read permissions only for the user I run my application as. Then, in database.yml I use ERB to read the file:
production:
adapter: mysql
database: my_db
username: db_user
password: <%= begin IO.read("/home/my_deploy_user/.db") rescue "" end %>
Works a treat.
A: Even if you secure the database.yml file, people can still write code that uses the same credentials if they can change the code of your application.
Another way to look at this is: does the web application have too much access to the database? If so, lower the permissions. Give the application just enough permissions. This way an attacker can only do what the web application would be able to do.
A: You'll also want to make sure that your SSH system is well secured to prevent people from logging in as your Capistrano bot. I'd suggest restricting access to password-protected key pairs.
Encrypting the .yml file on the server is useless since you have to give the bot the key, which would be stored . . . on the same server. Encrypting it on your machine is probably a good idea. Capistrano can decrypt it before sending.
A: Take a look at this github solution: https://github.com/NUBIC/bcdatabase. bcdatabase provides an encrypted store where the passwords can be kept separated from the yaml files.
bcdatabase
bcdatabase is a library and utility which provides database configuration parameter management for Ruby on Rails applications. It provides a simple mechanism for separating database configuration attributes from application source code so that there's no temptation to check passwords into the version control system. And it centralizes the parameters for a single server so that they can be easily shared among multiple applications and easily updated by a single administrator.
A: If you're very concerned about the security of the yml file, I have to ask: is it stored in your version control? If so, that's another point where an attacker can get at it. If you're doing checkout/checkin over non-SSL, someone could intercept it.
Also, with some version control systems (svn, for example), even if you remove it, it's still there in the history. So, even if you removed it at some point in the past, it's still a good idea to change the passwords.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18290",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "50"
} |
Q: Unit testing in Delphi - how are you doing it? I'm wondering how the few Delphi users here are doing unit testing, if any? Is there anything that integrates with the IDE that you've found works well? If not, what tools are you using and do you have or know of example mini-projects that demonstrate how it all works?
Update:
I forgot to mention that I'm using BDS 2006 Pro, though I occasionally drop into Delphi 7, and of course others may be using other versions.
A: DUnit2 is available from http://members.optusnet.com.au/~mcnabp/
DUnit2 is modified more regularly than the original dunit. It also works on Delphi 2009.
Try: http://sourceforge.net/projects/dunit2/ - it moved as the original author Peter McNab passed away several years ago. Still some activity on the dunit mailing list.
A: You could take a look at the unit testing classes available in our SynCommons open source unit. It's used in our Open-Source framework for all regression tests. It's perhaps not the best, but it's worth taking a look at it.
See http://blog.synopse.info/post/2010/07/23/Unit-Testing-light-in-Delphi
In order to implement an unit test, you just declare a new test case by creating a class like this:
type
TTestNumbersAdding = class(TSynTestCase)
published
procedure TestIntegerAdd;
procedure TestDoubleAdd;
end;
procedure TTestNumbersAdding.TestDoubleAdd;
var A,B: double;
i: integer;
begin
for i := 1 to 1000 do
begin
A := Random;
B := Random;
CheckSame(A+B,Adding(A,B));
end;
end;
Then you create a test suit, and run it.
In the up-to-come 1.13 version, there is also a new logging mechanism with stack trace of any raised exception and such, just like MadExcept, using .map file content as source.
It's now used by the unit testing classes, so that any failure will create an entry in the log with the source line, and stack trace:
C:\Dev\lib\SQLite3\exe\TestSQL3.exe 0.0.0.0 (2011-04-13)
Host=Laptop User=MyName CPU=2*0-15-1027 OS=2.3=5.1.2600 Wow64=0 Freq=3579545
TSynLogTest 1.13 2011-04-13 05:40:25
20110413 05402559 fail TTestLowLevelCommon(00B31D70) Low level common: TDynArray "" stack trace 0002FE0B SynCommons.TDynArray.Init (15148) 00036736 SynCommons.Test64K (18206) 0003682F SynCommons.TTestLowLevelCommon._TDynArray (18214) 000E9C94 TestSQL3 (163)
The difference between a test suite without logging and a test suite with logging is only this:
procedure TSynTestsLogged.Failed(const msg: string; aTest: TSynTestCase);
begin
inherited;
with TestCase[fCurrentMethod] do
fLogFile.Log(sllFail,'%: % "%"',
[Ident,TestName[fCurrentMethodIndex],msg],aTest);
end;
The logging mechanism can do much more than just log the testing: you can log recursive calls of methods, select the information you want to appear in the logs, profile the application from the customer side, write published properties, TList or TCollection content as JSON into the log content, and so on...
The first time the .map file is read, a .mab file is created that will contain all the symbol information needed. You can send the .mab file with the .exe to your client, or even embed its content into the .exe. This .mab file is optimized: a 927,984 byte .map compresses into a 71,943 byte .mab file.
So this unit could be recognized as the natural child of a DUnit and MadExcept wedding, in pure open source. :)
Additional information is available on our forum. Feel free to ask. Feedback and feature requests are welcome! Works from Delphi 6 up to XE.
A: There's a new unit testing framework for modern Delphi versions in development: https://github.com/VSoftTechnologies/DUnitX
A: DUnit is an xUnit-type unit testing framework to be used with Win32 Delphi. Since Delphi 2005, DUnit is integrated to a certain extent into the IDE. Other DUnit integration tools for the Delphi IDE can be found here. DUnit comes with documentation and examples.
A: Usually I create a Unit test project (File->New->Other->Unit Test->Test Project). It contains the stuff I need so it's been good enough so far.
I use Delphi 2007, so I don't really know if this is available in 2006.
A: We do unit testing of all logic code using DUnit and use the code coverage profiler included in AQTime to check that all paths through the code are executed by the tests.
A: We have two approaches. First, we have DUnit tests that are run by the developers - these make sure that the code that has just been changed still works as before. The other approach is to use CruiseControl.NET to build executables and then run the DUnit tests every time a change is made, to ensure that there are no unintended consequences of the change.
Much of our codebase has no tests, so the automatic tests are a case of continuous development in order to ensure our applications work as we think they should.
A: There are some add-ons for DUnit, maybe this is worth a new entry on SO. Two which I can put on the list now are
*
*FastMM4 integration: unit tests will automatically detect memory leaks (and other things); works with DUnit 9.3 and newer
*OpenCTF is a 'component test framework' based on DUnit. It creates the tests dynamically for all components in the project's forms, frames and datamodules, and tests them using customized rules (open source)
A: We tried to use DUnit with Delphi 5, but it didn't work well. Especially when implementing COM interfaces, we found many dependencies needed to set up all the test infrastructure. I don't know if the test support has improved in newer versions.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18291",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "60"
} |
Q: What are some good SSH Servers for windows? Trying to setup an SSH server on Windows Server 2003. What are some good ones? Preferably open source. I plan on using WinSCP as a client so a server which supports the advanced features implemented by that client would be great.
A: I agree that cygwin/OpenSSH is the best choice, but its setup can be involved to say the least. Here is a document to get you started though: Installing OpenSSH
A: I've been using Bitvise SSH Server and it's really great. From install to administration it does it all through a GUI, so you won't be putting together an sshd_config file. Plus, if you use their client, Tunnelier, you get some bonus features (like mapping shares, port forwarding set up server-side, etc.). If you don't use their client, it will still work with the open source SSH clients.
It's not Open Source and it costs $39.95, but I think it's worth it.
UPDATE 2009-05-21 11:10: The pricing has changed. The current price is $99.95 per install for commercial, but now free for non-commercial/personal use. Here is the current pricing.
A: I've been using Bitvise SSH Server for a number of years. It is a wonderful product and it is easy to setup and maintain. It gives you great control over how users connect to the server with support for security groups.
A: copssh - OpenSSH for Windows
http://www.itefix.no/i2/copssh
Packages essential Cygwin binaries.
A: OpenSSH is a contender. It looks like it hasn't been updated in a while though.
It's the de facto choice in my opinion. And yes, running under Cygwin is really the nicest method.
A: VanDyke VShell is the best Windows SSH server I've ever worked with. It is kind of expensive though ($250). If you want a free solution, freeSSHd works okay. The Cygwin solution is always an option; I've found, however, that it is a lot of work and overhead just to get SSH.
A: You can run OpenSSH on Cygwin, and even install it as a Windows service.
I once used it this way to easily add backups of a Unix system - it would rsync a bunch of files onto the Windows server, and the Windows server had full tape backups.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18292",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "76"
} |
Q: How do I change the title bar icon in Adobe AIR? I cannot figure out how to change the title bar icon (the icon in the furthest top left corner of the application) in Adobe AIR. It is currently displaying the default 'Adobe AIR' red icon.
I have been able to change it in the system tray, however.
A: Does the following help?
http://groups.google.com/group/chennai-flex-user-group/browse_thread/thread/cffb9ab56450c28e
A: The first link shows how to change the taskbar icon; the second shows the application icon, which I believe is used on the desktop. I am going to recompile and install the application and see if it works.
Edit: Yea, the one that changes the Desktop Icon also changes the Title Bar icon. It's in the app.xml file.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18298",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Web server farms with IIS? Basic info: Can somebody point me to a resource that explains how to go about having 2+ IIS web servers clustered (or a web farm - not sure what it's called)?
All I need is something basic, an overview of how and where to start.
Can't seem to find anything...
A: This MSDN magazine article has a good overview of the technologies involved:
http://msdn.microsoft.com/en-us/magazine/cc500561.aspx
A: Microsoft have articles on TechNet about clustering IIS using Network Load Balancing. You can do this more simply than using special hardware load balancing.
For hardware load balancing you place a device in front of the web servers and it manages the load. Each device is different, so you would want to check the manufacturer's guides and compatibility.
You should also check that your application does not have problems with load balancing. The sticky session problem is just one problem you should find out more about.
A: http://www.iis.net/download/applicationrequestrouting
http://www.iis.net/download/webfarmframework
http://www.iis.net/download/webdeploy
A: What you're after is called Load Balancing.
http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/0baca8b1-73b9-4cd2-ab9c-654d88d05b4f.mspx?mfr=true
There's a very good book on the topic:
http://www.amazon.co.uk/Windows-Clustering-Balancing-Osborne-Networking/dp/0072226226/ref=sr_1_1?ie=UTF8&s=books&qid=1219249588&sr=8-1
A: A couple of good articles that I found and wanted to share, for those who are looking nowadays for information about server farms - load balancing and Application Request Routing - are these:
HTTP Load Balancing using Application Request Routing:
https://learn.microsoft.com/en-us/iis/extensions/configuring-application-request-routing-arr/http-load-balancing-using-application-request-routing.
Overview - Build a Web Farm with IIS Servers: https://learn.microsoft.com/en-us/iis/web-hosting/scenario-build-a-web-farm-with-iis-servers/overview-build-a-web-farm-with-iis-servers
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18305",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Mixing 32 bit and 16 bit code with nasm This is a low-level systems question.
I need to mix 32 bit and 16 bit code because I'm trying to return to real-mode from protected mode. As a bit of background information, my code is doing this just after GRUB boots so I don't have any pesky operating system to tell me what I can and can't do.
Anyway, I use [BITS 32] and [BITS 16] in my assembly to tell nasm which types of operations it should use, but when I test my code using bochs it looks like for some operations bochs isn't executing the code that I wrote. It looks like the assembler is sticking in extra 0x66 and 0x67 prefixes, which confuses bochs.
So, how do I get nasm to successfully assemble code where I mix 32 bit and 16 bit code in the same file? Is there some kind of trick?
A: The problem turned out to be that I wasn't setting up my descriptor tables correctly. I had one bit flipped wrong so instead of going to 16-bit mode I was going to 32-bit mode (with segments that happened to have a limit of one meg).
Thanks for the suggestions!
Terry
A: The 0x66 and 0x67 are opcodes that are used to indicate that the following opcode should be interpreted as a non-default bitness. More specifically, (and according to this link),
"When NASM is in BITS 16 mode, instructions which use 32-bit data are prefixed with an 0x66 byte, and those referring to 32-bit addresses have an 0x67 prefix. In BITS 32 mode, the reverse is true: 32-bit instructions require no prefixes, whereas instructions using 16-bit data need an 0x66 and those working on 16-bit addresses need an 0x67."
This suggests that it's bochs that's at fault.
A: You weren't kidding about this being low-level!
Have you checked the generated opcodes / operands to make sure that nasm is honoring your BITS directives correctly? Also check to make sure the jump targets are correct - maybe nasm is using the wrong offsets.
If it's not a bug in nasm, maybe there is a bug in bochs. I can't imagine that people switch back to 16-bit mode from 32-bit mode very often anymore.
A: If you're in real mode your default size is implicitly 16 bits, so you should use BITS 16 mode. This way if you need a 32-bit operand size you add the 0x66 prefix, and for a 32-bit address size you add the 0x67 prefix.
Look at the Intel IA-32 Software Developer's Manual, Volume 3, Chapter 16 (MIXING 16-BIT AND 32-BIT CODE; the chapter number might change according to the edition of the book):
Real-address mode, virtual-8086 mode, and SMM are native 16-bit modes.
The BITS 32 directive will only confuse the assembler if you use it outside of Protected Mode or Long Mode.
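For anyone attempting the same transition, here is a rough, untested sketch of the usual protected-to-real stub. The selector values below are placeholders; they must match whatever 16-bit code and data descriptors your own GDT actually defines:
CODE16_SEL equ 0x18   ; example 16-bit code segment selector (placeholder)
DATA16_SEL equ 0x20   ; example 16-bit data segment selector (placeholder)

[BITS 32]
    ; Far-jump into a 16-bit protected-mode code segment first, so the
    ; CPU's cached CS attributes become 16-bit before leaving protected mode.
    jmp CODE16_SEL:pm16

[BITS 16]
pm16:
    mov ax, DATA16_SEL      ; 16-bit data segment with a 64 KiB limit
    mov ds, ax
    mov ss, ax
    mov eax, cr0
    and eax, 0xFFFFFFFE     ; clear the PE bit in CR0
    mov cr0, eax
    jmp 0x0000:realmode     ; far jump reloads CS with a real-mode value
realmode:
    xor ax, ax
    mov ds, ax              ; reload the segment registers for real mode
    mov ss, ax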
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18324",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: TCL development: debug environment I like a bit of TiVo hacking in spare time - TiVo uses a Linux variant and TCL. I'd like to write TCL scripts on my Windows laptop, test them and then FTP them over to my TiVo.
Can I have a recommendation for a TCL debugging environment for Windows, please?
A: Komodo from Activestate is a good IDE for Windows/Linux. There is a trial version - I am not sure if there is a free version after trial though.
A: I'm not sure that you need a debugging environment as such. Just grab the binary release from ActiveState (http://www.activestate.com/Products/activetcl/index.mhtml) and run your scripts from the command prompt (C:/blahblah/tclsh myprog.tcl) and see what it spits out.
I'd advise against building it from source because it doesn't really gain you anything.
A: This wiki page discusses tools for developing and debugging in Tcl. In particular, I've been enamoured with tkinspect (mentioned on that wiki page, with its own page elsewhere on the wiki), which allows one, in a Linux or other Unix X environment, to interact with a running Tk application to attempt some debugging. Of course, ActiveState's commercial product "Tcl Dev Kit" has a debugger. There are other debuggers - free and not so free - discussed on the wiki as well.
A: There is now a Tcl plugin for Netbeans, which has a debugging feature. Here are some screenshots: http://wiki.tcl.tk/28657
A: If you are looking for a Debugger with editing possibilities,
RamDebugger is also a nice tool.
A: ActiveState has a Tcl development kit (not free, but cheap) that I've used in the past. It even worked with our embedded tcl interpreter.
http://www.activestate.com/tcl_dev_kit/
A: I've found this breakpoint setter from the Tcl wiki (from Richard Suchenwirth) to be handy. Once the interpreter sees a call to this, say "bp beforehairyfunction", it pauses and gives you a tclsh prompt.
proc bp {{s {}}} {
    # A global skip list lets you silence individual breakpoints by name.
    if {![info exists ::bp_skip]} {
        set ::bp_skip [list]
    } elseif {[lsearch -exact $::bp_skip $s] >= 0} return
    # Work out who called us, for the prompt.
    if {[catch {info level -1} who]} {set who ::}
    while 1 {
        puts -nonewline "$who/$s> "; flush stdout
        gets stdin line
        if {$line == "c"} {puts "continuing.."; break}
        if {$line == "i"} {set line "info locals"}
        # Evaluate whatever was typed, in the caller's scope.
        catch {uplevel 1 $line} res
        puts $res
    }
}
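A minimal usage sketch (the proc name and values here are invented for illustration):
proc hairy {x} {
    set doubled [expr {$x * 2}]
    bp "before-return"   ;# pauses here: type 'i' to list locals, 'c' to continue
    return $doubled
}
puts [hairy 21]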
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18326",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: I would like some tips for debugging WCF Web Service exceptions I've created a WCF service and when I browse to the endpoint I get the following fault:
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
<s:Body>
<s:Fault>
<faultcode
xmlns:a="http://schemas.microsoft.com/ws/2005/05/addressing/none">
a:ActionNotSupported
</faultcode>
<faultstring xml:lang="en-GB">
The message with Action '' cannot be processed at the receiver,
due to a ContractFilter mismatch at the EndpointDispatcher.
This may be because of either a contract mismatch (mismatched
Actions between sender and receiver) or a binding/security
mismatch between the sender and the receiver. Check that sender
and receiver have the same contract and the same binding
(including security requirements, e.g. Message, Transport, None).
</faultstring>
</s:Fault>
</s:Body>
</s:Envelope>
I've fixed the problem but didn't enjoy the experience! Does anyone have any tips or tools for debugging problems like this?
A: I've found SvcTraceViewer.exe to be the most valuable tool when it comes to diagnosing WCF errors.
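To give SvcTraceViewer something to open, you first switch on WCF tracing in the service's config file. A typical setup looks roughly like this (the listener name and log path are placeholders you would change):
<system.diagnostics>
  <sources>
    <source name="System.ServiceModel"
            switchValue="Information, ActivityTracing"
            propagateActivity="true">
      <listeners>
        <!-- SvcTraceViewer.exe opens the resulting .svclog file directly -->
        <add name="xmlTrace"
             type="System.Diagnostics.XmlWriterTraceListener"
             initializeData="C:\logs\Traces.svclog" />
      </listeners>
    </source>
  </sources>
</system.diagnostics>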
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18348",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How would I get started writing my own firewall? There is precious little on Google on this subject other than people asking this very same question.
How would I get started writing my own firewall?
I'm looking to write one for the windows platform but I would also be interested in this information for other operating systems too.
A: This question is alarmingly similar to those asking how to write an encryption algorithm. The answers to both should end in gentle reminders about industry standard solutions that already:
*
*embody years of experience and constant improvement,
*are probably far more secure than any home-grown solution, and
*account for ancillary requirements, such as efficiency.
A firewall must inspect every packet efficiently and accurately, and it therefore runs within the OS kernel or network stacks. Errors or inefficiencies jeopardize the security and performance of the entire machine and those downstream.
Building your own low-level firewall is an excellent exercise that will provide an education across many technologies. But for any real application, it's much safer and smarter to build a shell around the existing firewall API. Under Windows, the netsh command will do this; Linux uses netfilter and iptables. Googling any of these will point you to lots of theory, examples, and other helpful information.
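As a taste of the shell-around-the-existing-firewall approach (the rule name and port below are made up for illustration), blocking inbound telnet looks like this on each platform:
:: Windows (netsh advfirewall, Vista and later)
netsh advfirewall firewall add rule name="Block Telnet" dir=in action=block protocol=TCP localport=23
# Linux (netfilter/iptables)
iptables -A INPUT -p tcp --dport 23 -j DROP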
So, to get started, I'd brush up on TCP/IP (specifically, the header information: ports and protocols), then learn about the various types of attacks and how to detect them. Learn about each operating system of interest and how it interacts with the network stacks. Finally, think about administration and logging: how will you configure your firewall and trace packets through it to ensure it's doing what you want it to do?
Good luck!
A: The usual approach is to use API hooking. Google can teach you that. Just hook all the important networking calls, like connects and listens, and refuse what you want.
A: For Windows 2000/XP there is an article with examples on CodeProject: Developing Firewalls for Windows 2000/XP. For Vista I think you will need to use the Windows Filtering Platform.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18391",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Most succinct way to determine if a variable equals a value from a 'list' of values If I have a variable in C# that needs to be checked to determine if it is equal to one of a set of variables, what is the best way to do this?
I'm not looking for a solution that stores the set in an array. I'm more curious to see if there is a solution that uses boolean logic in some way to get the answer.
I know I could do something like this:
int baseCase = 5;
bool testResult = baseCase == 3 || baseCase == 7 || baseCase == 12 || baseCase == 5;
I'm curious to see if I could do something more like this:
int baseCase = 5;
bool testResult = baseCase == (3 | 7 | 12 | 5);
Obviously the above won't work, but I'm interested in seeing if there is something more succinct than my first example, which has to repeat the same variable over and over again for each test value.
UPDATE:
I decided to accept CoreyN's answer as it seems like the most simple approach. It's practical, and still simple for a novice to understand, I think.
Unfortunately where I work our system uses the .NET 2.0 framework and there's no chance of upgrading any time soon. Are there any other solutions out there that don't rely on the .NET 3.5 framework, besides the most obvious one I can think of:
new List<int>(new int[] { 3, 6, 7, 1 }).Contains(5);
A: bool b = new int[] { 3,7,12,5 }.Contains(5);
A: You can do something similar with .NET 2.0, by taking advantage of the fact that an array of T implements IList<T>, and IList<T> has a Contains method. Therefore the following is equivalent to Corey's .NET 3.5 solution, though obviously less clear:
bool b = ((IList<int>)new int[] { 3, 7, 12, 5 }).Contains(5);
I often use IList<T> for array declarations, or at least for passing one-dimensional array arguments. It means you can use IList properties such as Count, and switch from an array to a list easily. E.g.
private readonly IList<int> someIntegers = new int[] { 1,2,3,4,5 };
A: I usually use CoreyN's solution for simple cases like that. Anything more complex, use a LINQ query.
A: Since you did not specify what type of data you have as input, I'm going to assume you can partition your input into powers of 2 -> 2, 4, 8, 16... This will allow you to use the bits to determine whether your test value is one of the bits in the input.
4 => 0000100
16 => 0010000
64 => 1000000
using some binary math, this becomes runnable C#:
int testList = 4 | 16 | 64;                    // 1010100
int testValue = 16;                            // 0010000
bool testResult = (testList & testValue) != 0; // true when testValue is in the set
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18407",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Get a number from a sql string range I have a column of data that contains a percentage range as a string that I'd like to convert to a number so I can do easy comparisons.
Possible values in the string:
'<5%'
'5-10%'
'10-15%'
...
'95-100%'
I'd like to convert this in my select where clause to just the first number, 5, 10, 15, etc. so that I can compare that value to a passed in "at least this" value.
I've tried a bunch of variations on substring, charindex, convert, and replace, but I still can't seem to get something that works in all combinations.
Any ideas?
A: Try this,
SELECT substring(replace(interest, '<', ''),
                 patindex('%[0-9]%', replace(interest, '<', '')),
                 patindex('%[^0-9]%', replace(interest, '<', '')) - 1)
FROM table1
Tested at my end and it works, it's only my first try so you might be able to optimise it.
A: @Martin: Your solution works.
Here is another I came up with based on inspiration from @mercutio
select cast(replace(replace(replace(interest,'<',''),'%',''),'-','.0') as numeric) test
from table1 where interest is not null
A: You can convert char data to other types of char (convert char(10) to varchar(10)), but you won't be able to convert character data to integer data from within SQL.
A: I don't know if this works in SQL Server, but within MySQL, you can use several tricks to convert character data into numbers. Examples from your sample data:
"<5%" => 0
"5-10%" => 5
"95-100%" => 95
now obviously this fails your first test, but some clever string replacements on the start of the string would be enough to get it working.
One example of converting character data into numbers:
SELECT "5-10%" + 0 AS foo ...
Might not work in SQL Server, but future searches may help the odd MySQL user :-D
A: You'd probably be much better off changing <5% and 5-10% to store 2 values in 2 fields. Instead of storing <5%, you would store 0 and 5, and instead of 5-10%, you'd end up with 5 and 10. You'd end up with 2 columns, one called lowerbound and one called upperbound, and then just check value >= lowerbound AND value < upperbound.
A: You can do this in SQL Server with a cursor. If you can create a CLR function to pull out number groupings, that will help. It's possible in T-SQL, it will just be ugly.
Create the cursor to loop over the list.
Find the first number. If there is only one number grouping in there, return it. Otherwise find the second grouping.
If only the first grouping is returned and it's the first item in the list, set it to the upper bound.
If only the first grouping is returned and it's the last item in the list, set it to the lower bound.
Otherwise set the first grouping to the lower bound and the second grouping to the upper bound.
Just set the resulting values back to a table.
A: The issue you are having is a symptom of not keeping the data atomic. In this case it looks purely unintentional (Legacy) but here is a link about it.
To design yourself out of this create a range_lookup table:
Create table rangeLookup(
rangeID int -- or rangeCD or not at all
,rangeLabel varchar(50)
,LowValue int--real or whatever
,HighValue int
)
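Populated and joined, the lookup turns the range comparison into plain arithmetic (the sample rows and the @atLeast parameter are assumptions for illustration):
INSERT INTO rangeLookup (rangeID, rangeLabel, LowValue, HighValue) VALUES (1, '<5%', 0, 5)
INSERT INTO rangeLookup (rangeID, rangeLabel, LowValue, HighValue) VALUES (2, '5-10%', 5, 10)

SELECT t.*
FROM table1 t
JOIN rangeLookup r ON r.rangeLabel = t.interest
WHERE r.LowValue >= @atLeast   -- the "at least this" comparison from the question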
To hack yourself out, here are some pseudo-steps; this will be a deeply nested mess.
Normalize your input by replacing all the noise characters:
replace(replace(rangeLabel, '%', ''), '<', '')
--This will entail many nested replace statements.
Add a CASE and CHARINDEX to look for a space; if there is none you have your number,
else use your substring to take everything before the first ' '.
-- these steps are wrapped around the previous step.
A: It's complicated, but for the test cases you provided, this works. Just replace @Test with the column you are looking in from your table.
DECLARE @TEST varchar(10)
set @Test = '<5%'
--set @Test = '5-10%'
--set @Test = '10-15%'
--set @Test = '95-100%'
Select CASE WHEN
Substring(@TEST,1,1) = '<'
THEN
0
ELSE
CONVERT(integer,SUBSTRING(@TEST,1,CHARINDEX('-',@TEST)-1))
END
AS LowerBound
,
CASE WHEN
Substring(@TEST,1,1) = '<'
THEN
CONVERT(integer,Substring(@TEST,2,CHARINDEX('%',@TEST)-2))
ELSE
CONVERT(integer,Substring(@TEST,CHARINDEX('-',@TEST)+1,CHARINDEX('%',@TEST)-CHARINDEX('-',@TEST)-1))
END
AS UpperBound
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18413",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Elegant way to remove items from sequence in Python? When I am writing code in Python, I often need to remove items from a list or other sequence type based on some criteria. I haven't found a solution that is elegant and efficient, as removing items from a list you are currently iterating through is bad. For example, you can't do this:
for name in names:
    if name[-5:] == 'Smith':
        names.remove(name)
I usually end up doing something like this:
toremove = []
for name in names:
    if name[-5:] == 'Smith':
        toremove.append(name)
for name in toremove:
    names.remove(name)
del toremove
This is inefficient, fairly ugly and possibly buggy (how does it handle multiple 'John Smith' entries?). Does anyone have a more elegant solution, or at least a more efficient one?
How about one that works with dictionaries?
A: Two easy ways to accomplish just the filtering are:
*
*Using filter:
names = filter(lambda name: name[-5:] != "Smith", names)
*Using list comprehensions:
names = [name for name in names if name[-5:] != "Smith"]
Note that both cases keep the values for which the predicate function evaluates to True, so you have to reverse the logic (i.e. you say "keep the people who do not have the last name Smith" instead of "remove the people who have the last name Smith").
Edit Funny... two people individually posted both of the answers I suggested as I was posting mine.
A: There are times when filtering (either using filter or a list comprehension) doesn't work. This happens when some other object is holding a reference to the list you're modifying and you need to modify the list in place.
for name in names[:]:
    if name[-5:] == 'Smith':
        names.remove(name)
The only difference from the original code is the use of names[:] instead of names in the for loop. That way the code iterates over a (shallow) copy of the list and the removals work as expected. Since the list copying is shallow, it's fairly quick.
A: You can also iterate backwards over the list:
for name in reversed(names):
    if name[-5:] == 'Smith':
        names.remove(name)
This has the advantage that it does not create a new list (like filter or a list comprehension) and uses an iterator instead of a list copy (like [:]).
Note that although removing elements while iterating backwards is safe, inserting them is somewhat trickier.
A: filter would be awesome for this. Simple example:
names = ['mike', 'dave', 'jim']
filter(lambda x: x != 'mike', names)
['dave', 'jim']
Edit: Corey's list comprehension is awesome too.
A: The obvious answer is the one that John and a couple other people gave, namely:
>>> names = [name for name in names if name[-5:] != "Smith"] # <-- slower
But that has the disadvantage that it creates a new list object, rather than reusing the original object. I did some profiling and experimentation, and the most efficient method I came up with is:
>>> names[:] = (name for name in names if name[-5:] != "Smith") # <-- faster
Assigning to "names[:]" basically means "replace the contents of the names list with the following value". It's different from just assigning to names, in that it doesn't create a new list object. The right hand side of the assignment is a generator expression (note the use of parentheses rather than square brackets). This will cause Python to iterate across the list.
Some quick profiling suggests that this is about 30% faster than the list comprehension approach, and about 40% faster than the filter approach.
Caveat: while this solution is faster than the obvious solution, it is more obscure, and relies on more advanced Python techniques. If you do use it, I recommend accompanying it with a comment. It's probably only worth using in cases where you really care about the performance of this particular operation (which is pretty fast no matter what). (In the case where I used this, I was doing A* beam search, and used this to remove search points from the search beam.)
A: names = filter(lambda x: x[-5:] != "Smith", names);
A: Both solutions, filter and comprehension, require building a new list. I don't know enough of the Python internals to be sure, but I think that a more traditional (but less elegant) approach could be more efficient:
names = ['Jones', 'Vai', 'Smith', 'Perez']
item = 0
while item != len(names):
    name = names[item]
    if name == 'Smith':
        del names[item]   # delete by position so duplicates are handled correctly
    else:
        item += 1
print names
Anyway, for short lists, I stick with either of the two solutions proposed earlier.
A: To answer your question about working with dictionaries, you should note that Python 3.0 will include dict comprehensions:
>>> {i : chr(65+i) for i in range(4)}
In the mean time, you can do a quasi-dict comprehension this way:
>>> dict([(i, chr(65+i)) for i in range(4)])
Or as a more direct answer:
dict([(key, name) for key, name in some_dictionary.iteritems() if name[-5:] != 'Smith'])
A: If the list should be filtered in-place and the list size is quite big, then the algorithms mentioned in the previous answers, which are based on list.remove(), may be unsuitable, because their computational complexity is O(n^2). In this case you can use the following not-so-pythonic function:
def filter_inplace(func, original_list):
    """ Filters the original_list in-place.
    Removes elements from the original_list for which func() returns False.
    Algorithm's computational complexity is O(N), where N is the size
    of the original_list.
    """
    # Compact the list in-place.
    new_list_size = 0
    for item in original_list:
        if func(item):
            original_list[new_list_size] = item
            new_list_size += 1
    # Remove trailing items from the list.
    tail_size = len(original_list) - new_list_size
    while tail_size:
        original_list.pop()
        tail_size -= 1

a = [1, 2, 3, 4, 5, 6, 7]
# Remove even numbers from a in-place.
filter_inplace(lambda x: x & 1, a)
# Prints [1, 3, 5, 7]
print a
Edit:
Actually, the solution at https://stackoverflow.com/a/4639748/274937 is superior to my solution. It is more pythonic and works faster. So, here is a new filter_inplace() implementation:
def filter_inplace(func, original_list):
    """ Filters the original_list in-place.
    Removes elements from the original_list for which func() returns False.
    Algorithm's computational complexity is O(N), where N is the size
    of the original_list.
    """
    original_list[:] = [item for item in original_list if func(item)]
A: Using a list comprehension (note: avoid calling the variable list, as that shadows the built-in):
names = [x for x in names if x[-5:] != "smith"]
A: The filter and list comprehensions are ok for your example, but they have a couple of problems:
*
*They make a copy of your list and return the new one, and that will be inefficient when the original list is really big
*They can be really cumbersome when the criteria to pick items (in your case, if name[-5:] == 'Smith') is more complicated, or has several conditions.
Your original solution is actually more efficient for very big lists, even if we can agree it's uglier. But if you worry that you can have multiple 'John Smith', it can be fixed by deleting based on position and not on value:
names = ['Jones', 'Vai', 'Smith', 'Perez', 'Smith']
toremove = []
for pos, name in enumerate(names):
    if name[-5:] == 'Smith':
        toremove.append(pos)
for pos in sorted(toremove, reverse=True):
    del(names[pos])
print names
We can't pick a solution without considering the size of the list, but for big lists I would prefer your 2-pass solution instead of the filter or list comprehensions.
A: In the case of a set.
toRemove = set()
for item in mySet:
    if is_unwelcome(item):   # some predicate deciding what to drop
        toRemove.add(item)
mySet = mySet - toRemove
A: Here is my filter_inplace implementation that can be used to filter items from a list in-place. I came up with this on my own before finding this page. It is the same algorithm as what PabloG posted, just made more generic so you can use it to filter lists in place; it is also able to remove from the list based on the conditionFunc if reversed is set to True, a sort of reversed filter if you will.
def filter_inplace(conditionFunc, list, reversed=False):
    index = 0
    while index < len(list):
        item = list[index]
        shouldRemove = not conditionFunc(item)
        if reversed: shouldRemove = not shouldRemove
        if shouldRemove:
            list.remove(item)
        else:
            index += 1
A: Well, this is clearly an issue with the data structure you are using. Use a hashtable, for example. Some implementations support multiple entries per key, so one can either pop the newest element off or remove all of them.
But the solution you're going to find is elegance through a different data structure, not a different algorithm. Maybe you can do better if the list is sorted, but iterating over it is your only method here.
edit: one does realize he asked for 'efficiency'... all these suggested methods just iterate over the list, which is the same as what he suggested.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18418",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "57"
} |
Q: How To Get Label Of Combobox to Fade In Flex I've got a combo-box that sits inside of a panel in Flex 3. Basically I want to fade the panel using a Fade effect in ActionScript. I can get the fade to work fine, however the label of the combo-box does not fade. I had this same issue with buttons and found that their fonts needed to be embedded. No problem. I embedded the font that I was using and the buttons' labels faded correctly. I've tried a similar approach to the combo-box, but it does not fade the selected item label.
Here is what I've done so far:
Embed code for the font at the top of my MXML in script:
[Embed("assets/trebuc.ttf", fontName="TrebuchetMS")]
public var trebuchetMSFont:Class;
In my init function
//register the font.
Font.registerFont(trebuchetMSFont);
The combobox's mxml:
<mx:ComboBox id="FilterFields" styleName="FilterDropdown"
left="10" right="10" top="10"
fontSize="14">
<mx:itemRenderer>
<mx:Component>
<mx:Label fontSize="10" />
</mx:Component>
</mx:itemRenderer>
</mx:ComboBox>
And a style that I wrote to get the fonts applied to the combo-box:
.FilterDropdown
{
embedFonts: true;
fontFamily: TrebuchetMS;
fontWeight: normal;
fontSize: 12;
}
The reason I had to write a style instead of placing it in the "FontFamily" attribute was that the style made all the text on the combo-box the correct font where the "FontFamily" attribute only made the items in the drop-down use the correct font.
A: You can often use <mx:Dissolve> instead of <mx:Fade>, it looks nearly identical and doesn't require embedded fonts.
A: Hmm, I am not sure why that isn't working for you. Here is an example of how I got it to work:
<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute" creationComplete="fx.play([panel])">
<mx:Style>
@font-face {
src: local("Arial");
fontFamily: ArialEm;
}
@font-face {
src: local("Arial");
fontFamily: ArialEm;
fontWeight: bold;
}
@font-face {
src: local("Arial");
fontFamily: ArialEm;
font-style: italic;
}
</mx:Style>
<mx:XML id="items" xmlns="">
<items>
<item label="Item 1" />
<item label="Item 2" />
<item label="Item 3" />
</items>
</mx:XML>
<mx:Panel id="panel" x="10" y="10" width="250" height="200" layout="absolute">
<mx:ComboBox fontFamily="ArialEm" x="35" y="10" dataProvider="{items.item}" labelField="@label"></mx:ComboBox>
</mx:Panel>
<mx:Fade id="fx" alphaFrom="0" alphaTo="1" duration="5000" />
</mx:Application>
Hope this helps you out.
A: Dissolve works by fading a solid color rectangle in and out instead of fading the actual component. This works fine, especially when you wish to control the color to which the component should fade. However, sometimes you need transparency and thus must use Fade. There is a little trick to get Fade to work neatly with both device fonts and embedded fonts: use a blur filter with no blur.
Basically, when you set a bitmap filter the player internally creates a bitmap copy of your object to which it then applies the filter. If the blur is set to not blur, so to speak, it will still look good and be able to fade perfectly fine. This breaks the zoom feature of the player though since the text is now rasterized.
<mx:Label id="percentage" text="{progress} %" truncateToFit="false">
<mx:filters>
<mx:BlurFilter blurX="0" blurY="0" />
</mx:filters>
</mx:Label>
A: Thanks for your help.
Had exactly the same problem.
The trick is embedding the "bold" version of the font you are using, even though the font in your ComboBox isn't set to bold...
A: var htm = $('#comboboxId').find('option:selected').html();
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18419",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Best way to bind Windows Forms properties to ApplicationSettings in C#? In a desktop application needing some serious re-factoring, I have several chunks of code that look like this:
private void LoadSettings()
{
WindowState = Properties.Settings.Default.WindowState;
Location = Properties.Settings.Default.WindowLocation;
...
}
private void SaveSettings()
{
Properties.Settings.Default.WindowState = WindowState;
Properties.Settings.Default.WindowLocation = Location;
...
}
What's the best way to replace this? Project-imposed constraints:
*
*Visual Studio 2005
*C# / .NET 2.0
*Windows Forms
Update
For posterity, I've also found two useful tutorials: "Windows Forms User Settings in C#" and "Exploring Secrets of Persistent Application Settings".
I've asked a follow-up question about using this technique to bind a form's Size here. I separated them out to help people who search for similar issues.
A: If you open your windows form in the designer, look in the properties box. The first item should be "(ApplicationSetting)". Under that is "(PropertyBinding)". That's where you'll find the option to do exactly what you want.
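If you prefer to wire it up in code rather than through the designer, the same binding looks roughly like this; the setting names match the question, everything else is a sketch rather than the one true way:
public Form1()
{
    InitializeComponent();
    // Bind the form's Location to the WindowLocation user setting.
    DataBindings.Add(new Binding("Location", Properties.Settings.Default,
        "WindowLocation", true, DataSourceUpdateMode.OnPropertyChanged));
}

protected override void OnFormClosing(FormClosingEventArgs e)
{
    base.OnFormClosing(e);
    Properties.Settings.Default.Save();   // persist the bound values
}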
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18421",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: I can share a SQL Server Reporting Services Data SOURCE... what about a Data SET? I am developing a Reporting Services solution for a DoD website. Frequently I'll have a report and want to have the Service as a parameter (in addition to other mundane but repetitive parameters like Fiscal Year, Data Effective Date, etc). Basically everything I've seen of SSRS 2005 says it can't be done... but I personally refuse to believe that MS would be so stupid/naive/short-sighted as to leave something like sharing datasets out of reporting entirely.
Is there a clunky (or not so clunky way) to share datasets and still keep the reporting server happy? Will SSRS2008 do this?
EDIT:
I guess I worded that unclearly. I have a stack of reports. Since I'm in a DoD environment, one common parameter for these reports is Service (Army, Navy, etc., for non-US readers). Since "business rules" prevent me from using stored procedures, is there a way I can make one dataset and link to it from the various reports? Will Reporting Services 2008 support something like this? I'm getting sick of re-typing the same query in a bunch of reports.
A: I am not clear on whether you need to share a dataset because you have some SQL results that you need to use twice and don't want to re-compute, or whether you want to do something regarding parameters. So with this "I didn't really understand the question" preface...
*
*You cannot share a dataset. Meaning, you can't, let's say, have a dataset returning table A, and in dataset B try to join with A.
*If this is really what you want to do, you could use temporary tables to store A and then in dataset B use the temporary table. There are best practices around that, but since I am not sure this is what you need, I won't spend time talking about that right now.
A: If you cannot use Stored Procedures, I hope you can use a view.
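For example (all object names here are invented), a view holding the shared Service lookup that every report's parameter dataset can then select from:
CREATE VIEW dbo.vServiceList AS
SELECT ServiceCode, ServiceName   -- Army, Navy, Air Force, ...
FROM dbo.Service
GO

-- each report's parameter dataset then becomes a one-liner
SELECT ServiceCode, ServiceName FROM dbo.vServiceList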
Else you could leave SQL Server and use CSV sheets as data storage.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18432",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Encryption in C# Web-Services I'm looking for a simple way to encrypt my soap communication in my C# Web-Service.
I was looking into WSE 3.0 but it seems Microsoft dropped support for it, and therefore it's not straightforward to use.
It seems WCF could've been an option but I prefer not to upgrade from .NET 2.0 .
Any simple, straightforward encryption method?
A: I think this can help; last year we used this to compress the web services and it performed very well. I believe it could be enhanced with encryption classes:
Creating Custom SOAP Extensions - Compression Extension
A: Anything you do to provide "encryption" that isn't using SSL/TLS is likely to be vulnerable. Now you have to ask yourself, is it worth burning dev hours you could be spending on features on a rubber-chicken security measure? Maybe it is.
.NET APIs like DPAPI and the Win32 crypt32 API make it easy to encrypt blobs of data with static keys. But how will your clients receive the keys? Any installed SOAP client will have to either have the key burned into its configuration, or receive it over the insecure Internet.
This is the problem SSL/TLS solves for you; the dance you do with TLS certificates is what solves the problem of communicating public keys over untrusted channels.
A:
"Perhaps I'm being naive, but would forcing the communication to be via https be acceptable? I develop web services that run on 2.0 and have had success with just getting IIS to enforce https on the virtual directory."
"That would be the simplest way to go probably, but unfortunately I don't have control over the IIS configuration, and can't guarantee that it can run https."
In that case, perhaps the best bet is to either case-by-case encrypt portions of the SOAP messages (after all, you may not need the entire message to be encrypted - just certain sensitive fields?), or you could opt to use an HttpModule to intercept all the messages and operate on the contents. In either case you're probably going to have to provide custom proxies.
A: Perhaps I'm being naive, but would forcing the communication to be via https be acceptable?
I develop web services that run on 2.0 and have had success with just getting IIS to enforce https on the virtual directory.
Alternatively, or in addition, you can check the HttpRequest.IsSecureConnection property.
A: We actually use WSE 3.0 in our web services, which were originally developed pre-WCF. For security, we use a SAML token based system built on the Cryptography classes in System.Security.
It works very well. However, this method is by no means "simple".
A: You can encrypt parameters in C# using the System.Security.Cryptography namespace. Encrypting and decrypting your parameters is harder to set up, but much more secure.
How To: Encrypt and Decrypt Data Using a Symmetric (Rijndael) Key (C#/VB.NET)
I'm using this approach for an OTP (one time password) web service, and it works fine for me.
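A compressed sketch of that symmetric approach (key and IV management, the genuinely hard part, is glossed over here, and the method name is mine):
using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;

static byte[] EncryptParameter(string plainText, byte[] key, byte[] iv)
{
    using (RijndaelManaged alg = new RijndaelManaged())
    using (ICryptoTransform encryptor = alg.CreateEncryptor(key, iv))
    using (MemoryStream ms = new MemoryStream())
    {
        using (CryptoStream cs = new CryptoStream(ms, encryptor, CryptoStreamMode.Write))
        {
            byte[] data = Encoding.UTF8.GetBytes(plainText);
            cs.Write(data, 0, data.Length);
        }   // disposing the CryptoStream flushes the final padded block
        return ms.ToArray();
    }
}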
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18448",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: Dealing with PHP server and MySQL server in different time zones For those of us who use standard shared hosting packages, such as GoDaddy or Network Solutions, how do you handle datetime conversions when your hosting server (PHP) and MySQL server are in different time zones?
Also, does anybody have some best practice advice for determining what time zone a visitor to your site is in and manipulating a datetime variable appropriately?
A: RE the answer from Željko Živković, timezone descriptors like 'Europe/London' only work if the mySQL admin has added the timezone tables to the system, and keeps them updated.
Otherwise you are limited to numeric offsets like '-4:00'. Fortunately the php date('P') format provides it (as of 5.1.3)
So in say an app config file you might have
define('TZ', 'US/Pacific');
....
if (defined('TZ') && function_exists('date_default_timezone_set')) {
date_default_timezone_set(TZ);
$mdb2->exec("SET SESSION time_zone = " . $mdb2->quote(date('P')));
}
This means PHP and mySQL will agree on what timezone offset to use.
Always use TIMESTAMP for storing time values. The column is actually stored as UNIX_TIME (epoch) but implicitly converted from current time_zone offset when written, and back when read.
If you want to display times for users in other time zones, then instead of a global define(), set their given timezone in the above. TIMESTAMP values will be automatically converted by mySQL by the time your app sees the result set (which sometimes can be a problem, if you need to actually know the original timezone of the event too then it needs to be in another column)
and as far as, "why not just store all times as int's", that does lose you the ability to compare and validate dates, and means you always have to convert to date representation at the app level (and is hard on the eyes when you are looking at the data directly - quick, what happened at 1254369600?)
A: As of PHP 5.1.0 you can use date_default_timezone_set() function to set the default timezone used by all date/time functions in a script.
For MySql (quoted from MySQL Server Time Zone Support page)
Before MySQL 4.1.3, the server operates only in the system time zone set at startup. Beginning with MySQL 4.1.3, the server maintains several time zone settings, some of which can be modified at runtime.
Of interest to you is per-connection setting of the time zones, which you would use at the beginning of your scripts
SET time_zone = 'Europe/London';
As for detecting the client timezone setting, you could use a bit of JavaScript to get and save that information to a cookie, and use it on subsequent page reads, to calculate the proper timezone.
//Returns the offset (time difference) between Greenwich Mean Time (GMT)
//and local time of Date object, in minutes.
var offset = new Date().getTimezoneOffset();
document.cookie = 'timezoneOffset=' + escape(offset);
Or you could offer users the chioce to set their time zones themselves.
A: Store everything as UTC. You can do conversions at the client level, or on the server side using client settings.
php - date
mysql - utc-timestamp
A: I save all my dates as a bigint due to having had issues with the dateTime type before. I save the result of PHP's time() function into it; now they all count as being in the same timezone :)
A: In PHP you can set the timezone through the date.timezone directive in php.ini, or at runtime:
ini_set("date.timezone", "America/Los_Angeles");
or in a particular page you can do:
date_default_timezone_set("America/Los_Angeles");
In MySQL you can do:
SET GLOBAL time_zone = 'America/Los_Angeles';
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18449",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Is Mono ready for prime time? Has anyone used Mono, the open source .NET implementation on a large or medium sized project? I'm wondering if it's ready for real world, production environments. Is it stable, fast, compatible, ... enough to use? Does it take a lot of effort to port projects to the Mono runtime, or is it really, really compatible enough to just take of and run already written code for Microsoft's runtime?
A: Well, mono is great, but as far as I can see, it is unstable. It works, but faults when you give a mono process serious work to do.
TL;DR - Do not use mono if you:
*
*use AppDomains (Assembly Load\Unload) in multithreaded environments
*Can't sustain 'let-it-fail' model
*Experience occasional heavy-load events during process run
So, the facts.
We use mono-2.6.7 (.net v 3.5) on RHEL5 and Ubuntu, and, to my point of view, it is the most stable version built by Novell. It has an issue with unloading AppDomains (segfaults), however, it fails very rarely and that, so far, is acceptable to us.
Okay. But if you want to use features of .net 4.0, you have to switch to versions 2.10.x, or 3.x, and that's where problems begin.
Compared to 2.6.7, the new versions are just unacceptable. I wrote a simple stress test application to test mono installations.
It is here, with instructions to use : https://github.com/head-thrash/stress_test_mono
It uses Thread Pool Worker Threads. Worker loads dll to AppDomain and tries to do some math-work. Some of work is many-threaded, some is single. Almost all work is CPU-bound, although there are some reads of files from disk.
Results are not very good. In fact, for version 3.0.12:
*
*sgen GC segfaults the process almost immediately
*mono with boehm lives longer (from 2 to 5 hours), but segfaults eventually
As mentioned above, sgen gc just does not work (mono built from source):
* Assertion: should not be reached at sgen-scan-object.h:111
Stacktrace:
Native stacktrace:
mono() [0x4ab0ad]
/lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0) [0x2b61ea830cb0]
/lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35) [0x2b61eaa74425]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x17b) [0x2b61eaa77b8b]
mono() [0x62b49d]
mono() [0x62b5d6]
mono() [0x5d4f84]
mono() [0x5cb0af]
mono() [0x5cb2cc]
mono() [0x5cccfd]
mono() [0x5cd944]
mono() [0x5d12b6]
mono(mono_gc_collect+0x28) [0x5d16f8]
mono(mono_domain_finalize+0x7c) [0x59fb1c]
mono() [0x596ef0]
mono() [0x616f13]
mono() [0x626ee0]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a) [0x2b61ea828e9a]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x2b61eab31ccd]
As for boehm segfaults - for example (Ubuntu 13.04, mono built from source):
mono: mini-amd64.c:492: amd64_patch: Assertion `0' failed.
Stacktrace:
at <unknown> <0xffffffff>
at System.Collections.Generic.Dictionary`2.Init (int,System.Collections.Generic.IEqualityComparer`1<TKey>) [0x00012] in /home/bkmz/my/mono/mcs/class/corlib/System.Collections.Generic/Dictionary.cs:264
at System.Collections.Generic.Dictionary`2..ctor () [0x00006] in /home/bkmz/my/mono/mcs/class/corlib/System.Collections.Generic/Dictionary.cs:222
at System.Security.Cryptography.CryptoConfig/CryptoHandler..ctor (System.Collections.Generic.IDictionary`2<string, System.Type>,System.Collections.Generic.IDictionary`2<string, string>) [0x00014] in /home/bkmz/my/mono/mcs/class/corlib/System.Security.Cryptography/Crypto
Config.cs:582
at System.Security.Cryptography.CryptoConfig.LoadConfig (string,System.Collections.Generic.IDictionary`2<string, System.Type>,System.Collections.Generic.IDictionary`2<string, string>) [0x00013] in /home/bkmz/my/mono/mcs/class/corlib/System.Security.Cryptography/CryptoCo
nfig.cs:473
at System.Security.Cryptography.CryptoConfig.Initialize () [0x00697] in /home/bkmz/my/mono/mcs/class/corlib/System.Security.Cryptography/CryptoConfig.cs:457
at System.Security.Cryptography.CryptoConfig.CreateFromName (string,object[]) [0x00027] in /home/bkmz/my/mono/mcs/class/corlib/System.Security.Cryptography/CryptoConfig.cs:495
at System.Security.Cryptography.CryptoConfig.CreateFromName (string) [0x00000] in /home/bkmz/my/mono/mcs/class/corlib/System.Security.Cryptography/CryptoConfig.cs:484
at System.Security.Cryptography.RandomNumberGenerator.Create (string) [0x00000] in /home/bkmz/my/mono/mcs/class/corlib/System.Security.Cryptography/RandomNumberGenerator.cs:59
at System.Security.Cryptography.RandomNumberGenerator.Create () [0x00000] in /home/bkmz/my/mono/mcs/class/corlib/System.Security.Cryptography/RandomNumberGenerator.cs:53
at System.Guid.NewGuid () [0x0001e] in /home/bkmz/my/mono/mcs/class/corlib/System/Guid.cs:492
Or (RHEL5, mono is taken from rpm here ftp://ftp.pbone.net/mirror/ftp5.gwdg.de/pub/opensuse/repositories/home%3A/vmas%3A/mono-centos5)
Assertion at mini.c:3783, condition `code' not met
Stacktrace:
at <unknown> <0xffffffff>
at System.IO.StreamReader.ReadBuffer () [0x00012] in /usr/src/redhat/BUILD/mono-3.0.3/mcs/class/corlib/System.IO/StreamReader.cs:394
at System.IO.StreamReader.Peek () [0x00006] in /usr/src/redhat/BUILD/mono-3.0.3/mcs/class/corlib/System.IO/StreamReader.cs:429
at Mono.Xml.SmallXmlParser.Peek () [0x00000] in /usr/src/redhat/BUILD/mono-3.0.3/mcs/class/corlib/Mono.Xml/SmallXmlParser.cs:271
at Mono.Xml.SmallXmlParser.Parse (System.IO.TextReader,Mono.Xml.SmallXmlParser/IContentHandler) [0x00020] in /usr/src/redhat/BUILD/mono-3.0.3/mcs/class/corlib/Mono.Xml/SmallXmlParser.cs:346
at System.Security.Cryptography.CryptoConfig.LoadConfig (string,System.Collections.Generic.IDictionary`2<string, System.Type>,System.Collections.Generic.IDictionary`2<string, string>) [0x00021] in /usr/src/redhat/BUILD/mono-3.0.3/mcs/class/corlib/System.Security.Cryptog
raphy/CryptoConfig.cs:475
at System.Security.Cryptography.CryptoConfig.Initialize () [0x00697] in /usr/src/redhat/BUILD/mono-3.0.3/mcs/class/corlib/System.Security.Cryptography/CryptoConfig.cs:457
at System.Security.Cryptography.CryptoConfig.CreateFromName (string,object[]) [0x00027] in /usr/src/redhat/BUILD/mono-3.0.3/mcs/class/corlib/System.Security.Cryptography/CryptoConfig.cs:495
at System.Security.Cryptography.CryptoConfig.CreateFromName (string) [0x00000] in /usr/src/redhat/BUILD/mono-3.0.3/mcs/class/corlib/System.Security.Cryptography/CryptoConfig.cs:484
at System.Security.Cryptography.RandomNumberGenerator.Create (string) [0x00000] in /usr/src/redhat/BUILD/mono-3.0.3/mcs/class/corlib/System.Security.Cryptography/RandomNumberGenerator.cs:59
at System.Security.Cryptography.RandomNumberGenerator.Create () [0x00000] in /usr/src/redhat/BUILD/mono-3.0.3/mcs/class/corlib/System.Security.Cryptography/RandomNumberGenerator.cs:53
at System.Guid.NewGuid () [0x0001e] in /usr/src/redhat/BUILD/mono-3.0.3/mcs/class/corlib/System/Guid.cs:483
at System.Runtime.Remoting.RemotingServices.NewUri () [0x00020] in /usr/src/redhat/BUILD/mono-3.0.3/mcs/class/corlib/System.Runtime.Remoting/RemotingServices.cs:356
at System.Runtime.Remoting.RemotingServices.Marshal (System.MarshalByRefObject,string,System.Type) [0x000ba] in /usr/src/redhat/BUILD/mono-3.0.3/mcs/class/corlib/System.Runtime.Remoting/RemotingServices.cs:329
at System.AppDomain.GetMarshalledDomainObjRef () [0x00000] in /usr/src/redhat/BUILD/mono-3.0.3/mcs/class/corlib/System/AppDomain.cs:1363
Both failures are somehow connected to AppDomain logic, so you should stay away from AppDomains in mono.
BTW, the tested program ran for 24 hours on a Windows machine in an MS .NET 4.5 environment without any failures.
So, in conclusion, I would like to say - use mono with caution. It works at first glance, but can easily fail at any point. You'd be left with a bunch of core dumps and a major loss of faith in open-source projects.
A: It has pretty extensive coverage up to .NET 4.0 and even include some features from .NET 4.5 APIs, but there are a few areas that we have chosen not to implement due to the APIs being deprecated, new alternatives being created or the scope being too large. The following APIs are not available in Mono:
*
*Windows Presentation Foundation
*Windows Workflow Foundation (neither of the two versions)
*Entity Framework
*The WSE1/WSE2 "add-ons" to the standard Web Services stack
Additionally, our WCF implementation is limited to what Silverlight supported.
The easiest way to check for your specific project is to run the Mono Migration Analyzer (MoMA). The benefit is that it will notify the Mono team of issues which will prevent you from using Mono (if any), which lets them prioritize their work.
I recently ran MoMA on SubSonic and found only one issue - a weird use of Nullable types. That's a big codebase, so the coverage there was pretty impressive.
Mono is in active use in several commercial as well as open source products. It's in use in some large applications, such as Wikipedia and the Mozilla Developer Center, and has been used in embedded applications such as the Sansa MP3 players and powers thousands of published games.
At the language level, the Mono compiler is fully compliant with the C# 5.0 language specification.
A: MoMA is a great tool for this, as someone else suggested. The biggest sources of incompatibility these days are applications which DllImport (or P/Invoke) into Win32 libraries. Some assemblies aren't implemented, but most of them are Windows-only and really wouldn't make sense on Linux. I think it's fairly safe to say that most ASP.NET applications can run on Mono with limited modifications.
(Disclosure: I've contributed to Mono itself, as well as written apps that run on top of it.)
A: There are a couple of scenarios to consider: (a) if you are porting an existing application and wondering if Mono is good enough for this task; (b) you are starting to write some new code, and you want to know if Mono is mature enough.
For the first case, you can use the Mono Migration Analyzer tool (Moma) to evaluate how far your application is from running on Mono. If the evaluation comes back with flying colors, you should start on your testing and QA and get ready to ship.
If your evaluation comes back with a report highlighting features that are missing or differ significantly in their semantics in Mono you will have to evaluate whether the code can be adapted, rewritten or in the worst case whether your application can work with reduced functionality.
According to our Moma statistics based on user submissions (this is from memory) about 50% of the applications work out of the box, about 25% require about a week worth of work (refactoring, adapting) another 15% require a serious commitment to redo chunks of your code, and the rest is just not worth bothering porting since they are so incredibly tied to Win32. At that point, either you start from zero, or a business decision will drive the effort to make your code portable, but we are talking months worth of work (at least from the reports we have).
If you are starting from scratch, the situation is a lot simpler, because you will only be using the APIs that are present in Mono. As long as you stay with the supported stack (which is pretty much .NET 2.0, plus all the core upgrades in 3.5 including LINQ and System.Core, plus any of the Mono cross-platform APIs) you will be fine.
Every once in a while you might run into bugs in Mono or limitations, and you might have to work around them, but that is not different than any other system.
As for portability: ASP.NET applications are the easier ones to port, as those have little to no dependencies on Win32 and you can even use SQL server or other popular databases (there are plenty of bundled database providers with Mono).
Windows.Forms porting is sometimes trickier because developers like to escape the .NET sandbox and P/Invoke their brains out to configure things as useful as changing the cursor blinking rate expressed as two bezier points encoded in BCD form in a wParam. Or some junk like that.
In some cases, you may require whole new sections of code to make it work. If you use System.Windows.Forms, for example, the application won't work unmodified. Likewise if you use any Windows-specific code (registry access code, for example). But I think the worst offender is UI code. That's particularly bad on Macintosh systems.
A: We've been using it for a project here at work that needed to run on Linux but reuse some .NET libraries that we built in Managed C++. I've been very surprised at how well it has worked out. Our main executable is being written in C# and we can just reference our Managed C++ binaries with no issue. The only difference in the C# code between Windows and Linux is RS232 serial port code.
The only big issue I can think of happened about a month ago. The Linux build had a memory leak that wasn't seen on the Windows build. After doing some manual debugging (the basic profilers for Mono on Linux didn't help much), we were able to narrow the issue down to a specific chunk of code. We ended up patching a workaround, but I still need to find some time to go back and figure out what the root cause of the leak was.
A: On the desktop side, Mono works great if you commit to using GTK#. The Windows.Forms implementation is still a little buggy (for example, TrayIcon's don't work) but it has come a long way. Besides, GTK# is a better toolkit than Windows Forms as it is.
On the web side, Mono has implemented enough of ASP.NET to run most sites perfectly. The difficulty here is finding a host that has mod_mono installed on apache, or doing it yourself if you have shell access to your host.
Either way, Mono is great, and stable.
Key things to remember when creating a cross platform program:
*
*Use GTK# instead of Windows.Forms
*Ensure to properly case your filenames
*Use Path.Combine (or Path.DirectorySeparatorChar) instead of hardcoding "\"; also use Environment.NewLine instead of "\n" (see the sketch after this list).
*Do not use any P/Invoked calls to Win32 API.
*Do not use the Windows Registry.
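A small illustration of the path and newline points above (nothing here is Mono-specific, which is exactly the point):
using System;
using System.IO;

class PortablePaths
{
    static void Main()
    {
        // Path.Combine inserts the platform's own directory separator.
        string config = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "app.config");
        // Environment.NewLine is "\r\n" on Windows and "\n" on Linux.
        Console.Write("config lives at: " + config + Environment.NewLine);
    }
}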
A: I personally use Mono in a prime-time env.
I run mono servers dealing with gigabytes of UDP/TCP data-processing tasks and couldn't be happier.
There are peculiarities, and one of the most annoying things is that you can't just "build" your msbuild files due to Mono's current state:
*
*MonoDevelop (the IDE) has some partial msbuild support, but will basically bork on any "REAL" build conf beyond a simple hello-world (custom build tasks, dynamic "properties" like $(SolutionDir), real configuration to name a few dead-ends)
*xbuild which SHOULD have been the mono-supplied-msbuild-fully-compatible-build-system is even more horrible, so building from the command line is actually a worse experience than using the GUI, which is a very "unorthodox" state of the union for Linux environments...
Once you get (or while getting) your stuff actually BUILT, you might see some weirdness even for code that SHOULD be supported, like:
*
*the compiler getting borked on certain constructs
*and certain more advanced/new .NET classes throwing unexpected crap at you (XLinq anyone?)
*some immature runtime "features" (3GB heap limit ON x64... WTF!)
but having said that, generally speaking things start working very quickly, and solutions/workarounds are abundant.
Once you've gone over those initial hurdles, my experience is that mono ROCKS, and keeps getting better with every iteration.
I've had servers running with mono, processing 300GB of data per day, with tons of p/invokes and generally speaking doing LOTS of work and staying UP for 5-6 months, even with the "bleeding edge" mono.
Hope this helps.
A: The recommendations in the accepted answer are a little out of date now.
*
*The windows forms implementation is pretty good now. (See Paint-Mono for a port of Paint.net which is a pretty involved Windows forms application. All that was required was an emulation layer for some of the P-Invoke and unsupported system calls).
*Use Path.Combine as well as Path.DirectorySeparatorChar to join paths and filenames.
*The windows Registry is OK, as long as you are only using it for storing and retrieving data from your applications (i.e. you can't get any information about Windows from it, since it is basically a registry for Mono applications).
A:
Do you know how good Mono 2.0 preview's support is for Windows Forms 2.0?
From the little bit that I've played with it, it seemed relatively complete and almost usable. It just didn't quite look right in some places and is still a little hit or miss overall. It amazed me that it worked as well as it did with some of our forms, though honestly.
A: Yes it definitely is (if you're careful though)
We support Mono in Ra-Ajax (an Ajax library found at http://ra-ajax.org) and we're mostly not having problems at all. You need to be careful with some of the "most insane things" from .Net, like WSE etc, and quite a few of your existing projects will probably not be 100% Mono compatible, but new projects, if you test them during development, will mostly be compatible without problems with Mono. And the gain from supporting Linux etc through using Mono is really cool ;)
A large portion of the secret of supporting Mono I think is to use the right tools from the beginning, e.g. ActiveRecord, log4net, ra-ajax etc...
A: For the type of application we're building, Mono unfortunately doesn't seem ready for production. We were impressed with it overall, and impressed with its performance both on Windows and on EC2 machines. However, our program crashed consistently with garbage collection errors on both Windows and Linux.
The error message is: "fatal errors in GC: too many heap sections". Here is a link to someone else experiencing the problem in a slightly different way:
http://bugzilla.novell.com/show_bug.cgi?id=435906
The first piece of code we ran in Mono was a simple programming challenge we'd developed... The code loads about 10MB of data into some data structures (e.g. HashSets), then runs 10 queries against the data. We ran the queries 100 times in order to time them and get an average.
The code crashed around the 55th query on Windows. On Linux it worked, but as soon as we moved to a bigger data set, it would crash too.
This code is very simple, e.g. put some data into HashSets and then query those HashSets etc, all native C#, nothing unsafe, no API calls. On the Microsoft CLR it never crashes, and runs on huge data sets 1000s of times just fine.
One of our guys emailed Miguel and included the code that caused the problem, no response yet. :(
It also seems like many other people have encountered this problem without a solution - one suggestion has been to recompile Mono with different GC settings, but that just appears to raise the threshold at which it crashes.
A: Just check www.plasticscm.com. Everything (client, server, GUI, merge tools) is written on mono.
A: If you want to use WPF you're out of luck: Mono currently has no plans to implement it.
http://www.mono-project.com/WPF
A: It really depends on the namespaces and classes that you are using from the .NET framework. I was interested in converting one of my Windows services to run on my email server, which is Suse, but we ran into several hard roadblocks with APIs that had not been completely implemented. There is a chart somewhere on the Mono website that lists all of the classes and their level of completion. If your application is covered, then go for it.
Like any other application, do prototyping and testing before you make a full commitment, of course.
Another problem we ran into is licensed software: if you are referencing someone else's DLL, you can't code your way around incompatibilities that are buried in that assembly.
A: I would imagine that if you have an application with some 3rd party components, you may be stuffed. I doubt many vendors will develop with Mono in mind.
Example: http://community.devexpress.com/forums/p/55085/185853.aspx
A: No, Mono is not ready for serious work. I wrote a few programs on Windows using F# and ran them on Mono. Those programs used disk, memory and CPU quite intensively. I saw crashes in Mono libraries (managed code), crashes in native code and crashes in the virtual machine. When Mono worked, the programs were at least two times slower than on .NET on Windows and used much more memory. Stay away from Mono for serious work.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18450",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "314"
} |
Q: Best practice to authorize all users for just one page What is the best way to authorize all users for one single page in an ASP.NET website?
Except for the login page and one other page, I deny all users from viewing pages in the website.
How do you make this page accessible to all users?
A: I've been using forms authentication and creating the necessary GenericIdentity and CustomPrincipal objects that allows me to leverage the User.IsInRole type functions you typically only get with Windows authentication.
That way in my web.config file, I can do stuff like...
<location path="Login.aspx">
<system.web>
<authorization>
<allow users ="*" />
</authorization>
</system.web>
</location>
<location path="ManagementFolder">
<system.web>
<authorization>
<allow roles ="Administrator, Manager" />
</authorization>
</system.web>
</location>
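For completeness, the site-wide rule that those <location> overrides punch holes through might look something like this in the root web.config (a sketch; Login.aspx is assumed to be the forms-auth login page):
<system.web>
  <authentication mode="Forms">
    <forms loginUrl="Login.aspx" />
  </authentication>
  <authorization>
    <!-- deny anonymous users everywhere not opened up by a location rule -->
    <deny users="?" />
  </authorization>
</system.web>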
A: I created a base "page" class that handles that sort of thing. All my pages can then be decorated with the RequiresLogin attribute if a login is required to view them. If the attribute is not present, the page is accessible to all.
Example:
<RequiresLogin()> _
<RequiresPermission("process")> _
Partial Class DesignReviewEditProgressPage
Inherits MyPage 'which inherits System.Web.UI.Page and deal with logins itself
...
End Class
The MyPage class checks what attributes are being tagged to itself and if RequiresLogin is present, it forwards you to a login page.
I believe this could be adapted to fit your own problem.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18460",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: .Net Parse versus Convert In .Net you can read a string value into another data type using either <datatype>.parse or Convert.To<DataType>.
I'm not familiar with the fundamentals of parse versus convert so I am always at a loss when asked which one is better/faster/more appropriate.
So - which way is best in what type of circumstances?
A: Here's an answer for you:
http://www.dotnetspider.com/forum/ViewForum.aspx?ForumId=77428
Though I think in modern versions of .NET, the best thing to do is use TryParse in any case, if there's any doubt that the conversion will work.
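A minimal sketch of that pattern (the literal string here is just a stand-in for untrusted input):
string input = "12345";
int value;
if (int.TryParse(input, out value))
{
    // parsed successfully; use value
}
else
{
    // bad input, handled without an exception being thrown
}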
A: I'm a big fan of TryParse, since it saves you a lot of headache of error catching when there's a chance the value you're going to parse is not of the appropriate type.
My order is usually:
*
*Parse (if I can be sure the value will be the right type, and I do try to ensure this)
*TryParse (if I can't be sure, which happens whenever user input is involved, or input from a system you cannot control)
*Convert (which I think I have not used since I started using Parse and TryParse, but I could be wrong)
A: The Convert.ToXXX() methods are for objects that might be of the correct or similar type, while .Parse() and .TryParse() are specifically for strings:
//o is actually a boxed int
object o = 12345;
//unboxes it
int castVal = (int) o;
//o2 is a boxed enum
object o2 = MyEnum.ValueA;
//this will get the underlying int of ValueA
int convVal = Convert.ToInt32( o2 );
//now we have a string
string s = "12345";
//this will throw an exception if s can't be parsed
int parseVal = int.Parse( s );
//alternatively:
int tryVal;
if( int.TryParse( s, out tryVal ) ) {
//do something with tryVal
}
If you compile with optimisation flags TryParse is very quick - it's the best way to get a number from a string. However if you have an object that might be an int or might be a string Convert.ToInt32 is quicker.
A: There is also the DirectCast method which you should use only if you are sure what the type of the object is. It is faster, but doesn't do any proper checks. I use DirectCast when I'm extracting values from a loosely typed DataTable when I know the type for each column.
A: If you need speed, I'm pretty sure a direct cast is the fastest way. That being said, I normally use .Parse or .TryParse because it seems to make things easier to read, and behave in a more predictable manner.
Convert actually calls Parse under the hood, I believe. So there is little difference there, and it really just seems to be a matter of personal taste.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18465",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: Sending a mouse click to a button in the taskbar using C# In an application that I am currently working on, a requirement is to bring a window of an external application to the foreground. Making Win32 API calls such as BringWindowToTop and SetForegroundWindow does not work all the time. This is due to some restrictions within Windows XP. What I would like to do instead is simulate a mouse click on the window's button on the taskbar, which I am hoping will bring the window to the front. Does anyone know how this is possible?
A: Check out the section "How to steal focus on 2K/XP" at http://www.codeproject.com/KB/dialog/dlgboxtricks.aspx, as this is exactly what you need. I wouldn't go the taskbar route as the taskbar could be hidden or simply not there.
A: It's possible. But it's extremely sketchy. Your application may also break with the next version of Windows, since it's undocumented. What you need to do is find the window handle of the taskbar, then find the window handle of the child window representing the button, then send it a WM_MOUSEDOWN (I think) message.
Here's a bit on finding the window handle of the taskbar:
http://www.codeproject.com/
FWIW, the restrictions on BringWindowToTop/SetForeground are there because it's irritating when a window steals focus. That may not matter if you're working on a corporate environment. Just keep it in mind. :)
A: I used this in a program where I needed to simulate clicks and mouse movements;
Global Mouse and Keyboard Library
A: To be honest I've never had an issue bringing a window to the foreground on XP/Vista/2003/2000.
You need to make sure you do the following:
*
*Check if IsIconic (minimized)
*If #1 results in true then call
ShowWindow passing SW_RESTORE
*Then call SetForegroundWindow
I've never had problems that I can think of doing it with those steps.
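For what it's worth, a rough C# sketch of those steps via P/Invoke might look like this (hWnd is assumed to be the handle of the external window, obtained e.g. via FindWindow):
using System;
using System.Runtime.InteropServices;

static class WindowActivator
{
    const int SW_RESTORE = 9;

    [DllImport("user32.dll")]
    static extern bool IsIconic(IntPtr hWnd);

    [DllImport("user32.dll")]
    static extern bool ShowWindow(IntPtr hWnd, int nCmdShow);

    [DllImport("user32.dll")]
    static extern bool SetForegroundWindow(IntPtr hWnd);

    public static void BringToFront(IntPtr hWnd)
    {
        // 1. Check whether the window is minimized
        if (IsIconic(hWnd))
        {
            // 2. Restore it before trying to activate it
            ShowWindow(hWnd, SW_RESTORE);
        }
        // 3. Ask Windows to give it the foreground
        SetForegroundWindow(hWnd);
    }
}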
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18505",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Converting List<Integer> to List<String> I have a list of integers, List<Integer>, and I'd like to convert all the integer objects into Strings, thus finishing up with a new List<String>.
Naturally, I could create a new List<String> and loop through the list calling String.valueOf() for each integer, but I was wondering if there was a better (read: more automatic) way of doing it?
A: Instead of using String.valueOf I'd use .toString(); it avoids some of the auto boxing described by @johnathan.holland
The javadoc says that valueOf returns the same thing as Integer.toString().
List<Integer> oldList = ...
List<String> newList = new ArrayList<String>(oldList.size());
for (Integer myInt : oldList) {
newList.add(myInt.toString());
}
A: The source for String.valueOf shows this:
public static String valueOf(Object obj) {
return (obj == null) ? "null" : obj.toString();
}
Not that it matters much, but I would use toString.
A: Here's a one-liner solution without cheating with a non-JDK library.
List<String> strings = Arrays.asList(list.toString().replaceAll("\\[(.*)\\]", "$1").split(", "));
A: As far as I know, iterating and instantiating is the only way to do this. Something like (for others' potential benefit, since I'm sure you know how to do this):
List<Integer> oldList = ...
/* Specify the size of the list up front to prevent resizing. */
List<String> newList = new ArrayList<>(oldList.size());
for (Integer myInt : oldList) {
newList.add(String.valueOf(myInt));
}
A: Another Solution using Guava and Java 8
List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5);
List<String> strings = Lists.transform(numbers, number -> String.valueOf(number));
A: What you're doing is fine, but if you feel the need to 'Java-it-up' you could use a Transformer and the collect method from Apache Commons, e.g.:
public class IntegerToStringTransformer implements Transformer<Integer, String> {
public String transform(final Integer i) {
return (i == null ? null : i.toString());
}
}
..and then..
CollectionUtils.collect(
collectionOfIntegers,
new IntegerToStringTransformer(),
newCollectionOfStrings);
A: To the people concerned about "boxing" in jsight's answer: there is none. String.valueOf(Object) is used here, and no unboxing to int is ever performed.
Whether you use Integer.toString() or String.valueOf(Object) depends on how you want to handle possible nulls. Do you want to throw an exception (probably), or have "null" Strings in your list (maybe). If the former, do you want to throw a NullPointerException or some other type?
Also, one small flaw in jsight's response: List is an interface, you can't use the new operator on it. I would probably use a java.util.ArrayList in this case, especially since we know up front how long the list is likely to be.
A: List<String> stringList = integerList.stream().map(String::valueOf).collect(Collectors.toList());
A: Not core Java, and not generic-ified, but the popular Jakarta commons collections library has some useful abstractions for this sort of task. Specifically, have a look at the collect methods on
CollectionUtils
Something to consider if you are already using commons collections in your project.
A: A slightly more concise solution using the forEach method on the original list:
List<Integer> oldList = Arrays.asList(1, 2, 3, 4, 5);
List<String> newList = new ArrayList<>(oldList.size());
oldList.forEach(e -> newList.add(String.valueOf(e)));
A: @Jonathan: I could be mistaken, but I believe that String.valueOf() in this case will call the String.valueOf(Object) function rather than getting boxed to String.valueOf(int). String.valueOf(Object) just returns "null" if it is null or calls Object.toString() if non-null, which shouldn't involve boxing (although obviously instantiating new string objects is involved).
A: I think using Object.toString() for any purpose other than debugging is probably a really bad idea, even though in this case the two are functionally equivalent (assuming the list has no nulls). Developers are free to change the behavior of any toString() method without any warning, including the toString() methods of any classes in the standard library.
Don't even worry about the performance problems caused by the boxing/unboxing process. If performance is critical, just use an array. If it's really critical, don't use Java. Trying to outsmart the JVM will only lead to heartache.
A: An answer for experts only:
List<Integer> ints = ...;
String all = new ArrayList<Integer>(ints).toString();
String[] split = all.substring(1, all.length()-1).split(", ");
List<String> strs = Arrays.asList(split);
A: Lambdaj allows you to do that in a very simple and readable way. For example, supposing you have a list of Integers and you want to convert them to the corresponding String representation, you could write something like this:
List<Integer> ints = asList(1, 2, 3, 4);
Iterator<String> stringIterator = convertIterator(ints, new Converter<Integer, String>() {
    public String convert(Integer i) { return Integer.toString(i); }
});
Lambdaj applies the conversion function only while you're iterating on the result.
A: Using Google Collections from Guava-Project, you could use the transform method in the Lists class
import com.google.common.collect.Lists;
import com.google.common.base.Functions
List<Integer> integers = Arrays.asList(1, 2, 3, 4);
List<String> strings = Lists.transform(integers, Functions.toStringFunction());
The List returned by transform is a view on the backing list - the transformation will be applied on each access to the transformed list.
Be aware that Functions.toStringFunction() will throw a NullPointerException when applied to null, so only use it if you are sure your list will not contain null.
A: Solution for Java 8. A bit longer than the Guava one, but at least you don't have to install a library.
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
//...
List<Integer> integers = Arrays.asList(1, 2, 3, 4);
List<String> strings = integers.stream().map(Object::toString)
.collect(Collectors.toList());
For Java 11,
List<String> strings = integers.stream().map(Object::toString)
.collect(Collectors.toUnmodifiableList());
Still no map convenience method, really?
A: You can't avoid the "boxing overhead"; Java's faux generic containers can only store Objects, so your ints must be boxed into Integers. In principle it could avoid the downcast from Object to Integer (since it's pointless, because Object is good enough for both String.valueOf and Object.toString) but I don't know if the compiler is smart enough to do that. The conversion from String to Object should be more or less a no-op, so I would be disinclined to worry about that one.
A: Just for fun, a solution using the jsr166y fork-join framework that should be in JDK7.
import java.util.concurrent.forkjoin.*;
private final ForkJoinExecutor executor = new ForkJoinPool();
...
List<Integer> ints = ...;
List<String> strs =
ParallelArray.create(ints.size(), Integer.class, executor)
.withMapping(new Ops.Op<Integer,String>() { public String op(Integer i) {
return String.valueOf(i);
}})
.all()
.asList();
(Disclaimer: Not compiled. Spec is not finalised. Etc.)
Unlikely to be in JDK7 is a bit of type inference and syntactical sugar to make that withMapping call less verbose:
.withMapping(#(Integer i) String.valueOf(i))
A: This is such a basic thing to do I wouldn't use an external library (it will cause a dependency in your project that you probably don't need).
We have a class of static methods specifically crafted to do these sorts of jobs. Because the code for this is so simple, we let HotSpot do the optimization for us. This seems to be a theme in my code recently: write very simple (straightforward) code and let HotSpot do its magic. We rarely have performance issues around code like this - when a new VM version comes along you get all the extra speed benefits etc.
As much as I love Jakarta collections, they don't support Generics and use 1.4 as the LCD. I am wary of Google Collections because they are listed as Alpha support level!
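As an illustration of the kind of static helper class described above (all names here are made up, not from any library):
import java.util.ArrayList;
import java.util.List;

public final class ListConversions {
    private ListConversions() {}

    /** Copies any list into a list of the String forms of its elements. */
    public static List<String> toStringList(List<?> source) {
        List<String> result = new ArrayList<String>(source.size());
        for (Object o : source) {
            result.add(String.valueOf(o));
        }
        return result;
    }
}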
A:
I didn't see any solution that follows the principle of space
complexity. If the list of integers has a large number of elements, then it's a
big problem.
It would be really good to remove each Integer from the List<Integer> and free
the space once it's added to the List<String>.
We can use an iterator to achieve this.
List<Integer> oldList = new ArrayList<>();
oldList.add(12);
oldList.add(14);
.......
.......
List<String> newList = new ArrayList<String>(oldList.size());
Iterator<Integer> itr = oldList.iterator();
while(itr.hasNext()){
newList.add(itr.next().toString());
itr.remove();
}
A: I just wanted to chime in with an object oriented solution to the problem.
If you model domain objects, then the solution is in the domain objects. The domain here is a List of integers for which we want string values.
The easiest way would be to not convert the list at all.
That being said, in order to convert without converting, change the original list of Integer to List of Value, where Value looks something like this...
class Value {
Integer value;
public Integer getInt()
{
return value;
}
public String getString()
{
return String.valueOf(value);
}
}
This will be faster and take up less memory than copying the List.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18524",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "117"
} |
Q: C#: What Else Do You Use Besides DataSet I've found myself increasingly unsatisfied with the DataSet/DataTable/DataRow paradigm in .Net, mostly because it's often a couple of steps more complicated than what I really want to do. In cases where I'm binding to controls, DataSets are fine. But in other cases, there seems to be a fair amount of mental overhead.
I've played a bit with SqlDataReader, and that seems to be good for simple jaunts through a select, but I feel like there may be some other models lurking in .Net that are useful to learn more about. I feel like all of the help I find on this just uses DataSet by default. Maybe that and DataReader really are the best options.
I'm not looking for a best/worst breakdown, just curious what my options are and what experiences you've had with them. Thanks!
-Eric Sipple
A: We've moved away from datasets and built our own ORM objects loosely based on CSLA. You can get the same job done with either a DataSet or LINQ or ORM but re-using it is (we've found) a lot easier. 'Less code make more happy'.
A: I was fed up with DataSets in .NET 1.1; at least they optimised it so that it no longer slows down exponentially for large sets.
It was always a rather bloated model - I haven't seen many apps that use most of its features.
SqlDataReader was good, but I used to wrap it in an IEnumerable<T> where the T was some typed representation of my data row.
Linq is a far better replacement in my opinion.
A: I've been using the Data Transfer Objects pattern (originally from the Java world, I believe), with a SqDataReader to populate collections of DTOs from the data layer for use in other layers of the application.
The DTOs themselves are very lightweight and simple classes composed of properties with gets/sets. They can be easily serialized/deserialized, and used for databinding, making them pretty well suited to most of my development needs.
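To illustrate, a minimal sketch of that pattern (the PersonDto class, connection string, and stored procedure name are all hypothetical):
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

public class PersonDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class PersonDataAccess
{
    public static List<PersonDto> GetPeople(string connectionString)
    {
        var people = new List<PersonDto>();
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.GetPeople", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // copy each row into a lightweight DTO
                    people.Add(new PersonDto
                    {
                        Id = reader.GetInt32(reader.GetOrdinal("Id")),
                        Name = reader.GetString(reader.GetOrdinal("Name"))
                    });
                }
            }
        }
        return people;
    }
}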
A: I'm a huge fan of SubSonic. A well-written batch/CMD file can generate an entire object model for your database in minutes; you can compile it into its own DLL and use it as needed. Wonderful model, wonderful tool. The site makes it sound like an ASP.NET deal, but generally speaking it works wonderfully just about anywhere if you're not trying to use its UI framework (which I'm moderately disappointed in) or its application-level auto-generation tools.
For the record, here is a version of the command I use to work with it (so that you don't have to fight it too hard initially):
sonic.exe generate /server [servername] /db [dbname] /out [outputPathForCSfiles] /generatedNamespace [myNamespace] /useSPs true /removeUnderscores true
That does it every time ... Then build the DLL off that directory -- this is part of an NAnt project, fired off by CruiseControl.NET -- and away we go. I'm using that in WinForms, ASP.NET, even some command-line utils. This generates the fewest dependencies and the greatest "portability" (between related projects, EG).
Note
The above is now well over a year old. While I still hold great fondness in my heart for SubSonic, I have moved on to LINQ-to-SQL when I have the luxury of working in .NET 3.5. In .NET 2.0, I still use SubSonic. So my new official advice is platform version-dependent. In case of .NET 3+, go with the accepted answer. In case of .NET 2.0, go with SubSonic.
A: I have used typed and untyped DataSets, DataViewManagers, DataViews, DataTables, DataRows, DataRowViews, and just about anything you can do with the stack since it first came out, in multiple enterprise projects. It took me a while to get used to how all of it worked. I have written custom components that leverage the stack, as ADO.NET did not quite give me what I really needed. One such component compares DataSets and then updates backend stores. I really know how all of these items work, and those who have seen what I have done are very impressed that I managed to get beyond the feeling that it was only useful for demos.
I use ADO.NET binding in Winforms and I also use the code in console apps. I most recently have teamed with another developer to create a custom ORM that we used against a crazy datamodel that we were given from contractors that looked nothing like our normal data stores.
I searched today for replacement to ADO.NET and I do not see anything that I should seriously try to learn to replace what I currently use.
A: Since .NET 3.5 came out, I've exclusively used LINQ. It's really that good; I don't see any reason to use any of those old crutches any more.
As great as LINQ is, though, I think any ORM system would allow you to do away with that dreck.
A: DataSets are great for demos.
I wouldn't know what to do with one if you made me use it.
I use ObservableCollection
Then again, I'm in the client app space, WPF and Silverlight. So passing a DataSet or DataTable through a service is ... gross.
DataReaders are fast, since they are a forward only stream of the result set.
A: I use them extensively but I don't make use of any of the "advanced" features that Microsoft was really pushing when the framework first came out. I'm basically just using them as Lists of Hashtables, which I find perfectly useful.
I have not seen good results when people have tried to make complex typed DataSets, or tried to actually set up the foreign key relationships between tables with DataSets.
Of course, I am one of the weird ones that actually prefers a DataRow to an entity object instance.
A: Pre-LINQ I used a DataReader to fill Lists of my own custom domain objects, but post-LINQ I have been using L2S to fill L2S entities, or L2S to fill domain objects.
Once I get a bit more time to investigate I suspect that Entity Framework objects will be my new favourite solution!
A: Selecting a modern, stable, and actively supported ORM tool is probably the single biggest boost to productivity that just about any project of moderate size and complexity can get. If you're concluding that you absolutely, absolutely, absolutely have to write your own DAL and ORM, you're probably doing it wrong (or you're using the world's most obscure database).
If you're doing raw datasets and rows and what not, spend the day to try an ORM and you'll be amazed at how much more productive you can be w/o all the drudgery of mapping columns to fields or all the time filling Sql command objects and all the other hoop jumping we all once went through.
I love me some Subsonic, though for smaller scale projects along with demos/prototypes, I find Linq to Sql pretty damn useful too. I hate EF with a passion though. :P
A: I've used typed DataSets for several projects. They model the database well, enforce constraints on the client side, and in general are a solid data access technology, especially with the changes in .NET 2.0 with TableAdapters.
Typed DataSets get a bad rap from people who like to use emotive words like "bloated" to describe them. I'll grant that I like using a good O/R mapper more than using DataSets; it just "feels" better to use objects and collections instead of typed DataTables, DataRows, etc. But what I've found is that if for whatever reason you can't or don't want to use an O/R mapper, typed DataSets are a good solid choice that are easy enough to use and will get you 90% of the benefits of an O/R mapper.
EDIT:
Some here suggest that DataReaders are the "fast" alternative. But if you use Reflector to look at the internals of a DataAdapter (which DataTables are filled by), you'll see that it uses...a DataReader. Typed DataSets may have a larger memory footprint than other options, but I've yet to see the application where this makes a tangible difference.
Use the best tool for the job. Don't make your decision on the basis of emotive words like "gross" or "bloated" which have no factual basis.
A: I just build my business objects from scratch, and almost never use the DataTable and especially not the DataSet anymore, except to initially populate the business objects. The advantages to building your own are testability, type safety and IntelliSense, extensibility (try adding to a DataSet) and readability (unless you enjoy reading things like Convert.ToDecimal(dt.Rows[i]["blah"].ToString())).
If I were smarter I'd also use an ORM and 3rd party DI framework, but just haven't yet felt the need for those. I'm doing lots of smaller size projects or additions to larger projects.
A: I NEVER use DataSets. They are big, heavyweight objects only usable (as someone pointed out here) for "demoware". There are lots of great alternatives shown here.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18533",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: shortcut for creating a Map from a List in groovy? I'd like some shorthand for this:
Map rowToMap(row) {
def rowMap = [:];
row.columns.each{ rowMap[it.name] = it.val }
return rowMap;
}
given the way the GDK stuff is, I'd expect to be able to do something like:
Map rowToMap(row) {
row.columns.collectMap{ [it.name,it.val] }
}
but I haven't seen anything in the docs... am I missing something? or am I just way too lazy?
A: Also, if you use Google Collections (http://code.google.com/p/google-collections/), you can do something like this:
map = Maps.uniqueIndex(list, Functions.identity());
A: ok... I've played with this a little more and I think this is a pretty cool method...
def collectMap = {Closure callback->
def map = [:]
delegate.each {
def r = callback.call(it)
map[r[0]] = r[1]
}
return map
}
ExpandoMetaClass.enableGlobally()
Collection.metaClass.collectMap = collectMap
Map.metaClass.collectMap = collectMap
now any subclass of Map or Collection have this method...
here I use it to reverse the key/value in a Map
[1:2, 3:4].collectMap{[it.value, it.key]} == [2:1, 4:3]
and here I use it to create a map from a list
[1,2].collectMap{[it,it]} == [1:1, 2:2]
now I just pop this into a class that gets called as my app is starting and this method is available throughout my code.
EDIT:
to add the method to all arrays...
Object[].metaClass.collectMap = collectMap
A: Check out "inject". Real functional programming wonks call it "fold".
columns.inject([:]) { memo, entry ->
memo[entry.name] = entry.val
return memo
}
And, while you're at it, you probably want to define methods as Categories instead of right on the metaClass. That way, you can define it once for all Collections:
class PropertyMapCategory {
static Map mapProperty(Collection c, String keyParam, String valParam) {
return c.inject([:]) { memo, entry ->
memo[entry[keyParam]] = entry[valParam]
return memo
}
}
}
Example usage:
use(PropertyMapCategory) {
println columns.mapProperty('name', 'val')
}
A: I recently came across the need to do exactly that: converting a list into a map. This question was posted before Groovy version 1.7.9 came out, so the method collectEntries didn't exist yet. It works exactly like the collectMap method that was proposed:
Map rowToMap(row) {
row.columns.collectEntries{[it.name, it.val]}
}
If for some reason you are stuck with an older Groovy version, the inject method can also be used (as proposed here). This is a slightly modified version that takes only one expression inside the closure (just for the sake of character saving!):
Map rowToMap(row) {
row.columns.inject([:]) {map, col -> map << [(col.name): col.val]}
}
The + operator can also be used instead of the <<.
A: Was the groupBy method not available when this question was asked?
A: If what you need is a simple key-value pair, then the method collectEntries should suffice. For example
def names = ['Foo', 'Bar']
def firstAlphabetVsName = names.collectEntries {[it.charAt(0), it]} // [F:Foo, B:Bar]
But if you want a structure similar to a Multimap, in which there are multiple values per key, then you'd want to use the groupBy method
def names = ['Foo', 'Bar', 'Fooey']
def firstAlphabetVsNames = names.groupBy { it.charAt(0) } // [F:[Foo, Fooey], B:[Bar]]
A: I can't find anything built in... but using the ExpandoMetaClass I can do this:
ArrayList.metaClass.collectMap = {Closure callback->
def map = [:]
delegate.each {
def r = callback.call(it)
map[r[0]] = r[1]
}
return map
}
this adds the collectMap method to all ArrayLists... I'm not sure why adding it to List or Collection didn't work.. I guess that's for another question... but now I can do this...
assert ["foo":"oof", "42":"24", "bar":"rab"] ==
["foo", "42", "bar"].collectMap { return [it, it.reverse()] }
from List to calculated Map with one closure... exactly what I was looking for.
Edit: the reason I couldn't add the method to the interfaces List and Collection was because I did not do this:
List.metaClass.enableGlobally()
after that method call, you can add methods to interfaces.. which in this case means my collectMap method will work on ranges like this:
(0..2).collectMap{[it, it*2]}
which yields the map: [0:0, 1:2, 2:4]
A: What about something like this?
// setup
class Pair {
String k;
String v;
public Pair(def k, def v) { this.k = k ; this.v = v; }
}
def list = [ new Pair('a', 'b'), new Pair('c', 'd') ]
// the idea
def map = [:]
list.each{ it -> map[it.k] = it.v }
// verify
println map['c']
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18538",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "119"
} |
Q: Exact age calculation
Possible Duplicate:
How do I calculate someone's age in C#?
Maybe this is silly, and I don't have issues with my age, but sometimes it is good to calculate someone's exact age. I have entered my birthdate in my profile (01/12/1975, "dd/mm/yyyy") and it calculated 33, but I'm actually still 32. Isn't it better to calculate the exact age?
Maybe
DateTime dt1 = DateTime.Now;
TimeSpan dt2;
dt2 = dt1.Subtract(new DateTime(1975, 12, 01));
double year = dt2.TotalDays / 365;
The result of year is 32.77405678074
Could this code be OK?
A:
Maybe this is silly, and I don't have issues with my age, but sometimes it is good to calculate someone's exact age. I have entered my birthdate in my profile (01/12/1975, "dd/mm/yyyy") and it calculated 33, but I'm actually still 32. Isn't it better to calculate the exact age?
My guess would be that this is a localization issue, though I don't know how it would happen, since (at least for me) the profile has you fill out your age in the format "YYYY/MM/DD". But your birthday is one that reads as a valid date (January 12th) in traditional U.S. settings, so this is the area I'd look into. I was born in 1975, also, and my birthday is next month, and it's got my age right.
A: Actually, because of leap years, your code would be off. Since the timespan object has no TotalYears property the best way to get it would be this
Pardon the VB.Net
Dim myAge As Integer = DateTime.Now.Year - birthDate.Year
' Subtract a year if this year's birthday hasn't happened yet
If DateTime.Now.Month < birthDate.Month _
OrElse (DateTime.Now.Month = birthDate.Month AndAlso DateTime.Now.Day < birthDate.Day) Then
myAge -= 1
End If
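For comparison, the same idea in C# might look like this (a sketch; birthDate comes from wherever the date of birth is stored):
DateTime birthDate = new DateTime(1975, 12, 1);
DateTime now = DateTime.Now;
int age = now.Year - birthDate.Year;
// subtract one if this year's birthday hasn't happened yet
if (now.Month < birthDate.Month ||
    (now.Month == birthDate.Month && now.Day < birthDate.Day))
{
    age--;
}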
A: If you were born on January 12th 1975, you would be 33 years old today.
If you were born on December 1st 1975, you would be 32 years old today.
If you read the note by the birthday field when editing your profile you'll see it says "YYYY/MM/DD", I'm sure it will try to interpret dates of other formats but it looks like it interprets MM/DD/YYYY (US standard dates) in preference to DD/MM/YYYY (European standard dates). The easy fix is to enter the date of your birthday according to the suggested input style.
A: int ag1;
string st, ag;
void agecal()
{
st = TextBox4.Text;
DateTimeFormatInfo dtfi = new DateTimeFormatInfo();
dtfi.ShortDatePattern = "MM/dd/yyyy";
dtfi.DateSeparator = "/";
DateTime dt = Convert.ToDateTime(st, dtfi);
ag1 = int.Parse(dt.Year.ToString());
int years = DateTime.Now.Year - ag1;
ag = years.ToString();
TextBox3.Text = ag.ToString();
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18584",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Why can't you bind the Size of a windows form to ApplicationSettings? Update: Solved, with code
I got it working, see my answer below for the code...
Original Post
As Tundey pointed out in his answer to my last question, you can bind nearly everything about a windows forms control to ApplicationSettings pretty effortlessly. So is there really no way to do this with form Size? This tutorial says you need to handle Size explicitly so you can save RestoreBounds instead of size if the window is maximized or minimized. However, I hoped I could just use a property like:
public Size RestoreSize
{
get
{
if (this.WindowState == FormWindowState.Normal)
{
return this.Size;
}
else
{
return this.RestoreBounds.Size;
}
}
set
{
...
}
}
But I can't see a way to bind this in the designer (Size is notably missing from the PropertyBinding list).
A: The reason why the Form.Size property is not available in the settings binding UI is because this property is marked DesignerSerializationVisibility.Hidden. This means that the designer doesn't know how to serialise it, let alone generate a data binding for it. Instead the Form.ClientSize property is the one that gets serialised.
If you try and get clever by binding Location and ClientSize, you'll see another problem. When you try to resize your form from the left or top edge, you'll see weird behaviour. This is apparently related to the way that two-way data binding works in the context of property sets that mutually affect each other. Both Location and ClientSize eventually call into a common method, SetBoundsCore().
Also, data binding to properties like Location and Size is just not efficient. Each time the user moves or resizes the form, Windows sends hundreds of messages to the form, causing the data binding logic to do a lot of processing, when all you really want is to store the last position and size before the form is closed.
This is a very simplified version of what I do:
private void MyForm_FormClosing(object sender, FormClosingEventArgs e)
{
Properties.Settings.Default.MyState = this.WindowState;
if (this.WindowState == FormWindowState.Normal)
{
Properties.Settings.Default.MySize = this.Size;
Properties.Settings.Default.MyLoc = this.Location;
}
else
{
Properties.Settings.Default.MySize = this.RestoreBounds.Size;
Properties.Settings.Default.MyLoc = this.RestoreBounds.Location;
}
Properties.Settings.Default.Save();
}
private void MyForm_Load(object sender, EventArgs e)
{
this.Size = Properties.Settings.Default.MySize;
this.Location = Properties.Settings.Default.MyLoc;
this.WindowState = Properties.Settings.Default.MyState;
}
Why is this a very simplified version? Because doing this properly is a lot trickier than it looks :-)
A: One of the reason I imagine size binding is not allowed is because the screen may change between sessions.
Loading the size back when the resolution has been reduced could result in the title bar being beyond the limits of the screen.
You also need to be wary of multiple monitor setups, where monitors may no longer be available when you app next runs.
A: I finally came up with a Form subclass that solves this, once and for all. To use it:
*
*Inherit from RestorableForm instead of Form.
*Add a binding in (ApplicationSettings) -> (PropertyBinding) to WindowRestoreState.
*Call Properties.Settings.Default.Save() when the window is about to close.
Now window position and state will be remembered between sessions. Following the suggestions from other posters below, I included a function ConstrainToScreen that makes sure the window fits nicely on the available displays when restoring itself.
Code
// Consider this code public domain. If you want, you can even tell
// your boss, attractive women, or the other guy in your cube that
// you wrote it. Enjoy!
using System;
using System.Windows.Forms;
using System.ComponentModel;
using System.Drawing;
namespace Utilities
{
public class RestorableForm : Form, INotifyPropertyChanged
{
// We invoke this event when the binding needs to be updated.
public event PropertyChangedEventHandler PropertyChanged;
// This stores the last window position and state
private WindowRestoreStateInfo windowRestoreState;
// Now we define the property that we will bind to our settings.
[Browsable(false)] // Don't show it in the Properties list
[SettingsBindable(true)] // But do enable binding to settings
public WindowRestoreStateInfo WindowRestoreState
{
get { return windowRestoreState; }
set
{
windowRestoreState = value;
if (PropertyChanged != null)
{
// If anybody's listening, let them know the
// binding needs to be updated:
PropertyChanged(this,
new PropertyChangedEventArgs("WindowRestoreState"));
}
}
}
protected override void OnClosing(CancelEventArgs e)
{
WindowRestoreState = new WindowRestoreStateInfo();
WindowRestoreState.Bounds
= WindowState == FormWindowState.Normal ?
Bounds : RestoreBounds;
WindowRestoreState.WindowState = WindowState;
base.OnClosing(e);
}
protected override void OnLoad(EventArgs e)
{
base.OnLoad(e);
if (WindowRestoreState != null)
{
Bounds = ConstrainToScreen(WindowRestoreState.Bounds);
WindowState = WindowRestoreState.WindowState;
}
}
// This helper class stores both position and state.
// That way, we only have to set one binding.
public class WindowRestoreStateInfo
{
Rectangle bounds;
public Rectangle Bounds
{
get { return bounds; }
set { bounds = value; }
}
FormWindowState windowState;
public FormWindowState WindowState
{
get { return windowState; }
set { windowState = value; }
}
}
private Rectangle ConstrainToScreen(Rectangle bounds)
{
Screen screen = Screen.FromRectangle(bounds);
Rectangle workingArea = screen.WorkingArea;
int width = Math.Min(bounds.Width, workingArea.Width);
int height = Math.Min(bounds.Height, workingArea.Height);
// mmm....minimax
int left = Math.Min(workingArea.Right - width,
Math.Max(bounds.Left, workingArea.Left));
int top = Math.Min(workingArea.Bottom - height,
Math.Max(bounds.Top, workingArea.Top));
return new Rectangle(left, top, width, height);
}
}
}
Settings Bindings References
*
*SettingsBindableAttribute
*INotifyPropertyChanged
A: Well I have had a quick play with this and you are correct, while there is no way to directly bind the size of the form to AppSettings, you can add your own values and change the size on load.
I would perhaps recommend that if this is a common feature, you subclass Form and make it automatically probe the App.config for the form's size settings.
(Or you could roll your own file.. Get it to query an Xml file "formname.settings.xml" or something? - thinking out loud!)..
Here's what I had (very rough, no error checking etc).
App.Config
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<appSettings>
<add key ="FormHeight" value="500" />
<add key ="FormWidth" value="200"/>
</appSettings>
</configuration>
Form Code
private void Form1_Load(object sender, EventArgs e)
{
string height = ConfigurationManager.AppSettings["FormHeight"];
int h = int.Parse(height);
string width = ConfigurationManager.AppSettings["FormWidth"];
int w = int.Parse(width);
this.Size = new Size(w, h); // Size takes (width, height)
}
A: I agree with Rob Cooper's answer. But I think Martin makes a very good point. Nothing like having users open your application and the app is off-screen!
So in reality, you'll want to combine both answers and bear in mind the current screen dimensions before setting your form's size.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18585",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Best practice for integrating TDD with web application development? Unit testing and ASP.NET web applications are an ambiguous point in my group. More often than not, good testing practices fall through the cracks and web applications end up going live for several years with no tests.
The cause of this pain point generally revolves around the hassle of writing UI automation mid-development.
How do you or your organization integrate best TDD practices with web application development?
A: I layer out the application and at least unit test from the presenter/controller (whichever is your preference, mvc/mvp) to the data layer. That way I have good test coverage over most of the code that is written.
I have looked at FitNesse, Watin and Selenium as options to automate the UI testing but I haven't got around to using these on any projects yet, so we stick with human testing. FitNesse was the one I was leaning toward but I couldn't introduce this as well as introducing TDD (does that make me bad? I hope not!).
A: This is a good question, one that I will be subscribing too :)
I am still relatively new to web dev, and I too am looking at a lot of code that is largely untested.
For me, I keep the UI as light as possible (normally only a few lines of code) and test the crap out of everything else. At least I can then have some confidence that everything that makes it to the UI is as correct as it can be.
Is it perfect? Perhaps not, but at least it is still quite highly automated and the core code (where most of the "magic" happens) still has pretty good coverage.
A: A common practice is to move all the code you can out of the codebehind and into an object you can test in isolation. Such code will usually follow the MVP or MVC design patterns. If you search on "Rhino Igloo" you will probably find the link to its Subversion repository. That code is worth a study, as it demonstrate one of the best MVP implementations on Web Forms that I have seen.
Your codebehind will, when following this pattern, do two things:
*
*Forward all user actions to the presenter.
*Render data provided by the presenter.
Unit testing the presenter should be trivial.
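To make that concrete, a bare-bones MVP sketch might look like this (all names are illustrative, not taken from Rhino Igloo):
using System.Collections.Generic;

public class Order
{
    public int Id { get; set; }
}

public interface IOrderRepository
{
    IList<Order> FindByCustomer(string customerName);
}

public interface IOrderView
{
    string CustomerName { get; }
    void ShowOrders(IList<Order> orders);
}

public class OrderPresenter
{
    private readonly IOrderView view;
    private readonly IOrderRepository repository;

    public OrderPresenter(IOrderView view, IOrderRepository repository)
    {
        this.view = view;
        this.repository = repository;
    }

    // The codebehind forwards the button click here; in a unit test you
    // pass in fake view and repository implementations instead.
    public void SearchClicked()
    {
        view.ShowOrders(repository.FindByCustomer(view.CustomerName));
    }
}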
Update: Rhino Igloo can be found here: https://svn.sourceforge.net/svnroot/rhino-tools/trunk/rhino-igloo/
A: I would generally avoid testing that involves relying on UI elements. I favor integration testing, which tests everything from your database layer up to the view layer (but not the actual layout).
Try to start a test suite before writing a line of actual code in a new project, since it's harder to write tests later.
Choose carefully what you test - don't mindlessly write tests for everything. Sometimes it's a boring task, so don't make it harder. If you write too many tests, you risk abandoning that task under the weight of time-consuming maintenance.
Try to bundle as much functionality as possible into a single test. That way, if something goes wrong, the errors will propagate anyway. For example, if you have a digest-generating class - test the actual output, not every single helper function.
Don't trust yourself. Assume that you will always make mistakes, and so you write tests to make your life easier, not harder.
If you are not feeling good about writing tests, you are probably doing it wrong ;)
A: Unit testing will be achievable if you separate your layers appropriately. As Rob Cooper implied, don't put any logic in your WebForm other than logic to manage your presentation. All the other stuff, your logic and persistence layers, should be kept in separate classes, and then you can test those individually.
To test the GUI, some people like Selenium. Others complain that it is a pain to set up.
A: There have been attempts to get Microsoft's free UI Automation (included in .NET Framework 3.0) to work with web applications (ASP.NET). A German company called Artiso has written a blog entry that explains how to achieve that (link).
However, their blog post also links an MSDN webcast that explains the UI Automation Framework with WinForms, and after I had a look at this, I noticed you need the AutomationId to get a reference to the respective controls. However, in web applications, the controls do not have an AutomationId.
I asked Thomas Schissler (Artiso) about this and he explained that this was a major drawback of Internet Explorer. He referenced an older Microsoft technology (MSAA) and was hoping that IE8 would do this better.
However, I also gave Watin a try and it seems to work pretty well. I even liked Wax, which allows you to implement simple test cases via Microsoft Excel worksheets.
A: Ivonna can unit test your views. I'd still recommend moving most of the code to other parts. However, some code just belongs there, like references to controls or control event handlers.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18601",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
} |
Q: Accessing an Exchange Server without Outlook Is there a method of accessing an Exchange server that does not have IMAP or POP3 enabled without Outlook?
It does not appear that Outlook Express supports Exchange (only IMAP and POP3).
A: The only way I can think of is if the Exchange server has Outlook Web Access (OWA) turned on. You can test this by trying the server name in your browser like so: http://server/exchange.
If you mean programmatically then the recommended way is to use WebDAV (which is what OWA uses).
@Jon I think the method you linked to uses IMAP.
Edit: @Pat: SimpleMAPI is the protocol that allows applications such as Word etc to talk to your email client, not your email client to the server - ExtendedMAPI is needed for that, which Thunderbird doesn't support.
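For a flavor of the WebDAV route, a very rough sketch of issuing a WebDAV SEARCH against an Exchange mailbox over HTTP follows; the URL, credentials, and SQL-like query are all assumptions, so consult the Exchange WebDAV documentation for the real schema:
using System.IO;
using System.Net;
using System.Text;

string url = "http://server/exchange/username/inbox/";
string query =
    "<?xml version=\"1.0\"?>" +
    "<D:searchrequest xmlns:D=\"DAV:\">" +
    "<D:sql>SELECT \"DAV:displayname\" FROM \"" + url + "\"</D:sql>" +
    "</D:searchrequest>";

HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
request.Credentials = new NetworkCredential("username", "password", "DOMAIN");
request.Method = "SEARCH"; // WebDAV verb, not a standard HTTP method
request.ContentType = "text/xml";
byte[] body = Encoding.UTF8.GetBytes(query);
request.ContentLength = body.Length;
using (Stream s = request.GetRequestStream())
{
    s.Write(body, 0, body.Length);
}

using (WebResponse response = request.GetResponse())
using (StreamReader reader = new StreamReader(response.GetResponseStream()))
{
    string xml = reader.ReadToEnd(); // raw WebDAV XML, still needs parsing
}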
A: The Outlook Web Access URL is more likely http(s)://server/owa
Exchange 2007 is SSL only by default.
A: There's also Exchange Web Services in newer versions of Exchange.
If you need to use Outlook Express and talk to an Exchange server which doesn't support IMAP/POP3, you're stuck, sadly.
A: You can use Thunderbird to access Exchange e-mail and contact lists.
Edit - Oops, this uses IMAP, didn't answer the question.
A: I know thunderbird and eudora have support for SimpleMAPI so they can talk to an exchange server but the command set of what they can do is rather basic (you need Extended MAPI support for the whole set)
afaik Outlook is the only windows client with full support for Extended MAPI.
A: If you're willing to consider a major change, you could look at Snow Leopard's support of Exchange through iCal and Mail programs. I find it isn't as robust as outlook (expected), but it gives me better off-site performance as the RPC protocol isn't active through TCP/IP connections on our server. I find that the RPC via VPN or HTTPs to both be immensely slow. Mac Exchange to the rescue, unless I'm on my windows box, then it's Outlook Web Access (aka "owa" aka webmail).
A: Yes! You can use the ExQuilla plugin for Linux (How to guide) to access Exchange 2007/2010 email and address books. Original reply
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18606",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Is it a bad idea to expose inheritance hierarchy in namespace structure? I've got a group of inter-related classes that are all overridden together to create a particular implementation. I'm wondering if it is a good idea to enclose the interrelated subclasses in a namespace.
For example purposes, consider the following namespaces and classes:
namespace Protocol
{
public abstract class Message { }
public abstract class Driver { }
}
namespace Protocol.Tcp
{
public class TcpMessage : Message { }
public class TcpDriver : Driver { }
}
namespace Protocol.Ftp
{
public class FtpMessage : Message { }
public class FtpDriver : Driver { }
}
What is the best way to structure the namespaces? It seems unavoidable to expose the inheritance in the namespace since the base classes don't really belong in either the Protocol.Tcp namespace or the Protocol.Ftp namespace.
A: I think you are perhaps worrying too much!
Does it make sense logically? Do you know where to find your code within the namespaces?
I would much rather see a codebase like the above with a small number of classes, relevant to the name with a hierarchy, than one large namespace where everything is interrelated..
Remember, namespacing is there for precisely this, to organise your codebase logically
What you have seems logical :)
EDIT:
As an example:
using System.Data;
using System.Data.Sql;
;)
A: The original tags show that this post is about C# - therefore multiple inheritance is an irrelevancy - you can't multiply inherit in C#.
Maybe you should consider defining some interfaces that define what the basic contracts of a Message and a Driver are and then you may feel a little free-er to use the namespace structure to mimic the technology differences.
A: If this were me, I would define 2 namespaces:
Protocol
and
Protocol.Driver
Dividing the namespace like this separates your "library code" vs your "executable / test code."
I also create my namespaces to match the directory structure; it will give logic to your program's structure and code files. (Maybe you already do this...)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18608",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: What is the best way to write a form in ASP.NET MVC? What is the best way to write a form to submit some data in ASP.NET MVC? Is it as Scott Guthrie demonstrates here? Are there better approaches? Perhaps with less use of strings?
A: I don't really like strings in my code, as they can't be refactored. A nice way is to use LINQ expressions. If you get passed a model as ViewData, you can use the following statement:
<%= ShowDropDownList(viewData => viewData.Name); %>
...
public static string ShowDropDownList<T>(this HtmlHelper html, Expression<Action<T>> action)
{
var body = action.Body as MethodCallExpression;
if (body == null)
throw new InvalidOperationException("Expression must be a method call.");
if (body.Object != action.Parameters[0])
throw new InvalidOperationException("Method call must target lambda argument.");
string propertyName = body.Method.Name;
string typeName = typeof(T).Name;
// now you can call the original method
html.Select(propertyName, ... );
}
I know the original solution performs faster, but I think this one is much cleaner.
Hope this helps!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18614",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How do you configure tomcat to bind to a single ip address (localhost) instead of all addresses? How do you configure tomcat to bind to a single ip address (localhost) instead of all addresses?
A: It may be worth mentioning that running Tomcat as a non-root user (which you should be doing) will prevent you from using a port below 1024 on *nix. If you want to use Tomcat as a standalone server -- as its performance no longer requires it to be fronted by Apache or the like -- you'll want to bind to port 80 along with whatever IP address you're specifying.
You can do this by using IPTABLES to redirect port 80 to 8080.
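The usual incantation looks something like this (a sketch; run as root, and the exact flags depend on your distribution and whether localhost traffic also needs covering):
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080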
A: Several connectors are configured, and each connector has an optional "address" attribute where you can set the IP address.
*
*Edit tomcat/conf/server.xml.
*Specify a bind address for that connector:
<Connector
port="8080"
protocol="HTTP/1.1"
address="127.0.0.1"
connectionTimeout="20000"
redirectPort="8443"
/>
A: it's well documented here:
https://cwiki.apache.org/confluence/display/TOMCAT/Connectors#Connectors-Q6
How do I bind to a specific ip address? - "Each Connector element allows an address property. See the HTTP Connector docs or the AJP Connector docs". And HTTP Connectors docs:
http://tomcat.apache.org/tomcat-7.0-doc/config/http.html
Standard Implementation -> address
"For servers with more than one IP address, this attribute specifies which address will be used for listening on the specified port. By default, this port will be used on all IP addresses associated with the server."
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18617",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "83"
} |
Q: How to monitor a text file in realtime For debugging purposes in a somewhat closed system, I have to output text to a file.
Does anyone know of a tool that runs on windows (console based or not) that detects changes to a file and outputs them in real-time?
A: Tail is the best answer so far.
If you don't use Windows, you probably already have tail.
If you do use Windows, you can get a whole slew of Unix command line tools from here. Unzip them and put them somewhere in your PATH.
Then just do this at the command prompt from the same folder your log file is in:
tail -n 50 -f whatever.log
This will show you the last 50 lines of the file and will update as the file updates.
You can combine grep with tail with great results - something like this:
tail -n 50 -f whatever.log | grep Error
gives you just lines with "Error" in it.
Good luck!
A: I like tools that will perform more than one task, Notepad++ is a great notepad replacement and has a Document Monitor plugin (installs with standard msi) that works great. It also is portable so you can have it on a thumb drive for use anywhere.
For a command line option, PowerShell (which is really a new command line) has a great feature already mentioned.
Get-Content someFile.txt -wait
But you can also filter at the command line using a regular expression
Get-Content web.log -wait | where { $_ -match "ERROR" }
A: FileSystemWatcher works a treat, although you do have to be a little careful about duplicate events firing - 1st link from Google - but bearing that in mind can produce great results.
A: Late answer, though it might be helpful for someone -- LogExpert seems to be an interesting tail utility for Windows.
A: Try SMSTrace from Microsoft (now called CMTrace, and directly available in the Start Menu on some versions of Windows)
It's a brilliant GUI tool that monitors updates to any text file in real time, even if it's locked for writing by another process.
Don't be fooled by the description; it's capable of monitoring any file, including .txt, .log or .csv.
Its ability to monitor locked files is extremely useful, and is one of the reasons why this utility shines.
One of the nicest features is line coloring. If it sees the word "ERROR", the line becomes red. If it sees the word "WARN", the line becomes yellow. This makes the logs a lot easier to follow.
A: *
*Tail for Win32
*Apache Chainsaw - used this with log4net logs, may require file to be in a certain format
A: I have used FileSystemWatcher for monitoring of text files for a component I recently built. There may be better options (I never found anything in my limited research) but that seemed to do the trick nicely :)
Crap, my bad, you're actually after a tool to do it all for you..
Well if you get unlucky and want to roll your own ;)
A: You can use the FileSystemWatcher class in System.IO.
From MSDN:
public class Watcher
{
public static void Main()
{
Run();
}
[PermissionSet(SecurityAction.Demand, Name="FullTrust")]
public static void Run()
{
string[] args = System.Environment.GetCommandLineArgs();
// If a directory is not specified, exit program.
if(args.Length != 2)
{
// Display the proper way to call the program.
Console.WriteLine("Usage: Watcher.exe (directory)");
return;
}
// Create a new FileSystemWatcher and set its properties.
FileSystemWatcher watcher = new FileSystemWatcher();
watcher.Path = args[1];
/* Watch for changes in LastAccess and LastWrite times, and
the renaming of files or directories. */
watcher.NotifyFilter = NotifyFilters.LastAccess | NotifyFilters.LastWrite
| NotifyFilters.FileName | NotifyFilters.DirectoryName;
// Only watch text files.
watcher.Filter = "*.txt";
// Add event handlers.
watcher.Changed += new FileSystemEventHandler(OnChanged);
watcher.Created += new FileSystemEventHandler(OnChanged);
watcher.Deleted += new FileSystemEventHandler(OnChanged);
watcher.Renamed += new RenamedEventHandler(OnRenamed);
// Begin watching.
watcher.EnableRaisingEvents = true;
// Wait for the user to quit the program.
Console.WriteLine("Press \'q\' to quit the sample.");
while(Console.Read()!='q');
}
// Define the event handlers.
private static void OnChanged(object source, FileSystemEventArgs e)
{
// Specify what is done when a file is changed, created, or deleted.
Console.WriteLine("File: " + e.FullPath + " " + e.ChangeType);
}
private static void OnRenamed(object source, RenamedEventArgs e)
{
// Specify what is done when a file is renamed.
Console.WriteLine("File: {0} renamed to {1}", e.OldFullPath, e.FullPath);
}
}
You can also follow this link Watching Folder Activity in VB.NET
A: Snake Tail. It is a good option.
http://snakenest.com/snaketail/
A: When using Windows PowerShell you can do the following:
Get-Content someFile.txt -wait
A: I use "tail -f" under cygwin.
A: I use BareTail for doing this on Windows. It's free and has some nice features, such as tabs for tailing multiple files and configurable highlighting.
A: Just a shameless plug to tail onto the answer, but I have a free web based app called Hacksaw used for viewing log4net files. I've put in an auto refresh option so you can get near real time updates without having to refresh the browser all the time.
A: Yeah I've used both Tail for Win32 and tail on Cygwin. I've found both to be excellent, although I prefer Cygwin slightly as I'm able to tail files over the internet efficiently without crashes (Tail for Win32 has crashed on me in some instances).
So basically, I would use tail on Cygwin and redirect the output to a file on my local machine. I would then have this file open in Vim and reload (:e) it when required.
A: +1 for BareTail. I actually use BareTailPro, which provides real-time filtering on the tail with basic search strings or search strings using regex.
A: To make the list complete here's a link to the GNU WIN32 ports of many useful tools (amongst them is tail).
GNUWin32 CoreUtils
A: Surprised no one has mentioned Trace32 (or Trace64). These are great (free) Microsoft utilities that give a nice GUI and highlight any errors, etc. It also has filtering and sounds like exactly what you need.
A: Here's a utility I wrote to do just that:
It uses a FileSystemWatcher to look for changes in log files within local folders or network shares (don't have to be mounted, just provide the UNC path) and appends the new content to the console.
on github: https://github.com/danbyrne84/multitail
http://www.danielbyrne.net/projects/multitail
Hope this helps
A: @echo off
set LoggingFile=C:\foo.txt
set lineNr=0
:while1
for /f "usebackq delims=" %%i in (`more +%lineNr% %LoggingFile%`) DO (
echo %%i
set /a lineNr+=1
REM Have an appropriate stop condition here by checking i
)
goto :while1
A command prompt way of doing it.
A: FileMon is a free stand-alone tool that can detect all kinds of file access. You can filter out anything unwanted. It does not show you the data that has actually changed, though.
A: I second "tail -f" in cygwin. I assume that Tail for Win32 will accomplish the same thing.
A: Tail for Win32
A: I wrote a tiny viewer of my own:
https://github.com/enexusde/Delphi/wiki/TinyLog
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18632",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "75"
} |
Q: Why do we need entity objects? I really need to see some honest, thoughtful debate on the merits of the currently accepted enterprise application design paradigm.
I am not convinced that entity objects should exist.
By entity objects I mean the typical things we tend to build for our applications, like "Person", "Account", "Order", etc.
My current design philosophy is this:
*
*All database access must be accomplished via stored procedures.
*Whenever you need data, call a stored procedure and iterate over a SqlDataReader or the rows in a DataTable
(Note: I have also built enterprise applications with Java EE, java folks please substitute the equivalent for my .NET examples)
I am not anti-OO. I write lots of classes for different purposes, just not entities. I will admit that a large portion of the classes I write are static helper classes.
I am not building toys. I'm talking about large, high volume transactional applications deployed across multiple machines. Web applications, windows services, web services, b2b interaction, you name it.
I have used OR Mappers. I have written a few. I have used the Java EE stack, CSLA, and a few other equivalents. I have not only used them but actively developed and maintained these applications in production environments.
I have come to the battle-tested conclusion that entity objects are getting in our way, and our lives would be so much easier without them.
Consider this simple example: you get a support call about a certain page in your application that is not working correctly, maybe one of the fields is not being persisted like it should be. With my model, the developer assigned to find the problem opens exactly 3 files. An ASPX, an ASPX.CS and a SQL file with the stored procedure. The problem, which might be a missing parameter to the stored procedure call, takes minutes to solve. But with any entity model, you will invariably fire up the debugger, start stepping through code, and you may end up with 15-20 files open in Visual Studio. By the time you step down to the bottom of the stack, you forgot where you started. We can only keep so many things in our heads at one time. Software is incredibly complex without adding any unnecessary layers.
Development complexity and troubleshooting are just one side of my gripe.
Now let's talk about scalability.
Do developers realize that each and every time they write or modify any code that interacts with the database, they need to do a thorough analysis of the exact impact on the database? And not just the development copy, I mean a mimic of production, so you can see that the additional column you now require for your object just invalidated the current query plan and a report that was running in 1 second will now take 2 minutes, just because you added a single column to the select list? And it turns out that the index you now require is so big that the DBA is going to have to modify the physical layout of your files?
If you let people get too far away from the physical data store with an abstraction, they will create havoc with an application that needs to scale.
I am not a zealot. I can be convinced if I am wrong, and maybe I am, since there is such a strong push towards Linq to Sql, ADO.NET EF, Hibernate, Java EE, etc. Please think through your responses, if I am missing something I really want to know what it is, and why I should change my thinking.
[Edit]
It looks like this question is suddenly active again, so now that we have the new comment feature I have commented directly on several answers. Thanks for the replies, I think this is a healthy discussion.
I probably should have been more clear that I am talking about enterprise applications. I really can't comment on, say, a game that's running on someone's desktop, or a mobile app.
One thing I have to put up here at the top in response to several similar answers: orthogonality and separation of concerns often get cited as reasons to go entity/ORM. Stored procedures, to me, are the best example of separation of concerns that I can think of. If you disallow all other access to the database, other than via stored procedures, you could in theory redesign your entire data model and not break any code, so long as you maintained the inputs and outputs of the stored procedures. They are a perfect example of programming by contract (just so long as you avoid "select *" and document the result sets).
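To make the contract idea concrete, here's a minimal sketch of what the calling code looks like under this model. The procedure name, parameter and column names are all hypothetical; the point is that the caller depends only on the proc's inputs and outputs, never on the tables behind it:
using System;
using System.Data;
using System.Data.SqlClient;

class PersonPage
{
    static void Load(string connectionString, int personId)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.GetPersonById", conn))
        {
            // The contract: one input parameter, one documented result set.
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@PersonId", personId);
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0} {1}", reader["FirstName"], reader["LastName"]);
            }
        }
    }
}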
Ask someone who's been in the industry for a long time and has worked with long-lived applications: how many application and UI layers have come and gone while a database has lived on? How hard is it to tune and refactor a database when there are 4 or 5 different persistence layers generating SQL to get at the data? You can't change anything! ORMs or any code that generates SQL lock your database in stone.
A: I think you may be "biting off more than you can chew" on this topic. Ted Neward was not being flippant when he called it the "Vietnam of Computer Science".
One thing I can absolutely guarantee you is that it will change nobody's point of view on the matter, as has been proven so often on innumerable other blogs, forums, podcasts etc.
It's certainly ok to have open disucssion and debate about a controversial topic, it's just this one has been done so many times that both "sides" have agreed to disagree and just got on with writing software.
If you want to do some further reading on both sides, see articles on Ted's blog, Ayende Rahien, Jimmy Nilsson, Scott Bellware, Alt.Net, Stephen Forte, Eric Evans etc.
A: @Dan, sorry, that's not the kind of thing I'm looking for. I know the theory. Your statement "is a very bad idea" is not backed up by a real example. We are trying to develop software in less time, with fewer people and fewer mistakes, and we want the ability to easily make changes. Your multi-layer model, in my experience, is a negative in all of the above categories. Especially with regard to making the data model the last thing you do. The physical data model must be an important consideration from day 1.
A: I think it comes down to how complicated the "logic" of the application is, and where you have implemented it. If all your logic is in stored procedures, and all your application does is call those procedures and display the results, then developing entity objects is indeed a waste of time. But for an application where the objects have rich interactions with one another, and the database is just a persistence mechanism, there can be value to having those objects.
So, I'd say there is no one-size-fits-all answer. Developers do need to be aware that, sometimes, trying to be too OO can cause more problems than it solves.
A: I found your question really interesting.
Usually I need entity objects to encapsulate the business logic of an application. It would be really complicated and inadequate to push this logic into the data layer.
What would you do to avoid these entity objects? What solution do you have in mind?
A: Entity objects can facilitate caching at the application layer. Good luck caching a DataReader.
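A minimal sketch of the point, assuming an ASP.NET host (the Person shape, cache key and five-minute expiry are all made up): a detached entity can sit in the cache between requests, while a DataReader cannot, because it holds an open connection.
using System;
using System.Web;
using System.Web.Caching;

public class Person { public int Id; public string Name; }

public static class PersonCache
{
    public static Person Get(int id)
    {
        string key = "person:" + id;
        var cached = (Person)HttpRuntime.Cache[key];
        if (cached != null)
            return cached; // served from memory, no database round trip

        Person p = LoadPersonFromDb(id); // hits the database only on a miss
        HttpRuntime.Cache.Insert(key, p, null,
            DateTime.UtcNow.AddMinutes(5), Cache.NoSlidingExpiration);
        return p;
    }

    // Stand-in for the real data access; hypothetical.
    static Person LoadPersonFromDb(int id) { return new Person { Id = id, Name = "..." }; }
}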
A: There are other good reasons for entity objects besides abstraction and loose coupling. One of the things I like most is the strong typing that you can't get with a DataReader or a DataTable. Another reason is that when done well, proper entity classes can make the code more maintainable by using first-class constructs for domain-specific terms that anyone looking at the code is likely to understand rather than a bunch of strings with field names in them used for indexing a DataRow. Stored procedures are really orthogonal to the use of an ORM since a lot of mapping frameworks give you the ability to map to sprocs.
I wouldn't consider sprocs + datareaders a substitute for a good ORM. With stored procedures, you're still constrained by, and tightly-coupled to, the procedure's type signature, which uses a different type system than the calling code. Stored procedures can be subject to modification to accommodate additional options and schema changes. An alternative to stored procedures in the case where the schema is subject to change is to use views--you can map objects to views and then re-map views to the underlying tables when you change them.
I can understand your aversion to ORMs if your experience mainly consists of Java EE and CSLA. You might want to have a look at LINQ to SQL, which is a very lightweight framework and is primarily a one-to-one mapping with the database tables but usually only needs minor extension for them to be full-blown business objects. LINQ to SQL can also map input and output objects to stored procedures' parameters and results.
The ADO.NET Entity framework has the added advantage that your database tables can be viewed as entity classes inheriting from each other, or as columns from multiple tables aggregated into a single entity. If you need to change the schema, you can change the mapping from the conceptual model to the storage schema without changing the actual application code. And again, stored procedures can be used here.
I think that more IT projects in enterprises fail because of unmaintainability of the code or poor developer productivity (which can happen from, e.g., context switching between sproc-writing and app-writing) than scalability problems of an application.
A: We should also talk about the notion of what entities really are.
When I read through this discussion, I get the impression that most people here are looking at entities in the sense of an Anemic Domain Model.
A lot of people are considering the Anemic Domain Model as an antipattern!
There is value in rich domain models. That is what Domain Driven Design is all about.
I personally believe that OO is a way to conquer complexity. This means not only technical complexity (like data-access, ui-binding, security ...) but also complexity in the business domain!
If we can apply OO techniques to analyze, model, design and implement our business problems, this is a tremendous advantage for maintainability and extensibility of non-trivial applications!
There are differences between your entities and your tables. Entities should represent your model, tables just represent the data-aspect of your model!
It is true that data lives longer than apps, but consider this quote from David Laribee: Models are forever ... data is a happy side effect.
Some more links on this topic:
*
*Why Setters and Getters are evil
*Return of pure OO
*POJO vs. NOJO
*Super Models Part 2
*TDD, Mocks and Design
A: Really interesting question. Honestly, I cannot prove that entities are good, but I can share why I like them. Code like
void exportOrder(Order order, String fileName){...};
is not concerned with where the order came from: the DB, a web request, a unit test, etc. It makes the method declare explicitly what it requires, instead of taking a DataRow and documenting which columns it expects and which types they should be. The same applies if you somehow implement it as a stored procedure: you still need to push a record id to it, even though the record does not necessarily have to be present in the DB.
The implementation of this method is based on the Order abstraction, not on how exactly it is represented in the DB. Most such operations that I have implemented really do not depend on how the data is stored. I do understand that some operations require coupling with the DB structure for performance and scalability purposes, but in my experience there are not too many of them. Very often it is enough to know that Person has .getFirstName() returning String and .getAddress() returning Address, and that Address has .getZipCode(), etc., without caring which tables are involved in storing that data.
If you have to deal with problems like the ones you described, where an additional column breaks report performance, then the DB is a critical part of your tasks, and you indeed should be as close as possible to it. While entities can provide some convenient abstractions, they can hide some important details as well.
Scalability is an interesting point here - most websites which require enormous scalability (like Facebook, LiveJournal, Flickr) tend to use a DB-ascetic approach, where the DB is used as rarely as possible and scalability issues are solved by caching, especially by RAM usage. http://highscalability.com/ has some interesting articles on it.
A: I would also like to add to Dan's answer that separating both models could enable your application to be run on different database servers or even database models.
A: What if you need to scale your app by load balancing more than one web server? You could install the full app on all web servers, but a better solution is to have the web servers talk to an application server.
But if there aren't any entity objects, they won't have very much to talk about.
I'm not saying that you shouldn't write monoliths if it's a simple, internal, short-life application. But as soon as it gets moderately complex, or it should last a significant amount of time, you really need to think about a good design.
This saves time when it comes to maintaining it.
By splitting application logic from presentation logic and data access, and by passing DTOs between them, you decouple them. Allowing them to change independently.
A: You might find this post on comp.object interesting.
I'm not claiming to agree or disagree but it's interesting and (I think) relevant to this topic.
A: A question: How do you handle disconnected applications if all your business logic is trapped in the database?
In the type of Enterprise application I'm interested in, we have to deal with multiple sites, some of them must be able to function in a disconnected state.
If your business logic is encapsulated in a Domain layer that is simple to incorporate into various application types -say, as a dll- then I can build applications that are aware of the business rules and are able, when necessary, to apply them locally.
If you keep the Domain layer in stored procedures in the database, you are stuck with a single type of application that needs a permanent line-of-sight to the database.
It's ok for a certain class of environments, but it certainly doesn't cover the whole spectrum of Enterprise applications.
A: Theory says that highly cohesive, loosely coupled implementations are the way forward.
So I suppose you are questioning that approach, namely separating concerns.
Should my aspx.cs file be interacting with the database, calling a sproc, and understanding IDataReader?
In a team environment, especially where you have less technical people dealing with the aspx portion of the application, I don't need these people being able to "touch" this stuff.
Separating my domain from my database protects me from structural changes in the database, surely a good thing? Sure, database efficiency is absolutely important, so let someone who is most excellent at that stuff deal with it, in one place, with as little impact on the rest of the system as possible.
Unless I am misunderstanding your approach, one structural change in the database could have a large impact area with the surface of your application. I see that this separation of concerns enables me and my team to minimise this. Also any new member of the team should understand this approach better.
Also, your approach seems to advocate that the business logic of your application reside in your database? This feels wrong to me; SQL is really good at querying data, and not, imho, at expressing business logic.
Interesting thought though, although it feels one step away from SQL in the aspx, which from my bad old unstructured asp days, fills me with dread.
A: One reason - separating your domain model from your database model.
What I do is use Test Driven Development, so I write my UI and Model layers first and the Data layer is mocked; the UI and model are built around domain-specific objects, and later I map these objects to whatever technology I'm using in the Data Layer. It's a bad idea to let the database structure determine the design of your application. Where possible, write the app first and let that influence the structure of your database, not the other way around.
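A minimal sketch of what "the Data layer is mocked" can look like in practice: the model depends on an abstraction, and tests substitute an in-memory fake long before any real data layer exists. All the names here are illustrative:
using System.Collections.Generic;

public class Customer { public int Id; public string Name; }

public interface ICustomerRepository
{
    Customer GetById(int id);
    void Save(Customer customer);
}

// Used by unit tests; a production implementation maps to the real store later.
public class FakeCustomerRepository : ICustomerRepository
{
    readonly Dictionary<int, Customer> store = new Dictionary<int, Customer>();
    public Customer GetById(int id) { return store[id]; }
    public void Save(Customer customer) { store[customer.Id] = customer; }
}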
A: For me it boils down to I don't want my application to be concerned with how the data is stored. I'll probably get slapped for saying this...but your application is not your data, data is an artifact of the application. I want my application to be thinking in terms of Customers, Orders and Items, not a technology like DataSets, DataTables and DataRows...cuz who knows how long those will be around.
I agree that there is always a certain amount of coupling, but I prefer that coupling to reach upwards rather than downwards. I can tweak the limbs and leaves of a tree more easily than I can alter its trunk.
I tend to reserve sprocs for reporting as the queries do tend to get a little nastier than the applications general data access.
I also tend to think, with proper unit testing early on, that scenarios like the one column not being persisted are likely not to be a problem.
A: @jdecuyper, one maxim I repeat to myself often is "if your business logic is not in your database, it is only a recommendation". I think Paul Nielsen said that in one of his books. Application layers and UI come and go, but data usually lives for a very long time.
How do I avoid entity objects? Stored procedures mostly. I also freely admit that business logic tends to reach through all layers in an application whether you intend it to or not. A certain amount of coupling is inherent and unavoidable.
A: I have been thinking about this same thing a lot lately; I was a heavy user of CSLA for a while, and I love the purity of saying that "all of your business logic (or at least as much as is reasonably possible) is encapsulated in business entities".
I have seen the business entity model provide a lot of value in cases where the design of the database is different than the way you work with the data, which is the case in a lot of business software.
For example, the idea of a "customer" may consist of a main record in a Customer table, combined with all of the orders the customer has placed, as well as all the customer's employees and their contact information, and some of the properties of a customer and its children may be determined from lookup tables. It's really nice from a development standpoint to be able to work with the Customer as a single entity, since from a business perspective, the concept of Customer contains all of these things, and the relationships may or may not be enforced in the database.
While I appreciate the quote that "if your business rule is not in your database, it's only a suggestion", I also believe that you shouldn't design the database to enforce business rules, you should design it to be efficient, fast and normalized.
That said, as others have noted above, there is no "perfect design", the tool has to fit the job. But using business entities can really help with maintenance and productivity, since you know where to go to modify business logic, and objects can model real-world concepts in an intuitive way.
A: Eric,
No one is stopping you from choosing the framework/approach that you would wish. If you are going to go the "data driven/stored procedure-powered" path, then by all means, go for it! Especially if it really, really helps you deliver your applications on-spec and on-time.
The caveat being (a flipside to your question, that is): ALL of your business rules should be in stored procedures, and your application is nothing more than a thin client.
That being said, same rules apply if you do your application in OOP : be consistent. Follow OOP's tenets, and that includes creating entity objects to represent your domain models.
The only real rule here is the word consistency. Nobody is stopping you from going DB-centric. No one is stopping you from doing old-school structured (aka, functional/procedural) programs. Hell, no one is stopping anybody from doing COBOL-style code. BUT an application has to be very, very consistent once going down this path, if it wishes to attain any degree of success.
A: I'm really not sure what you consider "Enterprise Applications". But I'm getting the impression you are defining it as an Internal Application where the RDBMS would be set in stone and the system wouldn't have to be interoperable with any other systems whether internal or external.
But what if you had a database with 100 tables, which equates to 4 stored procedures per table just for basic CRUD operations - that's 400 stored procedures which need to be maintained, aren't strongly typed (so are susceptible to typos) and can't be unit tested. What happens when you get a new CTO who is an Open Source evangelist and wants to change the RDBMS from SQL Server to MySql?
A lot of software today whether Enterprise Applications or Products are using SOA and have some requirements for exposing Web Services, at least the software I am and have been involved with do.
Using your approach you would end up exposing a Serialized DataTable or DataRows. Now this may be deemed acceptable if the Client is guaranteed to be .NET and on an internal network. But when the Client is not known then you should be striving to Design an API which is intuitive and in most cases you would not want to be exposing the Full Database schema.
I certainly wouldn't want to explain to a Java developer what a DataTable is and how to use it. There's also the consideration of bandwidth and payload size, and serialized DataTables and DataSets are very heavy.
There is no silver bullet with software design, and it really depends on where the priorities lie; for me it's in unit-testable code and loosely coupled components that can be easily consumed by any client.
just my 2 cents
A: I'd like to offer another angle to the problem of distance between OO and RDB: history.
Any software has a model of reality that is to some degree an abstraction of reality. No computer program can capture all the complexities of reality, and programs are written just to solve a set of problems from reality. Therefore any software model is a reduction of reality. Sometimes the software model forces reality to reduce itself. Like when you want the car rental company to reserve any car for you as long as it is blue and has alloys, but the operator can't comply because your request won't fit in the computer.
RDB comes from a very old tradition of putting information into tables, called accounting. Accounting was done on paper, then on punch cards, then in computers. But accounting is already a reduction of reality. Accounting has forced people to follow its system so long that it has become accepted reality. That's why it is relatively easy to make computer software for accounting, accounting has had its information model, long before the computer came along.
Given the importance of good accounting systems, and the acceptance you get from any business managers, these systems have become very advanced. The database foundations are now very solid and no one hesitates about keeping vital data in something so trustworthy.
I guess that OO must have come along when people found that other aspects of reality are harder to model than accounting (which is already a model). OO has become a very successful idea, but persistence of OO data is relatively underdeveloped. RDB/Accounting has had easy wins, but OO is a much larger field (basically everything that isn't accounting).
So many of us have wanted to use OO, but we still want safe storage of our data. What can be safer than to store our data the same way as the esteemed accounting system does? It is an enticing prospect, but we all run into the same pitfalls. Very few have taken the trouble to think through OO persistence, compared to the massive efforts by the RDB industry, which has had the benefit of accounting's tradition and position.
Prevayler and db4o are some suggestions; I'm sure there are others I haven't heard of, but none have seemed to get half the press of, say, Hibernate.
Storing your objects in good old files doesn't even seem to be taken seriously for multiuser applications, and especially web applications.
In my everyday struggle to close the chasm between OO and RDB I use OO as much as possible but try to keep inheritance to a minimum. I don't often use SPs. I'll use the advanced query stuff only in aspects that look like accounting.
I'll be happily surprised when the chasm is closed for good. I think the solution will come when Oracle launches something like "Oracle Object Instance Base". To really catch on, it will have to have a reassuring name.
A: Eric,
You are dead on. For any really scalable / easily maintained / robust application the only real answer is to dispense with all the garbage and stick to the basics.
I've followed a similar trajectory with my career and have come to the same conclusions. Of course, we're considered heretics and looked at funny. But my stuff works and works well.
Every line of code should be looked at with suspicion.
A: I would like to answer with an example similar to the one you proposed.
At my company I had to build a simple CRUD section for products. I built all my entities and a separate DAL. Later another developer had to change a related table and he even renamed several fields. The only file I had to change to update my form was the DAL for that table.
What (in my opinion) entities bring to a project is:
Orthogonality: Changes in one layer might not affect other layers (of course, if you make a huge change to the database it would ripple through all the layers, but most small changes won't).
Testability: You can test your logic without touching your database. This speeds up your tests (allowing you to run them more frequently).
Separation of concerns: In a big product you can assign the database to a DBA and he can optimize the hell out of it. Assign the Model to a business expert who has the knowledge necessary to design it. Assign individual forms to developers more experienced with WebForms, etc.
Finally I would like to add that most ORM mappers support stored procedures since that's what you are using.
Cheers.
A: Not a lot of time at the moment, but just off the top of my head...
The entity model lets you give a consistent interface to the database (and other possible systems) even beyond what a stored procedure interface can do. By using enterprise-wide business models you can make sure that all applications affect the data consistently which is a VERY important thing. Otherwise you end up with bad data, which is just plain evil.
If you only have one application then you don't really have an "enterprise" system, regardless of how big that application or your data are. In that case you can use an approach similar to what you talk about. Just be aware of the work that will be needed if you decide to grow your systems in the future.
Here are a few things that you should keep in mind (IMO) though:
*
*Generated SQL code is bad (exceptions to follow). Sorry, I know that a lot of people think that it's a huge time saver, but I've never found a system that could generate more efficient code than what I could write, and often the code is just plain horrible. You also often end up generating a ton of SQL code that never gets used. The exception here is very simple patterns, like maybe lookup tables. A lot of people get carried away on it though.
*Entities <> Tables (or even logical data model entities necessarily). A data model often has data rules that should be enforced as closely to the database as possible which can include rules around how table rows relate to each other or other similar rules that are too complex for declarative RI. These should be handled in stored procedures. If all of your stored procedures are simple CRUD procs, you can't do that. On top of that, the CRUD model usually creates performance issues because it doesn't minimize round trips across the network to the database. That's often the biggest bottleneck in an enterprise application.
A: Sometimes, your application and data layer are not that tightly coupled. For example, you may have a telephone billing application. You later create a separate application which monitors phone usage to a) better advertise to you b) optimise your phone plan.
These applications have different concerns and data requirements (even the data is coming out of the same database), they would drive different designs. Your code base can end up an absolute mess (in either application) and a nightmare to maintain if you let the database drive the code.
A: Applications that have domain logic separated from the data storage logic are adaptable to any kind of data source (database or otherwise) or UI (web or windows(or linux etc.)) application.
You're pretty much stuck with your database, which isn't bad if you're with a company that is satisfied with its current database system. However, because databases evolve over time, there might be a neat new database system that your company wants to use. What if they wanted to switch to a web services method of data access (as Service Oriented Architecture sometimes does)? You might have to port your stored procedures all over the place.
Also the domain logic abstracts away the UI, which can be more important in large complex systems that have ever evolving UIs (especially when they are constantly searching for more customers).
Also, while I agree that there is no definitive answer to the question of stored procedures versus domain logic. I'm in the domain logic camp (and I think they are winning over time), because I believe that elaborate stored procedures are harder to maintain than elaborate domain logic. But that's a whole other debate
A: I think that you are just used to writing a specific kind of application, and solving a certain kind of problem. You seem to be attacking this from a "database first" perspective. There are lots of developers out there where data is persisted to a DB but performance is not a top priority. In lots of cases putting an abstraction over the persistence layer simplifies code greatly and the performance cost is a non-issue.
Whatever you are doing, it's not OOP. It's not wrong, it's just not OOP, and it doesn't make sense to apply your solutions to every other problem out there.
A: Interesting question. A couple thoughts:
*
*How would you unit test if all of your business logic was in your database?
*Wouldn't changes to your database structure, specifically ones that affect several pages in your app, be a major hassle to change throughout the app?
A: Good Question!
One approach I rather like is to create an iterator/generator object that emits instances of objects that are relevant to a specific context. Usually this object wraps some underlying database access stuff, but I don't need to know that when using it.
For example,
An AnswerIterator object generates AnswerIterator.Answer objects. Under the hood it's iterating over a SQL Statement to fetch all the answers, and another SQL statement to fetch all related comments. But when using the iterator I just use the Answer object that has the minimum properties for this context. With a little bit of skeleton code this becomes almost trivial to do.
I've found that this works well when I have a huge dataset to work on, and when done right, it gives me small, transient objects that are relatively easy to test.
It's basically a thin veneer over the Database Access stuff, but it still gives me the flexibility of abstracting it when I need to.
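A skeletal version of the idea, with the SQL and column names invented for illustration - the caller just foreaches over small Answer objects and never sees the data access:
using System.Collections.Generic;
using System.Data.SqlClient;

public class AnswerIterator
{
    public class Answer
    {
        public int Id;
        public string Body;
    }

    readonly string connectionString;

    public AnswerIterator(string connectionString)
    {
        this.connectionString = connectionString;
    }

    // Yields one lightweight Answer at a time; the reader stays open only
    // while the caller is iterating.
    public IEnumerable<Answer> GetAnswers()
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SELECT Id, Body FROM Answers", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    yield return new Answer { Id = (int)reader["Id"], Body = (string)reader["Body"] };
            }
        }
    }
}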
A: The objects in my apps tend to relate one-to-one to the database, but I'm finding that using LINQ to SQL rather than sprocs makes it much easier to write complicated queries, especially being able to build them up using deferred execution, e.g. from r in Images.User.Ratings where ... etc. This saves me trying to work out several join statements in SQL, and having Skip & Take for paging also simplifies the code, rather than having to embed the row_number and 'over' code.
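A rough sketch of what that looks like, assuming a hypothetical LINQ to SQL DataContext with Images, Users and Ratings tables (all names made up). Nothing executes until the query is enumerated, and Skip/Take get translated into the ROW_NUMBER() OVER (...) paging pattern for you:
using System;
using System.Linq;

// db, pageIndex and pageSize are assumed to be in scope;
// db would be a System.Data.Linq.DataContext subclass.
var page = (from r in db.Images
            where r.User.Ratings.Any(rt => rt.Score > 3)
            orderby r.UploadedOn descending
            select r)
           .Skip(pageIndex * pageSize)   // deferred: no SQL has run yet
           .Take(pageSize);

foreach (var image in page)              // the single paged query runs here
    Console.WriteLine(image.Title);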
A: Why stop at entity objects? If you don't see the value with entity objects in an enterprise level app, then just do your data access in a purely functional/procedural language and wire it up to a UI. Why not just cut out all the OO "fluff"?
A: I've been speculating whether relational databases driven by SQL aren't a bit at cross-purposes with these frameworks that use the ActiveRecord paradigm. One fundamental problem is that AR (and good OO design, for that matter), drive us to decompose logic; and SQL simply isn't amenable to statement decomposition.
I wonder if using an isam persistence model for the database wouldn't be a better idea; a better impedance match to OO; more agreement on the basic idea of data as tables; more consistent with the conventional artifacts of OO persistence. One good example is that FKs and their associations can be more explicit.
RoR has a rep for being a database slug, and I suspect this issue is a large part of the reason.
Has anyone tried to use an isam database for an ActiveRecord implementation?
A: I'm puzzled about the "lock your database in stone" argument in favor of stored procs. I can take my ActiveRecord model and move it from MySQL to Postgres to SQLite, thank you very much. I couldn't do that with anything stored proc-based unless I wanted to rewrite them all.
I assume you mean that you're locking your database schema in stone. That argument is more interesting. To some extent I think it's argued from the perspective of an application with minimal unit tests and code coverage - the applications where you don't change your code out of sheer fear you're going to break "something."
My experience with stored-proc-based systems is minimal though. I'm curious, in large applications, how do you manage all of the data relations? On one page I show a product with a picture. On another page I show a product and the user who created it. On another page I show a product and the comments about it. On another page I need to show that product with no picture joined with a table of specifications about it.... etc. etc. I have a data model with a lot of relationships. I assume you don't write a stored proc for every combination? The DRY principle is the one that I worry about. How many queries am I writing where I'm re-left-joining (effectively re-coding) my relationships? And, while we're talking about locking the schema, how many stored procs am I going to need to re-write?
A: I think entity objects are overemphasized in enterprise solutions nowadays. They cannot contain business-layer functions, since those belong in the Services in the service layer, nor UI-specific functions, which belong in the UI layer, etc. Entity objects do allow the designers to think better in terms of designing the application well, but they do not necessarily have to contain all the application logic in them. They can be dumb objects that follow certain rules and interfaces and can be used to build other layers on top of them, acting as data carriers between the layers.
A: I don't see what entity objects have to do with scalability, you're probably talking about using ORM tools, in this case I agree with you.
I'm very interested in scalability. Entity objects are never in your way of building a highly scalable application but you have to do it the right way, in other words you need a hand-written DAL, as opposed to a DAL generated using some ORM. Actually this is why I don't like ORMs, there's nothing that beats a hand-written DAL, I also don't use LINQ as I read in many places that it has a big overhead. I tweak every query in my apps and create the needed indexes, I don't let some ORM generate the code for me.
I don't agree with you that entity objects make the code harder to maintain; actually the whole purpose of this architecture is to make it easier to maintain and modify your code, and this is what I see in practice. I wrote spaghetti code for a long time (didn't use 3-tier or n-tier architectures), so I know what I'm talking about.
Also Entity objects are needed for caching, I wonder how you cache the data in your applications if you don't use Entity objects, do you use datasets or datatables?
A: To be honest, I think if you can get away with data over forms, go for it! But the minute things get sticky, you would be wise to learn how to strucure things to gain some simplicity.
I haven't read all the answers, but here are the common points where things get sticky:
*
*Code is repeated; buggy, unstable code
*HUGE classes loaded with static classes
*Logic is everywhere and anywhere (aspx, static methods, sql, triggers)
*Interacting with multiple objects and sharing common features will prove difficult
As far as domain vs. data: I think data will always win; functionality is ALL that matters to the client. It has to work. I'm a proponent of refactoring when you can: if you break a principle to deliver something that works on time, you can always go back and refactor.
Also, a quick word on debuggers and complex domains. I have seen many people get scared because they hit interfaces and don't understand all the acrobatics that are possible in very advanced OOP/polymorphic code. I TOTALLY understand; sometimes you can get lost and deterred. This is why they make tools. I'm less scared of a solution with 1000 files than of a humongous method with 1000 lines. And I have seen both, believe it or not.
There is a happy medium too: if you're willing to write tests, you won't worry so much about the debugger and stepping through code. If you get good tools and find a balance, you'll solve all the problems above and also keep things simple enough to get around.
A: Well, I want to thank you for a fascinating discussion. I'm working my way through Stephen Walther's ASP.NET MVC Framework Unleashed, and I'm enjoying it as a sort of philosophical exercise, but I'm somewhat aghast at the amount of plumbing code his approach entails. Now that's not inherent in using an ORM -- Rails prides itself on freeing you from such housekeeping matters, but I'm really wrestling with whether I think it's worth it to have to write and maintain a separate Record class that can be used by the application and an EntityRecord class that maps the Record class to the database.
His gloss on the benefits are that you end up with a testable application where the tests run quickly, but frankly I'd rather trade some testing speed for executing code that's actually in the application. I think by the time you're spending your day slogging along and copying properties around so that your tests can run quickly, the testing tail has begun to wag the programmer dog -- who'd rather be chasing rabbits or having a nap in front of the fire.
The second cited benefit is that you can take your application and run it on a different database. Yeah, OK, maybe if you're writing something like a SalesForce for resale or something, that might be a goal, but for 90% or more of the applications out there, so what? I'm reminded of the neighbor in "It's a Wonderful Life" who gave George a jar of money and said: "I was saving this for a divorce in case I ever got a husband." Don't write it till you need it.
On the other hand, I do have a practical objection to stored procedures. It's not necessarily inherent in their use but more a feature of some of the brain-dead shops I've worked in: they sometimes put a DBA in the way of the code I want to write. I like to think I'm not a cowboy, but on the opposite end I don't like to have to convene a UN committee to add a field to a table.
A: One question: what if your data source were a web service? I write applications using only distributed data via web services. Am I expected to write that using a different paradigm than if my data source were an RDBMS?
I'm not asking what do you do if you switch from RDBMS to web services (because, in an internal shop, that's unlikely), I'm asking what do you do when the data comes from web services from the start?
Is your programming model drastically different than if it'd have been an RDBMS? If it is, you need to consider maintainability. My developers would have an awful time, if every app they jump into is programmed using different paradigms.
A: Some logic such as operations related to sets tend to be better represented in stored procedures. Yet there are times when an algorithm that has many branches and conditions is best represented in programming code. A grammar used for parsing commands that supports a runtime function for scripting actions can not be implemented in stored procedures.
The one weakness I see with stored procedures is that you tend to get a new stored procedure for every new list or grid in the application. Or worse, one stored proc to rule them all, with 10 parameters and case statements to further define them. In addition, the stored procs become HUGE and even more difficult to debug.
All that said, I'm with you that an ORM may get in the way many times for the reasons you cited. In the end, it boils down to how you rule the technology.
A: I have just recently stumbled upon this question. Realizing that this question is pretty old, and that there are many answers, I understand that my response may not be looked at even once. Still, I would like to leave my comments here.
I would look at this question from three aspects. But before that, I have to state: 8 out of 10 programmers coming from the imperative/OO-design world (C/C++, Java, C#, etc.) do not know how to write optimized, efficient SQL code. From my experience, it is rare to have someone who can do well at both application development and SQL development.
With that said, I would like to give three aspects for looking at this question.
First: Separation of concerns, not according to the program, but to the organizational hierarchy.
Frankly, there are many kinds of "enterprise" in this world, and each one has its own organizational hierarchy, varied by history and philosophy. In one particular company I have worked with, the programmers cannot modify or develop upon the database. They can read and consume the database API (i.e. stored procedures in SQL Server), but cannot read the database directly, and cannot write any query or stored procedure.
Any database request (data or functionality) has to be delegated to another role: the Data Architect. S/he would be the one dealing with the development, and possibly the maintenance, of the database. (Although the maintenance part should be the job of the DB Admin.) In such an environment, the stored procedure is only consumable; even the source of the stored procedures in the PROD environment would be encrypted, and programmers are not allowed to see the SPs' source.
However, in some other organizations, programmers are expected to develop all aspects, including the interface, middleware and data storage. This is the majority case.
In these two scenarios (albeit the first one is rather extreme, but real), it would affect how you view the author's question. In the first case, I would say the author would agree with the Data Architect's role, but any non-database programmer in the organization would greatly resent it. In the second case, however, because of my previous disclaimer about many developers not knowing how to write good SQL code (and generally not liking to deal with it either), it is only natural to opt for the simpler approach: ORM.
Second: The role of database: pure data storage up to different interpretations, or provider of predefined schemes of information?
Definition: "data" is raw, while "information" is interpreted.
In many real-world situations, the database is only regarded as pure data storage. It may contain data logic (e.g. integrity of relational data), but it does not contain business logic (e.g. a formula applied to the data not because of the data's nature, but because that is how this particular section of the business works).
In the aforementioned organization I have worked with, one database stores various financial information about customers. At first, there was only one formula to calculate an index regarding a customer's financial health, and this formula, along with the customer's status based on the formula, was stored within a stored procedure in the database. As the government kept changing the rules over the past few years, however, many more formulas have been created to keep up with the government.
Hence the problem: each branch in the organization, with its own distinct programming department (and little inter-organizational business between each branch), uses the same set of financial data, but with different formulas and operations.
In this case, the original model of storing the formulas in the database brought maintenance and office-politics hell. At first, the Data Architect would create typed stored procedures to accommodate the changes, but soon the organization started to have trouble with this model. The HQ had determined that each branch would maintain its own set of formulas, and nobody except that branch should know its formulas. The Data Architect, in this case, knew all the formulas, and that did not sit well with the HQ's policy. The quick pace of change in the formulas also brought an efficiency problem for testing between branches, because every formula tweak had to go through the Data Architect.
In this case, the organization faces a rather profound question: should the database serve the interpreted information, or should it only serve the data without any meaning?
That is a good way to jump into the third aspect.
Third: Ideological warfare: single- vs multi-purpose, and monolithic vs modular?
The aforementioned example is a clear demonstration of data being used in a multipurpose fashion. The data, while remaining the same for every branch, had different interpretation and usage in different scenarios. And here is my take.
Does your database store and serve data that is, by nature, multipurpose, where performance is not a big concern?
If yes, then I would say, the database should be reduced to only serving the data, and any logic not related to data integrity should be stored somewhere else. This is more of a modular approach: others can plug whatever operations and interpretations they would like to have, but not in SQL.
If any part of the question is a negative (i.e. it's single-purpose, or performance is a big, big concern), and assuming that no office politics are in the way, then I would say the monolithic approach of putting the majority of the logic into the database is fine. Preferred or not, that becomes an ideological choice.
I have the impression that the author, in writing and editing the question, supports the monolithic approach. I consider this case by case, but generally, I take the following approach:
*
*Simple CRUD and nothing else: ORM
*Formulas and Workflows based on data: middleware (like CSLA), not in database (unless performance is a concern)
*Reporting: definitely in database (for performance reason)
Above is my 2 cents.
A: Software that solves a large problem in a very generic way applicable to lots of real situations necessarily comes with a performance cost in and of itself. It takes code to handle all that genericity and code takes time to run.
Also, descending through a layer of abstraction always reveals that some things cost a little and some things cost a lot, and those differences were hidden by the abstraction. The isolation the abstraction gives the developer from whatever is beneath always causes the developer to casually introduce more expensive operations than were necessary.
Whatever else can be said about this question, from a performance-only perspective, and from a scaling-performance-only perspective, avoiding the double whammy caused by the extra isolation from the realities of one's own database is going to pay off in performance.
I currently spend my working days battling exactly the performance problems caused by these issues and they are terrible monsters to fight.
A: Each approach has strengths. Their appropriateness for a particular problem must be judged on a case-by-case basis.
I wholeheartedly believe that entity- (and hence object-oriented-) designs simplify otherwise complex business logic, as others have noted. But in my opinion, the greatest strength of an entity-based design is modularity via well-defined inputs and outputs, which is easier to achieve outside of the database and with an object-oriented model. I'll elaborate below.
I'm a Linux user. One point in the Unix philosophy is that developers should "write programs to handle text streams, because that is a universal interface". This is true of Linux, because it is very text-centric. You can chain unrelated processes together to achieve something new, like grep ^col /var/log/bim-sync | sed 's/.*alt:\([0-9]\{1,\}\).*/\1/g' | xargs -I replstr bim-transcoder replstr. These programs are completely ignorant of each other, and yet can be easily joined together to achieve a new purpose. The reason this is possible is because you (the author of the "glue") know the input and output formats of each process.
Now, I don't believe that text streams are appropriate everywhere. Text streams are common, but not universal. My development philosophy is to "write programs with well-defined inputs and outputs". The input/output I'm talking about here is not necessarily standard input/output and not necessarily textual - it could be the arguments to a command line program, raw bytes sent over a network socket, an in memory data structure passed between layers of code, etc, etc. Thinking about software in terms of ye old Input-Process-Output "black boxes" allows you to compose your application like the command-line utilities in Unix - independently with a thin layer of glue that joins them together.
As an example, say you're writing software for the Australian New South Wales Births, Deaths and Marriages. When a registration of a child's birth comes in, an operator enters the details, scans the signed forms and hits the Submit button. Hitting the Submit button issues a RegisterBirth command. The software validates the details of the command (date and time of birth, hospital, etc), and emits a BirthRegistered event. This event includes many details on the birth, such as the delivering doctor, whether it was a natural birth or by C-section, if it was a C-section, whether it was an emergency, who the biological parents are, etc, etc. A lot of different code can "plug into" this event. For example, one piece of code could issue a simple insert into person... SQL statement, while another piece of code could issue a series of Neo4j Cypher commands to store the newborn baby and the relationship to its biological parents. This second piece of code would allow an extremely fast query of hierarchical "family tree" data. Such queries would be slower (and more complicated) in an SQL database, regardless of whether you use adjacency lists or nested sets. Still another piece of code could update the statistics on the number of emergency C-sections that month, which is stored in an XML file for historical reasons.
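A bare-bones sketch of that event wiring, with all names invented for illustration - the registry knows nothing about its subscribers, and each subscriber persists or reacts in its own way:
using System;

public class BirthRegistered
{
    public string ChildName;
    public DateTime BornAt;
    public bool ByCaesarean;
}

public class BirthRegistry
{
    public event Action<BirthRegistered> OnBirthRegistered = delegate { };

    public void RegisterBirth(string childName, DateTime bornAt, bool byCaesarean)
    {
        // Validate the command details here, then announce the event.
        OnBirthRegistered(new BirthRegistered
        {
            ChildName = childName, BornAt = bornAt, ByCaesarean = byCaesarean
        });
    }
}

// Wiring: independent handlers plug into the same event.
// registry.OnBirthRegistered += e => sqlWriter.InsertPerson(e);      // relational store
// registry.OnBirthRegistered += e => graphWriter.LinkToParents(e);   // graph store
// registry.OnBirthRegistered += e => stats.CountCaesarean(e);        // XML statistics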
The modularity doesn't stop with abstracting the persistence mechanism. For example, you could write a FastCGI "glue layer" to web-enable your application: an "input" HTTP request is accepted by your web server, which emits a "output" FastCGI request to your "glue layer". Your FastCGI "glue layer" accepts this as input and transforms it into an output form appropriate to your application. Your application accepts the input command and emits events or errors, which can be picked up by other "glue layers" (such as the SQL and Neo4j examples given above).
The modularity can continue in almost any direction. You could have a command-line interface, or a GUI interface. You could create a comprehensive, automated test suite. You could open your application up to being scripted by third parties. A lot of the concepts here relate to Domain Driven Design, Command Query Responsibility Segregation and Event Sourcing, three inter-related patterns which I've found to be incredibly powerful.
When using an entity-based approach, there are many related architectures. For example, there is the Onion Architecture by Jeffrey Palermo and Ports and Adapters (or Hexagonal Architecture) by Alistair Cockburn. What all these architectures have in common is modularity and abstraction via defined inputs and outputs, regardless of whether those input/output boundaries are within a single program, or whether they span many processes and even networks.
An entity-based approach provides modularity and flexibility. However, there are downsides to this approach, three of which are significant:
*
*Firstly, the initial investment is high. This means that such an approach doesn't make sense for projects that are small in scope.
*Secondly, the amount of glue code you have to write can become large. Writing glue code can be tedious, but it can also be rewarding. For example, say your application integrates loosely with PostgreSQL as its storage backend. When the company directors decide that the application should support Microsoft SQL Server, it's very satisfying (and builds team morale) when the goal is reached before the due date and below budget.
*Thirdly, my experience has taught me that an entity-based approach can be worse than a simple SQL solution, depending on the expertise of those implementing it. E.g., if the entity-first approach is full of getters and setters which are nothing more than in-memory representations of the database tables, you can be sure that the problem has not been thought out. It's cases like these which understandably leave developers wondering "Why don't we just write SQL?"
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18655",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "141"
} |
Q: Getting UI text from external app in C# Is it possible to get UI text from an external application in C#?
In particular, is there a way to read Unicode text from a label (I assume it's a normal Windows label control) from an external Win32 app that was written by a 3rd party? The text is visible, but not selectable by mouse in the UI.
I assume there is some accessibility API (e.g. meant for screen readers) that allows this.
Edit: Currently looking into using something like the Managed Spy App but would still appreciate any other leads.
A: If you just care about the standard Win32 label, then WM_GETTEXT will work fine, as outlined in the other answers.
--
There is an accessibility API - UIAutomation. For standard labels, it too uses WM_GETTEXT behind the scenes. One advantage to it, however, is that it can get text from several other types of controls, including most system controls, and often UI using non-system controls - including WPF, text in IE and Firefox, and others.
// compile as:
// csc file.cs /r:UIAutomationClient.dll /r:UIAutomationTypes.dll /r:WindowsBase.dll
using System.Windows.Automation;
using System.Windows.Forms;
using System;
class Test
{
public static void Main()
{
// Get element under pointer. You can also get an AutomationElement from a
// HWND handle, or by navigating the UI tree.
System.Drawing.Point pt = Cursor.Position;
AutomationElement el = AutomationElement.FromPoint(new System.Windows.Point(pt.X, pt.Y));
// Prints its name - often the context, but would be corresponding label text for editable controls. Can also get the type of control, location, and other properties.
Console.WriteLine( el.Current.Name );
}
}
A: You could do it if that unicode text is actually a window with a caption by sending a WM_GETTEXT message.
[DllImport("user32.dll")]
public static extern int SendMessage (IntPtr hWnd, int msg, int Param, System.Text.StringBuilder text);
System.Text.StringBuilder text = new System.Text.StringBuilder(255) ; // or length from call with GETTEXTLENGTH
int RetVal = Win32.SendMessage( hWnd , WM_GETTEXT, text.Capacity, text);
If it is just painted on the canvas you might have some luck if you know what framework the application uses. If it uses WinForms or Borland's VCL you could use that knowledge to get to the text.
A: I didn't see the values for WM_GETTEXT or WM_GETTEXTLENGTH in that article, so just in case:
const int WM_GETTEXT = 0x0D;
const int WM_GETTEXTLENGTH = 0x0E;
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18661",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Quick easy way to migrate SQLite3 to MySQL? Anyone know a quick easy way to migrate a SQLite3 database to MySQL?
A: *
*http://sqlfairy.sourceforge.net/
*http://search.cpan.org/dist/SQL-Translator/
aptitude install sqlfairy libdbd-sqlite3-perl
sqlt -f DBI --dsn dbi:SQLite:../.open-tran/ten-sq.db -t MySQL --add-drop-table > mysql-ten-sq.sql
sqlt -f DBI --dsn dbi:SQLite:../.open-tran/ten-sq.db -t Dumper --use-same-auth > sqlite2mysql-dumper.pl
chmod +x sqlite2mysql-dumper.pl
./sqlite2mysql-dumper.pl --help
./sqlite2mysql-dumper.pl --add-truncate --mysql-loadfile > mysql-dump.sql
sed -e 's/LOAD DATA INFILE/LOAD DATA LOCAL INFILE/' -i mysql-dump.sql
echo 'drop database `ten-sq`' | mysql -p -u root
echo 'create database `ten-sq` charset utf8' | mysql -p -u root
mysql -p -u root -D ten-sq < mysql-ten-sq.sql
mysql -p -u root -D ten-sq < mysql-dump.sql
A: I wrote this simple script in Python3. It can be used as an included class or standalone script invoked via a terminal shell. By default it imports all integers as int(11) and strings as varchar(300), but all that can be adjusted in the constructor or script arguments respectively.
NOTE: It requires MySQL Connector/Python 2.0.4 or higher
Here's a link to the source on GitHub if you find the code below hard to read: https://github.com/techouse/sqlite3-to-mysql
#!/usr/bin/env python3
__author__ = "Klemen Tušar"
__email__ = "[email protected]"
__copyright__ = "GPL"
__version__ = "1.0.1"
__date__ = "2015-09-12"
__status__ = "Production"
import os.path, sqlite3, mysql.connector
from mysql.connector import errorcode
class SQLite3toMySQL:
"""
Use this class to transfer an SQLite 3 database to MySQL.
NOTE: Requires MySQL Connector/Python 2.0.4 or higher (https://dev.mysql.com/downloads/connector/python/)
"""
def __init__(self, **kwargs):
self._properties = kwargs
self._sqlite_file = self._properties.get('sqlite_file', None)
if not os.path.isfile(self._sqlite_file):
print('SQLite file does not exist!')
exit(1)
self._mysql_user = self._properties.get('mysql_user', None)
if self._mysql_user is None:
print('Please provide a MySQL user!')
exit(1)
self._mysql_password = self._properties.get('mysql_password', None)
if self._mysql_password is None:
print('Please provide a MySQL password')
exit(1)
self._mysql_database = self._properties.get('mysql_database', 'transfer')
self._mysql_host = self._properties.get('mysql_host', 'localhost')
self._mysql_integer_type = self._properties.get('mysql_integer_type', 'int(11)')
self._mysql_string_type = self._properties.get('mysql_string_type', 'varchar(300)')
self._sqlite = sqlite3.connect(self._sqlite_file)
self._sqlite.row_factory = sqlite3.Row
self._sqlite_cur = self._sqlite.cursor()
self._mysql = mysql.connector.connect(
user=self._mysql_user,
password=self._mysql_password,
host=self._mysql_host
)
self._mysql_cur = self._mysql.cursor(prepared=True)
try:
self._mysql.database = self._mysql_database
except mysql.connector.Error as err:
if err.errno == errorcode.ER_BAD_DB_ERROR:
self._create_database()
else:
print(err)
exit(1)
def _create_database(self):
try:
self._mysql_cur.execute("CREATE DATABASE IF NOT EXISTS `{}` DEFAULT CHARACTER SET 'utf8'".format(self._mysql_database))
self._mysql_cur.close()
self._mysql.commit()
self._mysql.database = self._mysql_database
self._mysql_cur = self._mysql.cursor(prepared=True)
except mysql.connector.Error as err:
print('_create_database failed creating database {}: {}'.format(self._mysql_database, err))
exit(1)
def _create_table(self, table_name):
primary_key = ''
sql = 'CREATE TABLE IF NOT EXISTS `{}` ( '.format(table_name)
self._sqlite_cur.execute('PRAGMA table_info("{}")'.format(table_name))
for row in self._sqlite_cur.fetchall():
column = dict(row)
sql += ' `{name}` {type} {notnull} {auto_increment}, '.format(
name=column['name'],
type=self._mysql_string_type if column['type'].upper() == 'TEXT' else self._mysql_integer_type,
notnull='NOT NULL' if column['notnull'] else 'NULL',
auto_increment='AUTO_INCREMENT' if column['pk'] else ''
)
if column['pk']:
primary_key = column['name']
sql += ' PRIMARY KEY (`{}`) ) ENGINE = InnoDB CHARACTER SET utf8'.format(primary_key)
try:
self._mysql_cur.execute(sql)
self._mysql.commit()
except mysql.connector.Error as err:
print('_create_table failed creating table {}: {}'.format(table_name, err))
exit(1)
def transfer(self):
self._sqlite_cur.execute("SELECT name FROM sqlite_master WHERE type='table' AND name NOT LIKE 'sqlite_%'")
for row in self._sqlite_cur.fetchall():
table = dict(row)
# create the table
self._create_table(table['name'])
# populate it
print('Transferring table {}'.format(table['name']))
self._sqlite_cur.execute('SELECT * FROM "{}"'.format(table['name']))
columns = [column[0] for column in self._sqlite_cur.description]
try:
self._mysql_cur.executemany("INSERT IGNORE INTO `{table}` ({fields}) VALUES ({placeholders})".format(
table=table['name'],
fields=('`{}`, ' * len(columns)).rstrip(' ,').format(*columns),
placeholders=('%s, ' * len(columns)).rstrip(' ,')
), (tuple(data) for data in self._sqlite_cur.fetchall()))
self._mysql.commit()
except mysql.connector.Error as err:
print('_insert_table_data failed inserting data into table {}: {}'.format(table['name'], err))
exit(1)
print('Done!')
def main():
""" For use in standalone terminal form """
import sys, argparse
parser = argparse.ArgumentParser()
parser.add_argument('--sqlite-file', dest='sqlite_file', default=None, help='SQLite3 db file')
parser.add_argument('--mysql-user', dest='mysql_user', default=None, help='MySQL user')
parser.add_argument('--mysql-password', dest='mysql_password', default=None, help='MySQL password')
parser.add_argument('--mysql-database', dest='mysql_database', default=None, help='MySQL database')
parser.add_argument('--mysql-host', dest='mysql_host', default='localhost', help='MySQL host')
parser.add_argument('--mysql-integer-type', dest='mysql_integer_type', default='int(11)', help='MySQL default integer field type')
parser.add_argument('--mysql-string-type', dest='mysql_string_type', default='varchar(300)', help='MySQL default string field type')
args = parser.parse_args()
if len(sys.argv) == 1:
parser.print_help()
exit(1)
converter = SQLite3toMySQL(
sqlite_file=args.sqlite_file,
mysql_user=args.mysql_user,
mysql_password=args.mysql_password,
mysql_database=args.mysql_database,
mysql_host=args.mysql_host,
mysql_integer_type=args.mysql_integer_type,
mysql_string_type=args.mysql_string_type
)
converter.transfer()
if __name__ == '__main__':
main()
A: Here is a list of converters (not updated since 2011):
*
*https://www2.sqlite.org/cvstrac/wiki?p=ConverterTools (or snapshot at archive.org)
An alternative method that would work nicely but is rarely mentioned is: use an ORM class that abstracts specific database differences away for you. e.g. you get these in PHP (RedBean), Python (Django's ORM layer, Storm, SqlAlchemy), Ruby on Rails (ActiveRecord), Cocoa (CoreData)
i.e. you could do this:
*
*Load data from source database using the ORM class.
*Store data in memory or serialize to disk.
*Store data into destination database using the ORM class.
A: Here is a python script, built off of Shalmanese's answer and some help from Alex Martelli over at Translating Perl to Python
I'm making it community wiki, so please feel free to edit and refactor as long as it doesn't break the functionality (thankfully we can just roll back) - it's pretty ugly but works.
Use it like so (assuming the script is called dump_for_mysql.py):
sqlite3 sample.db .dump | python dump_for_mysql.py > dump.sql
Which you can then import into mysql
note - you need to add foreign key constraints manually since sqlite doesn't actually support them
here is the script:
#!/usr/bin/env python
import re
import fileinput
def this_line_is_useless(line):
useless_es = [
'BEGIN TRANSACTION',
'COMMIT',
'sqlite_sequence',
'CREATE UNIQUE INDEX',
'PRAGMA foreign_keys=OFF',
]
for useless in useless_es:
if re.search(useless, line):
return True
def has_primary_key(line):
return bool(re.search(r'PRIMARY KEY', line))
searching_for_end = False
for line in fileinput.input():
if this_line_is_useless(line):
continue
# this line was necessary because '');
# would be converted to \'); which isn't appropriate
if re.match(r".*, ''\);", line):
line = re.sub(r"''\);", r'``);', line)
if re.match(r'^CREATE TABLE.*', line):
searching_for_end = True
m = re.search('CREATE TABLE "?(\w*)"?(.*)', line)
if m:
name, sub = m.groups()
line = "DROP TABLE IF EXISTS %(name)s;\nCREATE TABLE IF NOT EXISTS `%(name)s`%(sub)s\n"
line = line % dict(name=name, sub=sub)
else:
m = re.search('INSERT INTO "(\w*)"(.*)', line)
if m:
line = 'INSERT INTO %s%s\n' % m.groups()
line = line.replace('"', r'\"')
line = line.replace('"', "'")
line = re.sub(r"([^'])'t'(.)", "\1THIS_IS_TRUE\2", line)
line = line.replace('THIS_IS_TRUE', '1')
line = re.sub(r"([^'])'f'(.)", "\1THIS_IS_FALSE\2", line)
line = line.replace('THIS_IS_FALSE', '0')
# Add auto_increment if it is not there since sqlite auto_increments ALL
# primary keys
if searching_for_end:
if re.search(r"integer(?:\s+\w+)*\s*PRIMARY KEY(?:\s+\w+)*\s*,", line):
line = line.replace("PRIMARY KEY", "PRIMARY KEY AUTO_INCREMENT")
# replace " and ' with ` because mysql doesn't like quotes in CREATE commands
if line.find('DEFAULT') == -1:
line = line.replace(r'"', r'`').replace(r"'", r'`')
else:
parts = line.split('DEFAULT')
parts[0] = parts[0].replace(r'"', r'`').replace(r"'", r'`')
line = 'DEFAULT'.join(parts)
# And now we convert it back (see above)
if re.match(r".*, ``\);", line):
line = re.sub(r'``\);', r"'');", line)
if searching_for_end and re.match(r'.*\);', line):
searching_for_end = False
if re.match(r"CREATE INDEX", line):
line = re.sub('"', '`', line)
if re.match(r"AUTOINCREMENT", line):
line = re.sub("AUTOINCREMENT", "AUTO_INCREMENT", line)
print line,
A: I recently had to migrate from MySQL to JavaDB for a project that our team is working on. I found a Java library written by Apache called DdlUtils that made this pretty easy. It provides an API that lets you do the following:
*
*Discover a database's schema and export it as an XML file.
*Modify a DB based upon this schema.
*Import records from one DB to another, assuming they have the same schema.
The tools that we ended up with weren't completely automated, but they worked pretty well. Even if your application is not in Java, it shouldn't be too difficult to whip up a few small tools to do a one-time migration. I think I was able to pull of our migration with less than 150 lines of code.
A: Get a SQL dump
moose@pc08$ sqlite3 mySqliteDatabase.db .dump > myTemporarySQLFile.sql
Import dump to MySQL
For small imports:
moose@pc08$ mysql -u <username> -p
Enter password:
....
mysql> use somedb;
Database changed
mysql> source myTemporarySQLFile.sql;
or
mysql -u root -p somedb < myTemporarySQLFile.sql
This will prompt you for a password. Please note: If you want to enter your password directly, you have to do it WITHOUT space, directly after -p:
mysql -u root -pYOURPASS somedb < myTemporarySQLFile.sql
For larger dumps:
mysqlimport or other import tools like BigDump.
BigDump gives you a progress bar:
A: Based on Jims's solution:
Quick easy way to migrate SQLite3 to MySQL?
sqlite3 your_sql3_database.db .dump | python ./dump.py > your_dump_name.sql
cat your_dump_name.sql | sed '1d' | mysql --user=your_mysql_user --default-character-set=utf8 your_mysql_db -p
This works for me. I use sed just to throw away the first line, which is not mysql-like, but you might as well modify the dump.py script to throw this line away.
A: I usually use the Export/import tables feature of IntelliJ DataGrip.
You can see the progress in the bottom right corner.
[]
A: There is no need for any script, command, etc...
You only have to export your SQLite database as a .csv file and then import it into MySQL using phpMyAdmin.
I used it and it worked amazingly...
A: If you are using Python/Django it's pretty easy:
create two databases in settings.py (like here https://docs.djangoproject.com/en/1.11/topics/db/multi-db/)
then just do like this:
objlist = ModelObject.objects.using('sqlite').all()
for obj in objlist:
obj.save(using='mysql')
A: Probably the quickest and easiest way is using the sqlite .dump command, in this case creating a dump of the sample database.
sqlite3 sample.db .dump > dump.sql
You can then (in theory) import this into the mysql database, in this case the test database on the database server 127.0.0.1, using user root.
mysql -p -u root -h 127.0.0.1 test < dump.sql
I say in theory as there are a few differences between grammars.
In sqlite transactions begin
BEGIN TRANSACTION;
...
COMMIT;
MySQL uses just
BEGIN;
...
COMMIT;
There are other similar problems (varchars and double quotes spring to mind) but nothing find and replace couldn't fix.
Perhaps you should ask why you are migrating; if performance or database size is the issue, perhaps look at reorganising the schema; if the system is moving to a more powerful product, this might be the ideal time to plan for the future of your data.
A: Everyone seems to start off with a few greps and perl expressions and you sorta kinda get something that works for your particular dataset, but you have no idea if it's imported the data correctly or not. I'm seriously surprised nobody's built a solid library that can convert between the two.
Here is a list of ALL the differences in SQL syntax that I know about between the two file formats:
The lines starting with:
*
*BEGIN TRANSACTION
*COMMIT
*sqlite_sequence
*CREATE UNIQUE INDEX
are not used in MySQL
*
*SQLite uses CREATE TABLE/INSERT INTO "table_name" and MySQL uses CREATE TABLE/INSERT INTO table_name
*MySQL doesn't use quotes inside the schema definition
*MySQL uses single quotes for strings inside the INSERT INTO clauses
*SQLite and MySQL have different ways of escaping strings inside INSERT INTO clauses
*SQLite uses 't' and 'f' for booleans, MySQL uses 1 and 0 (a simple regex for this can fail when you have a string like: 'I do, you don't' inside your INSERT INTO)
*SQLite uses AUTOINCREMENT, MySQL uses AUTO_INCREMENT
Here is a very basic hacked-up perl script which works for my dataset and checks for many more of these conditions than other perl scripts I found on the web. No guarantees that it will work for your data, but feel free to modify and post back here.
#! /usr/bin/perl
while ($line = <>){
if (($line !~ /BEGIN TRANSACTION/) && ($line !~ /COMMIT/) && ($line !~ /sqlite_sequence/) && ($line !~ /CREATE UNIQUE INDEX/)){
if ($line =~ /CREATE TABLE \"([a-z_]*)\"(.*)/i){
$name = $1;
$sub = $2;
$sub =~ s/\"//g;
$line = "DROP TABLE IF EXISTS $name;\nCREATE TABLE IF NOT EXISTS $name$sub\n";
}
elsif ($line =~ /INSERT INTO \"([a-z_]*)\"(.*)/i){
$line = "INSERT INTO $1$2\n";
$line =~ s/\"/\\\"/g;
$line =~ s/\"/\'/g;
}else{
$line =~ s/\'\'/\\\'/g;
}
$line =~ s/([^\\'])\'t\'(.)/$1THIS_IS_TRUE$2/g;
$line =~ s/THIS_IS_TRUE/1/g;
$line =~ s/([^\\'])\'f\'(.)/$1THIS_IS_FALSE$2/g;
$line =~ s/THIS_IS_FALSE/0/g;
$line =~ s/AUTOINCREMENT/AUTO_INCREMENT/g;
print $line;
}
}
A: I've just gone through this process, and there's a lot of very good help and information in this Q/A, but I found I had to pull together various elements (plus some from other Q/As) to get a working solution in order to successfully migrate.
However, even after combining the existing answers, I found that the Python script did not fully work for me as it did not work where there were multiple boolean occurrences in an INSERT. See here why that was the case.
So, I thought I'd post up my merged answer here. Credit goes to those that have contributed elsewhere, of course. But I wanted to give something back, and save others time that follow.
I'll post the script below. But firstly, here are the instructions for a conversion...
I ran the script on OS X 10.7.5 Lion. Python worked out of the box.
To generate the MySQL input file from your existing SQLite3 database, run the script on your own files as follows,
Snips$ sqlite3 original_database.sqlite3 .dump | python ~/scripts/dump_for_mysql.py > dumped_data.sql
I then copied the resulting dumped_data.sql file over to a Linux box running Ubuntu 10.04.4 LTS where my MySQL database was to reside.
Another issue I had when importing the MySQL file was that some unicode UTF-8 characters (specifically single quotes) were not being imported correctly, so I had to add a switch to the command to specify UTF-8.
The resulting command to input the data into a spanking new empty MySQL database is as follows:
Snips$ mysql -p -u root -h 127.0.0.1 test_import --default-character-set=utf8 < dumped_data.sql
Let it cook, and that should be it! Don't forget to scrutinise your data, before and after.
So, as the OP requested, it's quick and easy, when you know how! :-)
As an aside, one thing I wasn't sure about before I looked into this migration, was whether created_at and updated_at field values would be preserved - the good news for me is that they are, so I could migrate my existing production data.
Good luck!
UPDATE
Since making this switch, I've noticed a problem that I hadn't noticed before. In my Rails application, my text fields are defined as 'string', and this carries through to the database schema. The process outlined here results in these being defined as VARCHAR(255) in the MySQL database. This places a 255 character limit on these field sizes - and anything beyond this was silently truncated during the import. To support text length greater than 255, the MySQL schema would need to use 'TEXT' rather than VARCHAR(255), I believe. The process defined here does not include this conversion.
Here's the merged and revised Python script that worked for my data:
#!/usr/bin/env python
import re
import fileinput
def this_line_is_useless(line):
useless_es = [
'BEGIN TRANSACTION',
'COMMIT',
'sqlite_sequence',
'CREATE UNIQUE INDEX',
'PRAGMA foreign_keys=OFF'
]
for useless in useless_es:
if re.search(useless, line):
return True
def has_primary_key(line):
return bool(re.search(r'PRIMARY KEY', line))
searching_for_end = False
for line in fileinput.input():
if this_line_is_useless(line): continue
# this line was necessary because ''); was getting
# converted (inappropriately) to \');
if re.match(r".*, ''\);", line):
line = re.sub(r"''\);", r'``);', line)
if re.match(r'^CREATE TABLE.*', line):
searching_for_end = True
m = re.search('CREATE TABLE "?([A-Za-z_]*)"?(.*)', line)
if m:
name, sub = m.groups()
line = "DROP TABLE IF EXISTS %(name)s;\nCREATE TABLE IF NOT EXISTS `%(name)s`%(sub)s\n"
line = line % dict(name=name, sub=sub)
line = line.replace('AUTOINCREMENT','AUTO_INCREMENT')
line = line.replace('UNIQUE','')
line = line.replace('"','')
else:
m = re.search('INSERT INTO "([A-Za-z_]*)"(.*)', line)
if m:
line = 'INSERT INTO %s%s\n' % m.groups()
line = line.replace('"', r'\"')
line = line.replace('"', "'")
line = re.sub(r"(?<!')'t'(?=.)", r"1", line)
line = re.sub(r"(?<!')'f'(?=.)", r"0", line)
# Add auto_increment if it's not there since sqlite auto_increments ALL
# primary keys
if searching_for_end:
if re.search(r"integer(?:\s+\w+)*\s*PRIMARY KEY(?:\s+\w+)*\s*,", line):
line = line.replace("PRIMARY KEY", "PRIMARY KEY AUTO_INCREMENT")
# replace " and ' with ` because mysql doesn't like quotes in CREATE commands
# And now we convert it back (see above)
if re.match(r".*, ``\);", line):
line = re.sub(r'``\);', r"'');", line)
if searching_for_end and re.match(r'.*\);', line):
searching_for_end = False
if re.match(r"CREATE INDEX", line):
line = re.sub('"', '`', line)
print line,
A: Ha... I wish I had found this first! My response was to this post... script to convert mysql dump sql file into format that can be imported into sqlite3 db
Combining the two would be exactly what I needed:
When the sqlite3 database is going to be used with ruby you may want to change:
tinyint([0-9]*)
to:
sed 's/ tinyint(1*) / boolean/g ' |
sed 's/ tinyint([0|2-9]*) / integer /g' |
Alas, this only half works because even though you are inserting 1's and 0's into a field marked boolean, sqlite3 stores them as 1's and 0's so you have to go through and do something like:
Table.find(:all, :conditions => {:column => 1 }).each { |t| t.column = true }.each(&:save)
Table.find(:all, :conditions => {:column => 0 }).each { |t| t.column = false}.each(&:save)
but it was helpful to have the sql file to look at to find all the booleans.
A: This script is ok except for this case that, of course, I've met:
INSERT INTO "requestcomparison_stopword" VALUES(149,'f');
INSERT INTO "requestcomparison_stopword" VALUES(420,'t');
The script should give this output:
INSERT INTO requestcomparison_stopword VALUES(149,'f');
INSERT INTO requestcomparison_stopword VALUES(420,'t');
But instead gives this output:
INSERT INTO requestcomparison_stopword VALUES(1490;
INSERT INTO requestcomparison_stopword VALUES(4201;
with some strange non-ascii characters around the last 0 and 1.
This didn't show up anymore when I commented out the following lines of the code (43-46), but other problems appeared:
line = re.sub(r"([^'])'t'(.)", "\1THIS_IS_TRUE\2", line)
line = line.replace('THIS_IS_TRUE', '1')
line = re.sub(r"([^'])'f'(.)", "\1THIS_IS_FALSE\2", line)
line = line.replace('THIS_IS_FALSE', '0')
This is just a special case, when we want to add a value that is 'f' or 't', but I'm not really comfortable with regular expressions; I just wanted to spot this case so it can be corrected by someone.
Anyway thanks a lot for that handy script !!!
A: This simple solution worked for me:
<?php
$sq = new SQLite3( 'sqlite3.db' );
$tables = $sq->query( 'SELECT name FROM sqlite_master WHERE type="table"' );
while ( $table = $tables->fetchArray() ) {
$table = current( $table );
$result = $sq->query( sprintf( 'SELECT * FROM %s', $table ) );
if ( strpos( $table, 'sqlite' ) !== false )
continue;
printf( "-- %s\n", $table );
while ( $row = $result->fetchArray( SQLITE3_ASSOC ) ) {
$values = array_map( function( $value ) {
return sprintf( "'%s'", mysql_real_escape_string( $value ) );
}, array_values( $row ) );
printf( "INSERT INTO `%s` VALUES( %s );\n", $table, implode( ', ', $values ) );
}
}
A: echo ".dump" | sqlite3 /tmp/db.sqlite > db.sql
watch out for CREATE statements
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18671",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "254"
} |
Q: I/O permission settings using .net installer I am creating a program that will be installed using the .net installer project. The program writes settings files to its directory in the Program Files dir. I believe there are some Active Directory settings that will prevent the application from writing to that directory if a limited user is running the program. Is there a way to change the settings for the application folder through the install so this will not be a problem?
A: Writing to the Program Files folder is a really bad idea; you should assume that this location is "read only" once installed.
Saving user settings in Program Files causes problems if more than one person uses the computer (eg. Terminal Services): whose settings should be saved, and do you want other users to know 'your' settings? What happens if your program writes settings to the file as user A, but user B can't edit the file? User B may have access to the directory, but not be able to read or delete the preference file, as it is owned by user A.
Legacy win9x programs often write to the program files folder, and Windows Vista actually does some neat trickery to let these programs work. When your program writes a file, Vista actually puts it someplace else that is only accessible to that user. The same is done for registry writes to HKLM (or so I discovered after hours of debugging...) and Server 2008 does the same thing.
If you're needing to save user settings the best alternative would be to save the settings to the Application Data folder (Environment Variable %APPDATA%)
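For instance, a minimal C# sketch of resolving that folder and writing a settings file there (the folder and file names below are made-up placeholders, not anything from the question):
using System;
using System.IO;
class SettingsWriter
{
    static void Main()
    {
        // Resolves to the current user's %APPDATA% folder.
        string appData = Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData);
        // Keep settings under a vendor/application subfolder (names are examples).
        string settingsDir = Path.Combine(appData, @"MyCompany\MyApp");
        Directory.CreateDirectory(settingsDir); // no-op if it already exists
        File.WriteAllText(Path.Combine(settingsDir, "settings.xml"), "<settings />");
    }
}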
If the settings are system wide, then the administrative user should set these after install or on first run and they should not be able to be overwritten by limited users.
So to answer your question - YES there is a way to do what you've asked. But it's a bad idea, it's insecure and will probably cause problems in the long run.
A: You can write a custom installer class which can change the security permissions of the folder. This would assume the installation is done by a user who has permission to change file/directory security.
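A minimal sketch of such an installer class is below. It assumes the setup project passes the target folder via CustomActionData (e.g. /targetdir="[TARGETDIR]\") and that granting the local Users group modify rights is acceptable; note the "Users" group name is locale-dependent:
using System.Collections;
using System.ComponentModel;
using System.Configuration.Install;
using System.IO;
using System.Security.AccessControl;
[RunInstaller(true)]
public class PermissionInstaller : Installer
{
    public override void Install(IDictionary stateSaver)
    {
        base.Install(stateSaver);
        // Folder passed in from the setup project's CustomActionData.
        string target = Context.Parameters["targetdir"];
        DirectoryInfo dir = new DirectoryInfo(target);
        DirectorySecurity security = dir.GetAccessControl();
        // Let members of "Users" modify files in the application folder.
        security.AddAccessRule(new FileSystemAccessRule(
            "Users",
            FileSystemRights.Modify,
            InheritanceFlags.ContainerInherit | InheritanceFlags.ObjectInherit,
            PropagationFlags.None,
            AccessControlType.Allow));
        dir.SetAccessControl(security);
    }
}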
The best option is to not write to directories under Program Files at all.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18675",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Random integer in VB.NET I need to generate a random integer between 1 and n (where n is a positive whole number) to use for a unit test. I don't need something overly complicated to ensure true randomness - just an old-fashioned random number.
How would I do that?
A: As has been pointed out many times, the suggestion to write code like this is problematic:
Public Function GetRandom(ByVal Min As Integer, ByVal Max As Integer) As Integer
Dim Generator As System.Random = New System.Random()
Return Generator.Next(Min, Max)
End Function
The reason is that the constructor for the Random class provides a default seed based on the system's clock. On most systems, this has limited granularity -- somewhere in the vicinity of 20 ms. So if you write the following code, you're going to get the same number a bunch of times in a row:
Dim randoms(1000) As Integer
For i As Integer = 0 to randoms.Length - 1
randoms(i) = GetRandom(1, 100)
Next
The following code addresses this issue:
Public Function GetRandom(ByVal Min As Integer, ByVal Max As Integer) As Integer
' by making Generator static, we preserve the same instance '
' (i.e., do not create new instances with the same seed over and over) '
' between calls '
Static Generator As System.Random = New System.Random()
Return Generator.Next(Min, Max)
End Function
I threw together a simple program using both methods to generate 25 random integers between 1 and 100. Here's the output:
Non-static: 70 Static: 70
Non-static: 70 Static: 46
Non-static: 70 Static: 58
Non-static: 70 Static: 19
Non-static: 70 Static: 79
Non-static: 70 Static: 24
Non-static: 70 Static: 14
Non-static: 70 Static: 46
Non-static: 70 Static: 82
Non-static: 70 Static: 31
Non-static: 70 Static: 25
Non-static: 70 Static: 8
Non-static: 70 Static: 76
Non-static: 70 Static: 74
Non-static: 70 Static: 84
Non-static: 70 Static: 39
Non-static: 70 Static: 30
Non-static: 70 Static: 55
Non-static: 70 Static: 49
Non-static: 70 Static: 21
Non-static: 70 Static: 99
Non-static: 70 Static: 15
Non-static: 70 Static: 83
Non-static: 70 Static: 26
Non-static: 70 Static: 16
Non-static: 70 Static: 75
A: To get a random integer value between 1 and N (inclusive) you can use the following (call Randomize() once beforehand to seed the generator).
CInt(Int(Rnd() * n)) + 1
A: Microsoft Example Rnd Function
https://msdn.microsoft.com/en-us/library/f7s023d2%28v=vs.90%29.aspx
1- Initialize the random-number generator.
Randomize()
2 - Generate random value between 1 and 6.
Dim value As Integer = CInt(Int((6 * Rnd()) + 1))
A: Public Function RandomNumber(ByVal n As Integer) As Integer
'initialize random number generator
Dim r As New Random(System.DateTime.Now.Millisecond)
Return r.Next(1, n)
End Function
A: All the answers so far have problems or bugs (plural, not just one). I will explain. But first I want to compliment Dan Tao's insight to use a static variable to remember the Generator variable so calling it multiple times will not repeat the same # over and over, plus he gave a very nice explanation. But his code suffered the same flaw that most others have, as I explain now.
MS made their Next() method rather odd. The Min parameter is the inclusive minimum as one would expect, but the Max parameter is the exclusive maximum as one would NOT expect. In other words, if you pass min=1 and max=5 then your random numbers would be any of 1, 2, 3, or 4, but would never include 5. This is the first of two potential bugs in all code that uses Microsoft's Random.Next() method.
For a simple answer (but still with other possible but rare problems) you'd need to use:
Private Function GenRandomInt(min As Int32, max As Int32) As Int32
Static staticRandomGenerator As New System.Random
Return staticRandomGenerator.Next(min, max + 1)
End Function
(I like to use Int32 rather than Integer because it makes it more clear how big the int is, plus it is shorter to type, but suit yourself.)
I see two potential problems with this method, but it will be suitable (and correct) for most uses. So if you want a simple solution, I believe this is correct.
The only 2 problems I see with this function are:
1: when Max = Int32.MaxValue, adding 1 creates a numeric overflow. Although this would be rare, it is still a possibility.
2: when min > max + 1. When min = 10 and max = 5, the Next function throws an error. This may be what you want, but it may not be either. Or consider when min = 5 and max = 4. By adding 1, 5 is passed to the Next method, but it does not throw an error, when it really is an error; the Microsoft .NET code that I tested returns 5. So it really is not an 'exclusive' max when the max = the min. But when max < min for the Random.Next() function, it throws an ArgumentOutOfRangeException. So Microsoft's implementation is really inconsistent and buggy too in this regard.
You may want to simply swap the numbers when min > max so no error is thrown, but it totally depends on what is desired. If you want an error on invalid values, then it is probably better to also throw the error when Microsoft's exclusive maximum (max + 1) in our code equals the minimum, where MS fails to error in this case.
Handling a work-around for when max = Int32.MaxValue is a little inconvenient, but I expect to post a thorough function which handles both these situations. And if you want different behavior than how I coded it, suit yourself. But be aware of these 2 issues.
Happy coding!
Edit:
So I needed a random integer generator, and I decided to code it 'right'. So if anyone wants the full functionality, here's one that actually works. (But it doesn't win the simplest prize with only 2 lines of code. But it's not really complex either.)
''' <summary>
''' Generates a random Integer with any (inclusive) minimum or (inclusive) maximum values, with full range of Int32 values.
''' </summary>
''' <param name="inMin">Inclusive Minimum value. Lowest possible return value.</param>
''' <param name="inMax">Inclusive Maximum value. Highest possible return value.</param>
''' <returns></returns>
''' <remarks></remarks>
Private Function GenRandomInt(inMin As Int32, inMax As Int32) As Int32
Static staticRandomGenerator As New System.Random
If inMin > inMax Then Dim t = inMin : inMin = inMax : inMax = t
If inMax < Int32.MaxValue Then Return staticRandomGenerator.Next(inMin, inMax + 1)
' now max = Int32.MaxValue, so we need to work around Microsoft's quirk of an exclusive max parameter.
If inMin > Int32.MinValue Then Return staticRandomGenerator.Next(inMin - 1, inMax) + 1 ' okay, this was the easy one.
' now min and max give full range of integer, but Random.Next() does not give us an option for the full range of integer.
' so we need to use Random.NextBytes() to give us 4 random bytes, then convert that to our random int.
Dim bytes(3) As Byte ' 4 bytes, 0 to 3
staticRandomGenerator.NextBytes(bytes) ' 4 random bytes
Return BitConverter.ToInt32(bytes, 0) ' return bytes converted to a random Int32
End Function
A: Use System.Random:
Dim MyMin As Integer = 1, MyMax As Integer = 5, My1stRandomNumber As Integer, My2ndRandomNumber As Integer
' Create a random number generator
Dim Generator As System.Random = New System.Random()
' Get a random number >= MyMin and <= MyMax
My1stRandomNumber = Generator.Next(MyMin, MyMax + 1) ' Note: Next function returns numbers _less than_ max, so pass in max + 1 to include max as a possible value
' Get another random number (don't create a new generator, use the same one)
My2ndRandomNumber = Generator.Next(MyMin, MyMax + 1)
A: You should create a pseudo-random number generator only once:
Dim Generator As System.Random = New System.Random()
Then, if an integer suffices for your needs, you can use:
Public Function GetRandom(myGenerator As System.Random, ByVal Min As Integer, ByVal Max As Integer) As Integer
'min is inclusive, max is exclusive (dah!)
Return myGenerator.Next(Min, Max + 1)
End Function
as many times as you like. Using the wrapper function is justified only because the maximum value is exclusive - I know that the random numbers work this way but the definition of .Next is confusing.
Creating a generator every time you need a number is in my opinion wrong; the pseudo-random numbers do not work this way.
First, you get the problem with initialization which has been discussed in the other replies. If you initialize once, you do not have this problem.
Second, I am not at all certain that you get a valid sequence of random numbers; rather, you get a collection of the first number of multiple different sequences which are seeded automatically based on computer time. I am not certain that these numbers will pass the tests that confirm the randomness of the sequence.
A: If you are using Joseph's answer, which is a great answer, and you run these back to back like this:
dim i = GetRandom(1, 1715)
dim o = GetRandom(1, 1715)
Then the result could come back the same over and over because it processes the calls so quickly. This may not have been an issue in '08, but since processors are much faster today, the function doesn't allow the system clock enough time to change prior to making the second call.
Since the System.Random() class is seeded from the system clock, we need to avoid re-seeding it on every call. One way of accomplishing this is to reuse a single static generator, as in the example below:
Public Function GetRandom(ByVal min as Integer, ByVal max as Integer) as Integer
Static staticRandomGenerator As New System.Random
max += 1
Return staticRandomGenerator.Next(If(min > max, max, min), If(min > max, min, max))
End Function
A: Dim rnd As Random = New Random
rnd.Next(n)
A: Just for reference, the VB.NET function definition for RND and RANDOMIZE (which should give the same results as BASIC from the 1980s and all versions after) is:
Public NotInheritable Class VBMath
' Methods
Private Shared Function GetTimer() As Single
Dim now As DateTime = DateTime.Now
Return CSng((((((60 * now.Hour) + now.Minute) * 60) + now.Second) + (CDbl(now.Millisecond) / 1000)))
End Function
Public Shared Sub Randomize()
Dim timer As Single = VBMath.GetTimer
Dim projectData As ProjectData = ProjectData.GetProjectData
Dim rndSeed As Integer = projectData.m_rndSeed
Dim num3 As Integer = BitConverter.ToInt32(BitConverter.GetBytes(timer), 0)
num3 = (((num3 And &HFFFF) Xor (num3 >> &H10)) << 8)
rndSeed = ((rndSeed And -16776961) Or num3)
projectData.m_rndSeed = rndSeed
End Sub
Public Shared Sub Randomize(ByVal Number As Double)
Dim num2 As Integer
Dim projectData As ProjectData = ProjectData.GetProjectData
Dim rndSeed As Integer = projectData.m_rndSeed
If BitConverter.IsLittleEndian Then
num2 = BitConverter.ToInt32(BitConverter.GetBytes(Number), 4)
Else
num2 = BitConverter.ToInt32(BitConverter.GetBytes(Number), 0)
End If
num2 = (((num2 And &HFFFF) Xor (num2 >> &H10)) << 8)
rndSeed = ((rndSeed And -16776961) Or num2)
projectData.m_rndSeed = rndSeed
End Sub
Public Shared Function Rnd() As Single
Return VBMath.Rnd(1!)
End Function
Public Shared Function Rnd(ByVal Number As Single) As Single
Dim projectData As ProjectData = ProjectData.GetProjectData
Dim rndSeed As Integer = projectData.m_rndSeed
If (Number <> 0) Then
If (Number < 0) Then
Dim num1 As UInt64 = (BitConverter.ToInt32(BitConverter.GetBytes(Number), 0) And &HFFFFFFFF)
rndSeed = CInt(((num1 + (num1 >> &H18)) And CULng(&HFFFFFF)))
End If
rndSeed = CInt((((rndSeed * &H43FD43FD) + &HC39EC3) And &HFFFFFF))
End If
projectData.m_rndSeed = rndSeed
Return (CSng(rndSeed) / 1.677722E+07!)
End Function
End Class
While the Random CLASS is:
Public Class Random
' Methods
<__DynamicallyInvokable> _
Public Sub New()
Me.New(Environment.TickCount)
End Sub
<__DynamicallyInvokable> _
Public Sub New(ByVal Seed As Integer)
Me.SeedArray = New Integer(&H38 - 1) {}
Dim num4 As Integer = If((Seed = -2147483648), &H7FFFFFFF, Math.Abs(Seed))
Dim num2 As Integer = (&H9A4EC86 - num4)
Me.SeedArray(&H37) = num2
Dim num3 As Integer = 1
Dim i As Integer
For i = 1 To &H37 - 1
Dim index As Integer = ((&H15 * i) Mod &H37)
Me.SeedArray(index) = num3
num3 = (num2 - num3)
If (num3 < 0) Then
num3 = (num3 + &H7FFFFFFF)
End If
num2 = Me.SeedArray(index)
Next i
Dim j As Integer
For j = 1 To 5 - 1
Dim k As Integer
For k = 1 To &H38 - 1
Me.SeedArray(k) = (Me.SeedArray(k) - Me.SeedArray((1 + ((k + 30) Mod &H37))))
If (Me.SeedArray(k) < 0) Then
Me.SeedArray(k) = (Me.SeedArray(k) + &H7FFFFFFF)
End If
Next k
Next j
Me.inext = 0
Me.inextp = &H15
Seed = 1
End Sub
Private Function GetSampleForLargeRange() As Double
Dim num As Integer = Me.InternalSample
If ((Me.InternalSample Mod 2) = 0) Then
num = -num
End If
Dim num2 As Double = num
num2 = (num2 + 2147483646)
Return (num2 / 4294967293)
End Function
Private Function InternalSample() As Integer
Dim inext As Integer = Me.inext
Dim inextp As Integer = Me.inextp
If (++inext >= &H38) Then
inext = 1
End If
If (++inextp >= &H38) Then
inextp = 1
End If
Dim num As Integer = (Me.SeedArray(inext) - Me.SeedArray(inextp))
If (num = &H7FFFFFFF) Then
num -= 1
End If
If (num < 0) Then
num = (num + &H7FFFFFFF)
End If
Me.SeedArray(inext) = num
Me.inext = inext
Me.inextp = inextp
Return num
End Function
<__DynamicallyInvokable> _
Public Overridable Function [Next]() As Integer
Return Me.InternalSample
End Function
<__DynamicallyInvokable> _
Public Overridable Function [Next](ByVal maxValue As Integer) As Integer
If (maxValue < 0) Then
Dim values As Object() = New Object() { "maxValue" }
Throw New ArgumentOutOfRangeException("maxValue", Environment.GetResourceString("ArgumentOutOfRange_MustBePositive", values))
End If
Return CInt((Me.Sample * maxValue))
End Function
<__DynamicallyInvokable> _
Public Overridable Function [Next](ByVal minValue As Integer, ByVal maxValue As Integer) As Integer
If (minValue > maxValue) Then
Dim values As Object() = New Object() { "minValue", "maxValue" }
Throw New ArgumentOutOfRangeException("minValue", Environment.GetResourceString("Argument_MinMaxValue", values))
End If
Dim num As Long = (maxValue - minValue)
If (num <= &H7FFFFFFF) Then
Return (CInt((Me.Sample * num)) + minValue)
End If
Return (CInt(CLng((Me.GetSampleForLargeRange * num))) + minValue)
End Function
<__DynamicallyInvokable> _
Public Overridable Sub NextBytes(ByVal buffer As Byte())
If (buffer Is Nothing) Then
Throw New ArgumentNullException("buffer")
End If
Dim i As Integer
For i = 0 To buffer.Length - 1
buffer(i) = CByte((Me.InternalSample Mod &H100))
Next i
End Sub
<__DynamicallyInvokable> _
Public Overridable Function NextDouble() As Double
Return Me.Sample
End Function
<__DynamicallyInvokable> _
Protected Overridable Function Sample() As Double
Return (Me.InternalSample * 4.6566128752457969E-10)
End Function
' Fields
Private inext As Integer
Private inextp As Integer
Private Const MBIG As Integer = &H7FFFFFFF
Private Const MSEED As Integer = &H9A4EC86
Private Const MZ As Integer = 0
Private SeedArray As Integer()
End Class
A: I see a lot of answers from users who are not satisfied with using System.Random.
Despite the fact that I personally would still use System.Random, I was thinking about a way to use a GUID as the base for a random value. A GUID can be converted to a byte array using its ToByteArray method, and the resulting byte array can be converted to a numeric value using a BitConverter.
'Function for reuse (min is inclusive and max is exclusive)
Function GetRandom(min As Integer, max As Integer) As Integer
Return BitConverter.ToUInt64(Guid.NewGuid.ToByteArray) Mod (max - min) + min
End Function
'one-liner specific for your purpose (n is exclusive)
BitConverter.ToUInt64(Guid.NewGuid.ToByteArray) Mod (n - 1) + 1
Note that this is just a little thought experiment. I haven't tested the performance, nor have I investigated the actual "randomness" of the results. But for your purpose, it might just do the job.
The accepted answer uses the Microsoft.VisualBasic.VBMath.Rnd method, which indeed offers a simple and attractive oneliner, but I personally would avoid writing new code that uses the Microsoft.VisualBasic namespace.
A: Function xrand() As Long
Dim r1 As Long = Now.Day & Now.Month & Now.Year & Now.Hour & Now.Minute & Now.Second & Now.Millisecond
Dim RAND As Long = Math.Max(r1, r1 * 2)
Return RAND
End Function
[BBOYSE]
This is the best way, from scratch :P
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18676",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "65"
} |
Q: How to display "12 minutes ago" etc in a PHP webpage? Can anyone tell me how I can display a status message like "12 seconds ago" or "5 minutes ago" etc in a web page?
A: Here is the php code for the same:
function time_since($since) {
$chunks = array(
array(60 * 60 * 24 * 365 , 'year'),
array(60 * 60 * 24 * 30 , 'month'),
array(60 * 60 * 24 * 7, 'week'),
array(60 * 60 * 24 , 'day'),
array(60 * 60 , 'hour'),
array(60 , 'minute'),
array(1 , 'second')
);
for ($i = 0, $j = count($chunks); $i < $j; $i++) {
$seconds = $chunks[$i][0];
$name = $chunks[$i][1];
if (($count = floor($since / $seconds)) != 0) {
break;
}
}
$print = ($count == 1) ? '1 '.$name : "$count {$name}s";
return $print;
}
The function takes the number of seconds as input and outputs text such as:
*
*10 seconds
*1 minute
etc
A: PHP's \DateTime::diff returns a \DateInterval object on which you can get the minutes via the public i property (for example, $diff->i).
A: function timeAgo($timestamp){
$datetime1=new DateTime("now");
$datetime2=date_create($timestamp);
$diff=date_diff($datetime1, $datetime2);
$timemsg='';
if($diff->y > 0){
$timemsg = $diff->y .' year'. ($diff->y > 1?'s':'');
}
else if($diff->m > 0){
$timemsg = $diff->m . ' month'. ($diff->m > 1?'s':'');
}
else if($diff->d > 0){
$timemsg = $diff->d .' day'. ($diff->d > 1?'s':'');
}
else if($diff->h > 0){
$timemsg = $diff->h .' hour'.($diff->h > 1 ? 's':'');
}
else if($diff->i > 0){
$timemsg = $diff->i .' minute'. ($diff->i > 1?'s':'');
}
else if($diff->s > 0){
$timemsg = $diff->s .' second'. ($diff->s > 1?'s':'');
}
$timemsg = $timemsg.' ago';
return $timemsg;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18685",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34"
} |
Q: GUI Automation testing - Window handle questions Our company is currently writing a GUI automation testing tool for compact framework applications. We initially evaluated many tools, but none of them was right for us.
By using the tool you can record test-cases and group them together into test-suites. For every test-suite an application is generated which launches the application-under-test and simulates user input.
In general the tool works fine, but as we are using window handles for simulating user input, you can't do very many things. For example it is impossible for us to get the name of a control (we just get the caption).
Another problem using window handles is checking for a change. At the moment we simulate a click on a control and depending on the result we know if the application has gone to the next step.
Is there any other (simpler) way for doing such things (for example the message queue or anything else)?
A: Interesting problem! I've not done any low-level (think Win32) Windows programming in a while, but here's what I would do.
Use a named pipe and have your application listen to it. Using this named pipe as a communication medium, implement a real simple protocol whereby you can query the application for the name of a control given its HWND, or other things you find useful. Make sure the protocol is rich enough so that there is sufficient information exchanged between your application and the test framework. Make sure that the test framework does not yield too much "special behavior" from the app, because then you wouldn't really be testing the features, but rather your test framework.
There's probably way more elegant and cooler ways to implement this, but this is what I remember from the top of my head, using only simple Win32 API calls.
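As a rough sketch of that idea in C# (the pipe name and the one-command protocol are invented for the example, and System.IO.Pipes from .NET 3.5 is assumed; on older frameworks you would P/Invoke CreateNamedPipe instead):
using System.IO;
using System.IO.Pipes;
class TestProbe
{
    // Runs inside the application under test and answers simple queries
    // from the test framework over a named pipe.
    static void Main()
    {
        using (NamedPipeServerStream server = new NamedPipeServerStream("AppTestProbe"))
        {
            server.WaitForConnection();
            StreamReader reader = new StreamReader(server);
            StreamWriter writer = new StreamWriter(server);
            writer.AutoFlush = true;
            string request;
            while ((request = reader.ReadLine()) != null)
            {
                // e.g. "NAME <hwnd>" -> reply with that control's name
                if (request.StartsWith("NAME "))
                    writer.WriteLine(LookupControlName(request.Substring(5)));
                else
                    writer.WriteLine("ERR unknown command");
            }
        }
    }
    // Hypothetical: map the HWND back to a control in this process.
    static string LookupControlName(string hwnd)
    {
        return "btnSubmit";
    }
}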
Another approach, which we have implemented for our product at work, is to record user events, such as mouse clicks and key events in an event script. This should be rich enough so that you can have the application play it back, artificially injecting those events into the message queue, and have it behave the same way it did when you first recorded the script. You basically simulate the user when you play back the script.
In addition to that, you can record any important state (user's document, preferences, GUI controls hierarchy, etc.), once when you record the script, and once when you play it back. This gives you two sets of data you can compare, to make sure for instance that everything stays the same. This solution gives you tests that not easy to modify (you have to re-record if your GUI changes), but that provide awesome regression testing.
(EDIT: This is also a terrific QA tool during beta testing, for instance: just have your users record their actions, and if there's a crash, you have a good chance of easily reproducing the problem by just playing back the script)
Good luck!
Carl
A: If the automated GUI testing tool has knowledge about the framework the application is written in, it can use that information to make better or more advanced scripts. TestComplete, for example, knows about Borland's VCL and WinForms. If you test applications built using Windows Presentation Foundation, it has advanced support for this built in.
A: use NUnitForms. I've used them with great success for single and multi threading apps and you don't have to worry about handles and stuff like that
Here are some posts about NUnitForms worth reading
NUnitForms and failed DragDrop registration - problem of MTA vs STA
Compiled application exe GUI testing with NUnitForms
A: I finally found a solution to communicate between the testing-application and the application-under-test: Managed Spy. It's basically a .NET application build on top of ManagedSpyLib.
ManagedSpyLib allows programmatic access to the Windows Forms controls of another process. For this it uses window hooks and memory-mapped files.
Thanks for all who helped me to get to this solution!
A: Managed Spy does not provide a solution for compact framework applications.
The company Jamo Solutions (www.jamosolutions.com) meets the requirements for automation testing on mobile devices, including .net compact framework applications.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18696",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Webservice alive forever I often use webservices this way
public void CallWebservice()
{
mywebservice web = new mywebservice();
web.call();
}
but sometimes I do this
private mywebservice web;
public Constructor()
{
web = new mywebservice();
}
public void CallWebservice()
{
web.call();
}
I like the second approach very much, but sometimes it times out and I have to start the application again. The first one, I think, brings overhead and is not very efficient; in fact, sometimes the first call returns a WebException - ConnectFailure (I don't know why).
I found an article (Web Service Woes (A light at the end of the tunnel?)) that gets past the timeout by setting the KeepAlive property to false in the overridden GetWebRequest function; here is the code:
Protected Overrides Function GetWebRequest(ByVal uri As System.Uri) As System.Net.WebRequest
Dim webRequest As Net.HttpWebRequest = CType(MyBase.GetWebRequest(uri), Net.HttpWebRequest)
webRequest.KeepAlive = False
Return webRequest
End Function
The question is: is it possible to extend the webservice timeout forever, and finally, how do you implement your webservices to handle this issue?
A: The classes generated by Visual Studio for webservices are just proxies with little state so creating them is pretty cheap. I wouldn't worry about memory consumption for them.
If what you are looking for is a way to call the webmethod in one line you can simply do this:
new mywebservice().call()
Cheers
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18702",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to create Projects/Tasks for Project Server 2003 via C#? I need to be able to create basic MS Project items (tasks, projects, resources, etc.) programmatically from my app to my Project Server 2003 install, and haven't found any good examples. Can anyone point me to some good references or have some sample code of connecting to the server and creating these items?
A: Developing against Project Server 2003 isn't the friendliest experience around, but I have worked a little bit with the PDS (Project Data Service), which is SOAP-based:
http://msdn.microsoft.com/en-us/library/aa204408(office.11).aspx
It contains .NET samples there
A: As far as I know, the only programmatic access to PS 2003 is through the PDS.
I don't know if it would work, but you could try writing a managed extension for Microsoft Project 2003 (the client application). There is a managed API for MS Project 2003, and you might be able to leverage that to communicate with the server, get a project and update it, all in code.
Good luck!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18705",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |