Q: Are foreign keys really necessary in a database design? As far as I know, foreign keys (FK) are used to aid the programmer to manipulate data in the correct way. Suppose a programmer is actually doing this in the right manner already, then do we really need the concept of foreign keys?
Are there any other uses for foreign keys? Am I missing something here?
A: I suppose you are talking about foreign key constraints enforced by the database. You probably already are using foreign keys, you just haven't told the database about it.
Suppose a programmer is actually doing
this in the right manner already, then
do we really need the concept of
foreign keys?
Theoretically, no. However, there has never been a piece of software without bugs.
Bugs in application code are typically not that dangerous - you identify the bug and fix it, and after that the application runs smoothly again. But if a bug allows corrupt data to enter the database, then you are stuck with it! It's very hard to recover from corrupt data in the database.
Consider if a subtle bug in FogBugz allowed a corrupt foreign key to be written to the database. It might be easy to fix the bug and quickly push the fix to customers in a bugfix release. However, how should the corrupt data in dozens of databases be fixed? Correct code might now suddenly break because the assumptions about the integrity of foreign keys don't hold anymore.
In web applications you typically only have one program speaking to the database, so there is only one place where bugs can corrupt the data. In an enterprise application there might be several independent applications speaking to the same database (not to mention people working directly with the database shell). There is no way to be sure that all applications follow the same assumptions without bugs, always and forever.
If constraints are encoded in the database, then the worst that can happen with bugs is that the user is shown an ugly error message about some SQL constraint not being satisfied. This is much preferable to letting corrupt data into your enterprise database, where it in turn will break all your applications or just lead to all kinds of wrong or misleading output.
Oh, and foreign key constraints can also improve performance, because the key columns are indexed by default in many databases. I can't think of any reason not to use foreign key constraints.
A: Is there a benefit to not having foreign keys? Unless you are using a crappy database, FKs aren't that hard to set up. So why would you have a policy of avoiding them? It's one thing to have a naming convention that says a column references another, it's another to know the database is actually verifying that relationship for you.
A: FKs are very important and should always exist in your schema, unless you are eBay.
A: Foreign keys can also help the programmer write less code using things like ON DELETE CASCADE. This means that if you have one table containing users and another containing orders or something, then deleting a user could automatically delete all orders that point to that user.
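For illustration, a minimal sketch of such a relationship in SQL (the table and column names are made up for the example; exact syntax varies slightly between databases):
CREATE TABLE users (
    id   INT          NOT NULL PRIMARY KEY,
    name VARCHAR(100) NOT NULL
);
CREATE TABLE orders (
    id      INT NOT NULL PRIMARY KEY,
    user_id INT NOT NULL,
    -- deleting a row from users automatically deletes its orders
    CONSTRAINT fk_orders_users FOREIGN KEY (user_id)
        REFERENCES users (id) ON DELETE CASCADE
);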
A: I think some single thing at some point must be responsible for ensuring valid relationships.
For example, Ruby on Rails does not use foreign keys, but it validates all the relationships itself. If you only ever access your database from that Ruby on Rails application, this is fine.
However, if you have other clients which are writing to the database, then without foreign keys they need to implement their own validation. You then have two copies of the validation code which are most likely different, which any programmer should be able to tell is a cardinal sin.
At that point, foreign keys really are necessary, as they allow you to move the responsibility to a single point again.
A: Foreign keys allow someone who has not seen your database before to determine the relationship between tables.
Everything may be fine now, but think what will happen when your programmer leaves and someone else has to take over.
Foreign keys will allow them to understand the database structure without trawling through thousands of lines of code.
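As a rough illustration of that point, once the relationships are declared, a newcomer can even list them straight from the metadata (a sketch using the INFORMATION_SCHEMA views; the exact views and join conditions vary by database):
SELECT rc.CONSTRAINT_NAME,
       fk.TABLE_NAME AS referencing_table,
       pk.TABLE_NAME AS referenced_table
FROM INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS rc
JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS fk ON fk.CONSTRAINT_NAME = rc.CONSTRAINT_NAME
JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk ON pk.CONSTRAINT_NAME = rc.UNIQUE_CONSTRAINT_NAME;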
A:
As far as I know, foreign keys are used to aid the programmer to manipulate data in the correct way.
FKs allow the DBA to protect data integrity from the fumbling of users when the programmer fails to do so, and sometimes to protect against the fumbling of programmers.
Suppose a programmer is actually doing this in the right manner already, then do we really need the concept of foreign keys?
Programmers are mortal and fallible. FKs are declarative which makes them harder to screw up.
Are there any other uses for foreign keys? Am I missing something here?
Although this is not why they were created, FKs provide strong reliable hinting to diagramming tools and to query builders. This is passed on to end users, who desperately need strong reliable hints.
A: I can't imagine designing a database without foreign keys. Without them, eventually you are bound to make a mistake and corrupt the integrity of your data.
They are not required, strictly speaking, but the benefits are huge.
I'm fairly certain that FogBugz does not have foreign key constraints in the database. I would be interested to hear how the Fog Creek Software team structures their code to guarantee that they will never introduce an inconsistency.
A: A database schema without FK constraints is like driving without a seat belt.
One day, you'll regret it. Not spending that little extra time on the design fundamentals and data integrity is a sure fire way of assuring headaches later.
Would you accept code in your application that was that sloppy - code that bypassed the public interface and modified member objects and data structures directly?
Why do you think this has been made hard, and even unacceptable, within modern languages?
A: They are not strictly necessary, in the way that seatbelts are not strictly necessary. But they can really save you from doing something stupid that messes up your database.
It's so much nicer to debug a FK constraint error than have to reconstruct a delete that broke your application.
A: They are important, because your application is not the only way data can be manipulated in the database. Your application may handle referential integrity as honestly as it wants, but all it takes is one bozo with the right privileges to come along and issue an insert, delete or update command at the database level, and all your application referential integrity enforcement is bypassed. Putting FK constraints in at the database level means that, barring this bozo choosing to disable the FK constraint before issuing their command, the FK constraint will cause a bad insert/update/delete statement to fail with a referential integrity violation.
A: I think about it in terms of cost/benefit... In MySQL, adding a constraint is a single additional line of DDL. It's just a handful of key words and a couple of seconds of thought. That's the only "cost" in my opinion...
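For example, the "single additional line" being referred to looks roughly like this (illustrative table and column names):
ALTER TABLE orders
    ADD CONSTRAINT fk_orders_customer
    FOREIGN KEY (customer_id) REFERENCES customers (id);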
Tools love foreign keys. Foreign keys prevent bad data (that is, orphaned rows) that may not affect business logic or functionality and would therefore go unnoticed and build up. They also prevent developers who are unfamiliar with the schema from implementing entire chunks of work without realizing they're missing a relationship. Perhaps everything is great within the scope of your current application, but if you missed something and someday something unexpected is added (think fancy reporting), you might be in a spot where you have to manually clean up bad data that's been accumulating since the inception of the schema without a database-enforced check.
The little time it takes to codify what's already in your head when you're putting things together could save you or someone else a bunch of grief months or years down the road.
The question:
Are there any other uses for foreign
keys? Am I missing something here?
It is a bit loaded. Insert comments, indentation or variable naming in place of "foreign keys"... If you already understand the thing in question perfectly, it's "no use" to you.
A: Yes.
*
*They keep you honest
*They keep new developers honest
*You can do ON DELETE CASCADE
*They help you to generate nice diagrams that self explain the links between tables
A: Entropy reduction. Reduce the potential for chaotic scenarios to occur in the database.
We have a hard time as it is considering all the possibilities, so, in my opinion, entropy reduction is key to the maintenance of any system.
When we make an assumption, for example that each order has a customer, that assumption should be enforced by something. In databases that "something" is foreign keys.
I think this is worth the tradeoff in development speed. Sure, you can code quicker with them off and this is probably why some people don't use them. Personally I have killed a number of hours with NHibernate and some foreign key constraint that gets angry when I perform some operation. HOWEVER, I know what the problem is so it's less of a problem. I'm using normal tools and there are resources to help me work around this, possibly even people to help!
The alternative is allow a bug to creep into the system (and given enough time, it will) where a foreign key isn't set and your data becomes inconsistent. Then, you get an unusual bug report, investigate and "OH". The database is screwed. Now how long is that going to take to fix?
A:
Suppose a programmer is actually doing this in the right manner already
Making such a supposition seems to me to be an extremely bad idea; in general software is phenomenally buggy.
And that's the point, really. Developers can't get things right, so ensuring the database can't be filled with bad data is a Good Thing.
Although in an ideal world, natural joins would use relationships (i.e. FK constraints) rather than matching column names. This would make FKs even more useful.
A: Personally, I am in favor of foreign keys because it formalizes the relationship between the tables. I realize that your question presupposes that the programmer is not introducing data that would violate referential integrity, but I have seen way too many instances where data referential integrity is violated, despite best intentions!
Before foreign key constraints (a.k.a. declarative referential integrity, or DRI), a lot of time was spent implementing these relationships using triggers. The fact that we can formalize the relationship with a declarative constraint is very powerful.
@John - Other databases may automatically create indexes for foreign keys, but SQL Server does not. In SQL Server, foreign key relationships are only constraints. You must define your index on foreign keys separately (which can be of benefit).
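For example, a sketch of defining that index yourself in SQL Server (object names are illustrative):
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId);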
Edit: I'd like to add that, IMO, the use of foreign keys in support of ON DELETE or ON UPDATE CASCADE is not necessarily a good thing. In practice, I have found that cascade on delete should be carefully considered based on the relationship of the data -- e.g. do you have a natural parent-child where this may be OK or is the related table a set of lookup values. Using cascaded updates implies you are allowing the primary key of one table to be modified. In that case, I have a general philosophical disagreement in that the primary key of a table should not change. Keys should be inherently constant.
A: Foreign keys help enforce referential integrity at the data level. They also improve performance because they're normally indexed by default.
A: Without a foreign key how do you tell that two records in different tables are related?
I think what you are referring to is referential integrity, where the child record is not allowed to be created without an existing parent record etc. These are often known as foreign key constraints - but are not to be confused with the existence of foreign keys in the first place.
A: You can view foreign keys as a constraint that,
*
*Help maintain data integrity
*Show how data is related to each other (which can help in enforcing business logic and rules)
*If used correctly, can help increase the efficiency with which the data is fetched from the tables.
A: We don't currently use foreign keys. And for the most part we don't regret it.
That said - we're likely to start using them a lot more in the near future, for two related reasons:
*
*Diagramming. It's so much easier to produce a diagram of a database if there are foreign key relationships correctly used.
*Tool support. It's a lot easier to build data models using Visual Studio 2008 that can be used for LINQ to SQL if there are proper foreign key relationships.
So I guess my point is that we've found that if we're doing a lot of manual SQL work (construct query, run query, blahblahblah) foreign keys aren't necessarily essential. Once you start getting into using tools, though, they become a lot more useful.
A: The best thing about foreign key constraints (and constraints in general, really) are that you can rely on them when writing your queries. A lot of queries can become a lot more complicated if you can't rely on the data model holding "true".
In code, we'll generally just get an exception thrown somewhere - but in SQL, we'll generally just get the "wrong" answers.
In theory, SQL Server could use constraints as part of a query plan - but except for check constraints for partitioning, I can't say that I've ever actually witnessed that.
A: Foreign keys were never explicitly declared (FOREIGN KEY REFERENCES table(column)) in the projects (business applications and social networking websites) that I worked on.
But there always was a kind of convention of naming columns which were foreign keys.
It's like with database normalization - you have to know what you are doing and what the consequences are (mainly for performance).
I am aware of advantages of foreign keys (data integrity, index for foreign key column, tools aware of database schema), but also I am afraid of using foreign keys as general rule.
Also various database engines could serve foreign keys in a different way, which could lead to subtle bugs during migration.
Removing all orders and invoices of a deleted client with ON DELETE CASCADE is the perfect example of a nice-looking but badly designed database schema.
A: Yes. The ON DELETE [RESTRICT|CASCADE] keeps developers from stranding data, keeping the data clean. I recently joined a team of Rails developers who did not focus on database constraints such as foreign keys.
Luckily, I found these: http://www.redhillonrails.org/foreign_key_associations.html -- RedHill on Ruby on Rails plug-ins generate foreign keys using the convention over configuration style. A migration with product_id will create a foreign key to the id in the products table.
Check out the other great plug-ins at RedHill, including migrations wrapped in transactions.
A: If you plan on generating your data access code (e.g., Entity Framework or any other ORM), you entirely lose the ability to generate a hierarchical model without foreign keys.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18717",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "134"
} |
Q: How can I prevent a server from becoming locked after a Remote Desktop session As part of our databuild run a 3rd party program (3D Studio Max) to export a number of assets. Unfortunately if a user is not currently logged in, or the machine is locked, then Max does not run correctly.
This can be solved for freshly booted machines by using a method such as TweakUI for automatic login. However when a user connects via Remote Desktop (to initiate a non-scheduled build, change a setting, whatever) then after the session ends the machine is left in a locked state with Max unable to run.
I'm looking for a way to configure windows (via fair means or foul) so either it does not lock when the remote session ends, or it "unlocks" itself a short while after. I'm aware of a method under XP where you can run a batchfile on the machine which kicks the remote user off, but this does not appear to work on Windows Server.
A: There is a separate terminal service connection available called the 'console' connection.
You can connect to this space using mstsc /console /v:servername. Use mstsc /? for full command line options.
This allows you to connect, open up the terminal services manager and boot the bad sessions.
A: Logging in over RDP shouldn't affect whether the console locks. If you don't log out of RDP (just closing the client keeps your session pending), then your session will be locked. You can solve that with idle timeouts in Terminal Services Manager.
If your console is locking, that's a separate policy in Local Computer Settings or some such. If you have a domain, set it with a GPO. If you need the exact name of the policy, let me know and I'll dig it up for you.
A: I assume by unlock you want to make sure that disconnected sessions are logged off. To do this
*
*Administrative Tools | Terminal Services Configuration
*Right-Click RDP-TCP on the Connections folder and choose Properties
*Go to the Sessions tab and select the Override user settings check box
*Configure the End a Disconnected session to your needed timeout value
more reading at http://technet.microsoft.com/en-us/library/cc758177.aspx
A: You might want to look at using the "shadow" utility. This allows you to essentially proxy into an existing remote desktop session. You could log into the console of the machine with the account you need, then users could open non-console remote desktop sessions to the machine (or to another machine) then use shadow to connect to the same console session. The users will have to be in the administrators group on the machine.
Although, this might be as simple as telling people not to use the console session when logging into the machine using remote desktop.
A: Possible Solution from here.
To disable the Lock Computer button,
open Regedit and browse to
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\
System and
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Policies\
System and create a new REG_DWORD
value in each called
DisableLockWorkstation. Setting this
value to 0 will allow the Lock
Computer button to be used, while 1
will disable it.
A: There may be a problem if you are running these tasks as Administrator and others are logging in via Remote Desktop as Administrator. The task should be run from its own account.
A: With the most recent terminal services client you can connect to the console using the /ADMIN switch.
So "Computer:" will be something like:
myworkstation.mydomain.local /ADMIN
-Ed
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18719",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: How can I convert Markdown documents to HTML en masse? I'm writing some documentation in Markdown, and creating a separate file for each section of the doc. I would like to be able to convert all the files to HTML in one go, but I can't find anyone else who has tried the same thing. I'm on a Mac, so I would think a simple bash script should be able to handle it, but I've never done anything in bash and haven't had any luck. It seems like it should be simple to write something so I could just run:
markdown-batch ./*.markdown
Any ideas?
A: Use pandoc — it's a commandline tool that lets you convert from one format to another. This tool supports Markdown to HTML and back.
E.g. to generate HTML from Markdown, run:
pandoc -f markdown index.md > index.html
A: This is how you would do it in Bash.
for i in ./*.markdown; do perl markdown.pl --html4tags $i > $i.html; done;
Of course, you need the Markdown script.
A: If you have Node.js installed, then you can use the md-pug-to-html converter (https://www.npmjs.com/package/md-pug-to-html). It batch-converts Markdown to HTML. It can also work with Pug templates, but you can use it without them.
The conversion is performed in the terminal with just one command:
npx md-pug-to-html /home/content
where:
*
*npx is an npm command that installs md-pug-to-html at the first launch, and then launches the md-pug-to-html converter.
*/home/content is a directory with your Markdown files. You may have another one.
The converter has various settings and can be used both in the CLI command line and has an API for use in applications.
There is detailed documentation on the MdPugToHtml converter in English and Russian.
A: You can do this really easily with VS Code. (Well, this is not a command line tool, but proved itself to be super helpful.)
*
*Install the Markdown All In One extension by Yu Zhang
*Open the VS Code Command Palette (Ctrl-Shift-P), and select Markdown All In One: Print documents to HTML (select a source folder)
*Tip: If you want to make your export portable, you want to change absolute image paths to relative paths by using the following setting in your settings.json (Ctrl-Shift-P -> Preferences: Open Settings (JSON))
"markdown.extension.print.absoluteImgPath": false
In this way, after conversion, just copy all non-markdown files (images) to the destination folder and the HTML pages are portable.
A: I use this in a .bat file:
@echo off
for %%i in (*.txt) do python markdown.py "%%i"
A: // using Bash in mac
for i in *.md; do asciidoc $i; done;
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18754",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "41"
} |
Q: Using ASP.NET MVC, how to best avoid writing both the Add View and Edit View? The Add view and the Edit view are often incredibly similar that it is unwarranted to write 2 views. As the app evolves you would be making the same changes to both.
However, there are usually subtle differences. For instance, a field might be read-only once it's been added, and if that field is a DropDownList you no longer need that List in the ViewData.
So, should I create a view data class which contains all the information for both views, where, depending on the operation you're performing, certain properties will be null?
Should I include the operation in the view data as an enum?
Should I surround all the subtle differences with <% if( ViewData.Model.Op == Ops.Editing ) { %> ?
Or is there a better way?
A: It's pretty easy really. Let's assume you're editing a blog post.
Here's your 2 actions for new/edit:
public class BlogController : Controller
{
public ActionResult New()
{
var post = new Post();
return View("Edit", post);
}
public ActionResult Edit(int id)
{
var post = _repository.Get(id);
return View(post);
}
....
}
And here's the view:
<% using(Html.Form("save")) { %>
<%= Html.Hidden("Id") %>
<label for="Title">Title</label>
<%= Html.TextBox("Title") %>
<label for="Body">Body</label>
<%= Html.TextArea("Body") %>
<%= Html.Submit("Submit") %>
<% } %>
And here's the Save action that the view submits to:
public ActionResult Save(int id, string title, string body)
{
var post = id == 0 ? new Post() : _repository.Get(id);
post.Title = title;
post.Body = body;
_repository.Save(post);
return RedirectToAction("list");
}
A: I don't like the Views to become too complex, and so far I have tended to have separate views for Edit and Add. I use a user control to store the common elements to avoid repetition. Both of the views will be centered around the same ViewData, and I have a marker on my data to say whether the object is new or an existing object.
This isn't any more elegant than what you have stipulated, so I wonder if any of the Django or Rails guys can provide any input.
I love ASP.NET MVC, but it is still maturing and still needs more sugar added to take away some of the friction of creating websites.
A: I personally just prefer to use the if/else right there in the view. It helps me see everything going on in view at once.
If you want to avoid the tag soup though, I would suggest creating a helper method.
<%= Helper.ProfessionField() %>
string ProfessionField()
{
if(IsNewItem) { return /* some drop down code */ }
else { return "<p>" + _profession+ "</p>"; }
}
A: You can specify a CustomViewData class and pass the parameters here.
public class MyViewData {
public bool IsReadOnly { get; set; }
public ModelObject MyObject { get; set; }
}
And both views should implement this ViewData.
As a result you can use provided IsReadOnly property to manage the UserControl result.
As the controller uses this, you can unit test it, and since your views don't contain implementation logic, you can respect the MVC principles.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18757",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: What's the difference between a Table Scan and a Clustered Index Scan? Since both a Table Scan and a Clustered Index Scan essentially scan all records in the table, why is a Clustered Index Scan supposedly better?
As an example - what's the performance difference between the following when there are many records?:
declare @temp table(
SomeColumn varchar(50)
)
insert into @temp
select 'SomeVal'
select * from @temp
-----------------------------
declare @temp table(
RowID int not null identity(1,1) primary key,
SomeColumn varchar(50)
)
insert into @temp
select 'SomeVal'
select * from @temp
A: In a table without a clustered index (a heap table), data pages are not linked together - so traversing pages requires a lookup into the Index Allocation Map.
A clustered table, however, has its data pages linked in a doubly linked list - making sequential scans a bit faster. Of course, in exchange, you have the overhead of keeping the data pages in order on INSERT, UPDATE, and DELETE. A heap table, however, requires a second write to the IAM.
If your query has a RANGE operator (e.g.: SELECT * FROM TABLE WHERE Id BETWEEN 1 AND 100), then a clustered table (being in a guaranteed order) would be more efficient - as it could use the index pages to find the relevant data page(s). A heap would have to scan all rows, since it cannot rely on ordering.
And, of course, a clustered index lets you do a CLUSTERED INDEX SEEK, which is pretty much optimal for performance...a heap with no indexes would always result in a table scan.
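To make the terminology concrete, here is a minimal sketch (illustrative names): a table is a heap until it gets a clustered index, which can be added explicitly or implied by a clustered primary key:
-- a heap: no clustered index
CREATE TABLE dbo.OrdersHeap (OrderId INT NOT NULL, CustomerId INT NOT NULL);
-- turn it into a clustered table
CREATE CLUSTERED INDEX IX_OrdersHeap_OrderId ON dbo.OrdersHeap (OrderId);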
So:
*
*For your example query where you select all rows, the only difference is the doubly linked list a clustered index maintains. This should make your clustered table just a tiny bit faster than a heap with a large number of rows.
*For a query with a WHERE clause that can be (at least partially) satisfied by the clustered index, you'll come out ahead because of the ordering - so you won't have to scan the entire table.
*For a query that is not satisfied by the clustered index, you're pretty much even...again, the only difference being that doubly linked list for sequential scanning. In either case, you're suboptimal.
*For INSERT, UPDATE, and DELETE a heap may or may not win. The heap doesn't have to maintain order, but does require a second write to the IAM. I think the relative performance difference would be negligible, but also pretty data dependent.
Microsoft has a whitepaper which compares a clustered index to an equivalent non-clustered index on a heap (not exactly the same as I discussed above, but close). Their conclusion is basically to put a clustered index on all tables. I'll do my best to summarize their results (again, note that they're really comparing a non-clustered index to a clustered index here - but I think it's relatively comparable):
*
*INSERT performance: clustered index wins by about 3% due to the second write needed for a heap.
*UPDATE performance: clustered index wins by about 8% due to the second lookup needed for a heap.
*DELETE performance: clustered index wins by about 18% due to the second lookup needed and the second delete needed from the IAM for a heap.
*single SELECT performance: clustered index wins by about 16% due to the second lookup needed for a heap.
*range SELECT performance: clustered index wins by about 29% due to the random ordering for a heap.
*concurrent INSERT: heap table wins by 30% under load due to page splits for the clustered index.
A: http://msdn.microsoft.com/en-us/library/aa216840(SQL.80).aspx
The Clustered Index Scan logical and physical operator scans the clustered index specified in the Argument column. When an optional WHERE:() predicate is present, only those rows that satisfy the predicate are returned. If the Argument column contains the ORDERED clause, the query processor has requested that the rows' output be returned in the order in which the clustered index has sorted them. If the ORDERED clause is not present, the storage engine will scan the index in the optimal way (not guaranteeing the output to be sorted).
http://msdn.microsoft.com/en-us/library/aa178416(SQL.80).aspx
The Table Scan logical and physical operator retrieves all rows from the table specified in the Argument column. If a WHERE:() predicate appears in the Argument column, only those rows that satisfy the predicate are returned.
A: A table scan has to examine every single row of the table. The clustered index scan only needs to scan the index. It doesn't scan every record in the table. That's the point, really, of indices.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18764",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "76"
} |
Q: Importing C++ enumerations into C# I'm currently working on creating a new C# project that needs to interact with an older C++ application. There is an error enumeration that already exists in the C++ app that I need to use in the C# app.
I don't want to just re-declare the enumeration in C# because that could cause sync issues down the line if the files aren't updated together.
All that being said my question is this:
Is there a way for me to take an enumeration declared like so:
typedef enum
{
eDEVICEINT_ERR_FATAL = 0x10001
...
} eDeviceIntErrCodes;
and use it in a C# program like so:
eDeviceIntErrCodes.eDEVICEINT_ERR_FATAL
A: Simple answer is going to be no. Sorry, you are going to have to re-declare.
I have, in the past, however, written scripts to import my C++ enums into a C# format in an enums.cs file and run them as part of the build; that way everything syncs.
A: In C/C++ you can #include a .cs file which contains the enumeration definition. Careful use of preprocessor directives takes care of the syntax differences between C# and C.
Example:
#if CSharp
namespace MyNamespace.SharedEnumerations
{
public
#endif
enum MyFirstEnumeration
{
Autodetect = -1,
Windows2000,
WindowsXP,
WindowsVista,
OSX,
Linux,
// Count must be last entry - is used to determine number of items in the enum
Count
};
#if CSharp
public
#endif
enum MessageLevel
{
None, // Message is ignored
InfoMessage, // Message is written to info port.
InfoWarning, // Message is written to info port and warning is issued
Popup // User is alerted to the message
};
#if CSharp
public delegate void MessageEventHandler(MessageLevel level, string message);
}
#endif
In your C# project, set a conditional compilation symbol "CSharp", make sure no such preprocessor definition exists in the C/C++ build environment.
Note that this will only ensure both parts are synchronised at build time. If you mix-and-match binaries from different builds, the guarantee fails.
A: Check out the PInvoke Interop Assistant tool: http://www.codeplex.com/clrinterop/Release/ProjectReleases.aspx?ReleaseId=14120. It's a useful tool for generating PInvoke signatures for native methods.
If I feed it your enum it generates this code. There is a command line version of the tool included so you could potentially build an automated process to keep the C# definition of the enum up to date whenever the C++ version changes.
public enum eDeviceIntErrCodes
{
/// eDEVICEINT_ERR_FATAL -> 0x10001
eDEVICEINT_ERR_FATAL = 65537,
}
A: If you had declared the enum like:
namespace blah
{
enum DEVICE_ERR_CODES
{
eDEVICEINT_ERR_FATAL = 0x10001,
eDEVICEINT_ERR_OTHER = 0x10002,
};
}
and in another file:
DEVICE_ERR_CODES eDeviceIntErrCodes;
and named the enum file with a .cs extension, you might be able to get it to work.
You'd reference it like:
DEVICE_ERR_CODES err = DEVICE_ERR_CODES.eDEVICEINT_ERR_FATAL;
A: If you define a strong enum in C++/CLI, the enum codes will be included in the DLL metadata. So, you can use the enum codes in C#.
public enum class eDeviceIntErrCodes: int
{
eDEVICEINT_ERR_FATAL = 0x10001
...
};
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18765",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: How to create a new instance of Sql Server 2005 I forgot my password for Sql Server 2005. Windows Authentication is not enabled so I cannot login. How can I remove the current instance and create a new db instance? Or is there a better solution exists?
A: Assuming you are a member of the Windows Administrators group, you can put the server in Single User mode. You could try this -
http://blogs.msdn.com/raulga/archive/2007/07/12/disaster-recovery-what-to-do-when-the-sa-account-password-is-lost-in-sql-server-2005.aspx
A: My read of the question was that the server is set up to use SQL authentication only, and perhaps you don't know the sa password or any other SQL login credentials? If so, you might be able to change the authentication mode. For SQL Server 2005 default instances, it's stored in the registry at:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL.1\MSSQLServer
in a DWORD called LoginMode. A value of 2 indicates Mixed Mode (both Windows and SQL authentication are supported); I think 0 is Windows only and 1 is SQL only. You can try changing it to 2, restart the MSSQL service, then try to get into the SQL management studio after logging into the machine as an administrator.
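Once you can connect with Windows authentication, you can reset the forgotten SQL login from there - a sketch for SQL Server 2005 (substitute your own password):
ALTER LOGIN sa ENABLE;
ALTER LOGIN sa WITH PASSWORD = 'SomeNewStrongPassword';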
If that fails, you can create another instance by re-running the setup program.
A: Have you tried connecting when logged on as domain/server-local Administrator?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18772",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Favourite performance tuning tricks When you have a query or stored procedure that needs performance tuning, what are some of the first things you try?
A: CREATE INDEX
Assure there are indexes available for your WHERE and JOIN clauses. This will speed data access greatly.
If your environment is a data mart or warehouse, indexes should abound for almost any conceivable query.
In a transactional environment, the number of indexes should be lower and their definitions more strategic so that index maintenance doesn't drag down resources. (Index maintenance is when the leaves of an index must be changed to reflect a change in the underlying table, as with INSERT, UPDATE, and DELETE operations.)
Also, be mindful of the order of fields in the index - the more selective (higher cardinality) a field, the earlier in the index it should appear. For example, say you're querying for used automobiles:
SELECT i.make, i.model, i.price
FROM dbo.inventory i
WHERE i.color = 'red'
AND i.price BETWEEN 15000 AND 18000
Price generally has higher cardinality. There may be only a few dozen colors available, but quite possibly thousands of different asking prices.
Of these index choices, idx01 provides the faster path to satisfy the query:
CREATE INDEX idx01 ON dbo.inventory (price, color)
CREATE INDEX idx02 ON dbo.inventory (color, price)
This is because fewer cars will satisfy the price point than the color choice, giving the query engine far less data to analyze.
I've been known to have two very similar indexes differing only in the field order to speed queries (firstname, lastname) in one and (lastname, firstname) in the other.
A: Assuming MySQL here, use EXPLAIN to find out what is going on with the query, make sure that the indexes are being used as efficiently as possible and try to eliminate file sorts. High Performance MySQL: Optimization, Backups, Replication, and More is a great book on this topic as is MySQL Performance Blog.
A: A trick I recently learned is that SQL Server can update local variables as well as fields, in an update statement.
UPDATE table
SET @variable = column = @variable + otherColumn
Or the more readable version:
UPDATE table
SET
@variable = @variable + otherColumn,
column = @variable
I've used this to replace complicated cursors/joins when implementing recursive calculations, and also gained a lot in performance.
Here's details and example code that made fantastic improvements in performance:
Link
A: @Terrapin there are a few other differences between isnull and coalesce that are worth mentioning (besides ANSI compliance, which is a big one for me).
Coalesce vs. IsNull
A: Sometimes in SQL Server if you use an OR in a where clause it will really jack with performance. Instead of using the OR just do two selects and union them together. You get the same results at 1000x the speed.
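A sketch of the idea (illustrative table and column names); instead of:
SELECT * FROM Orders WHERE CustomerId = @CustomerId OR SalesRepId = @SalesRepId
try:
SELECT * FROM Orders WHERE CustomerId = @CustomerId
UNION
SELECT * FROM Orders WHERE SalesRepId = @SalesRepId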
A: Look at the where clause - verify use of indexes / verify nothing silly is being done
where SomeComplicatedFunctionOf(table.Column) = @param --silly
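A concrete illustration of the same point (example predicate, not from the original answer) - wrapping the column in a function stops the index being used, so compare against the bare column instead:
-- not sargable: an index on OrderDate can't be seeked
WHERE YEAR(OrderDate) = 2008
-- sargable equivalent
WHERE OrderDate >= '20080101' AND OrderDate < '20090101'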
A: I'll generally start with the joins - I'll knock each one of them out of the query one at a time and re-run the query to get an idea if there's a particular join I'm having a problem with.
A: On all of my temp tables, I like to add unique constraints (where appropriate) to make indexes, and primary keys (almost always).
declare @temp table(
RowID int not null identity(1,1) primary key,
SomeUniqueColumn varchar(25) not null,
SomeNotUniqueColumn varchar(50) null,
unique(SomeUniqueColumn)
)
A: @DavidM
Assuming MySQL here, use EXPLAIN to find out what is going on with the query, make sure that the indexes are being used as efficiently as possible...
In SQL Server, execution plan gets you the same thing - it tells you what indexes are being hit, etc.
A: Not necessarily a SQL performance trick per se, but definitely related:
A good idea would be to use memcached where possible, as it is much faster to fetch precompiled data directly from memory than to get it from the database. There's also a flavour of MySQL that has memcached built in (third party).
A: Make sure your index lengths are as small as possible. This allows the DB to read more keys at a time from the file system, thus speeding up your joins. I assume this works with all DB's, but I know it's a specific recommendation for MySQL.
A: I've made it a habit to always use bind variables. It's possible bind variables won't help if the RDBMS doesn't cache SQL statements. But if you don't use bind variables the RDBMS doesn't have a chance to reuse query execution plans and parsed SQL statements. The savings can be enormous: http://www.akadia.com/services/ora_bind_variables.html. I work mostly with Oracle, but Microsoft SQL Server works pretty much the same way.
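In SQL Server terms this simply means parameterising the query instead of concatenating literal values into it - a minimal sketch (object names are illustrative):
EXEC sp_executesql
    N'SELECT OrderId, Total FROM dbo.Orders WHERE CustomerId = @CustomerId',
    N'@CustomerId int',
    @CustomerId = 42;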
In my experience, if you don't know whether or not you are using bind variables, you probably aren't. If your application language doesn't support them, find one that does. Sometimes you can fix query A by using bind variables for query B.
After that, I talk to our DBA to find out what's causing the RDBMS the most pain. Note that you shouldn't ask "Why is this query slow?" That's like asking your doctor to take out your appendix. Sure, your query might be the problem, but it's just as likely that something else is going wrong. As developers, we tend to think in terms of lines of code. If a line is slow, fix that line. But a RDBMS is a really complicated system, and your slow query might be the symptom of a much larger problem.
Way too many SQL tuning tips are cargo cult idols. Most of the time the problem is unrelated or minimally related to the syntax you use, so it's normally best to use the cleanest syntax you can. Then you can start looking at ways to tune the database (not the query). Only tweak the syntax when that fails.
Like any performance tuning, always collect meaningful statistics. Don't use wallclock time unless it's the user experience you are tuning. Instead look at things like CPU time, rows fetched and blocks read off of disk. Too often people optimize for the wrong thing.
A: First step:
Look at the Query Execution Plan!
TableScan -> bad
NestedLoop -> meh warning
TableScan behind a NestedLoop -> DOOM!
SET STATISTICS IO ON
SET STATISTICS TIME ON
A: Running the query using WITH (NoLock) is pretty much standard operation in my place. Anyone caught running queries on the tens-of-gigabytes tables without it is taken out and shot.
A: Convert NOT IN queries to LEFT OUTER JOINS if possible. For example if you want to find all rows in Table1 that are unused by a foreign key in Table2 you could do this:
SELECT *
FROM Table1
WHERE Table1.ID NOT IN (
SELECT Table1ID
FROM Table2)
But you get much better performance with this:
SELECT Table1.*
FROM Table1
LEFT OUTER JOIN Table2 ON Table1.ID = Table2.Table1ID
WHERE Table2.ID is null
A: *
*Have a pretty good idea of the optimal path of running the query in your head.
*Check the query plan - always.
*Turn on STATS, so that you can examine both IO and CPU performance. Focus on driving those numbers down, not necessarily the query time (as that can be influenced by other activity, cache, etc.).
*Look for large numbers of rows coming into an operator, but small numbers coming out. Usually, an index would help by limiting the number of rows coming in (which saves disk reads).
*Focus on the largest cost subtree first. Changing that subtree can often change the entire query plan.
*Common problems I've seen are:
*
*If there's a lot of joins, sometimes Sql Server will choose to expand the joins, and then apply WHERE clauses. You can usually fix this by moving the WHERE conditions into the JOIN clause, or a derived table with the conditions inlined. Views can cause the same problems.
*Suboptimal joins (LOOP vs HASH vs MERGE). My rule of thumb is to use a LOOP join when the top row has very few rows compared to the bottom, a MERGE when the sets are roughly equal and ordered, and a HASH for everything else. Adding a join hint will let you test your theory.
*Parameter sniffing. If you ran the stored proc with unrealistic values at first (say, for testing), then the cached query plan may be suboptimal for your production values. Running again WITH RECOMPILE should verify this. For some stored procs, especially those that deal with varying-sized ranges (say, all dates between today and yesterday - which would entail an INDEX SEEK - or all dates between last year and this year - which would be better off with an INDEX SCAN), you may have to run it WITH RECOMPILE every time (see the sketch after this list).
*Bad indentation...Okay, so Sql Server doesn't have an issue with this - but I sure find it impossible to understand a query until I've fixed up the formatting.
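A sketch of the recompile options mentioned in the parameter-sniffing bullet above (hypothetical procedure and object names):
-- recompile the plan on every execution of the procedure
CREATE PROCEDURE dbo.GetOrdersByDateRange
    @From DATETIME,
    @To   DATETIME
WITH RECOMPILE
AS
    SELECT OrderId, Total
    FROM dbo.Orders
    WHERE OrderDate BETWEEN @From AND @To;
GO
-- or force a fresh plan for a single call
EXEC dbo.GetOrdersByDateRange @From = '20080101', @To = '20081231' WITH RECOMPILE;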
A: Slightly off topic but if you have control over these issues...
High level and High Impact.
*
*For high-IO environments, make sure your disks are in RAID 10 or RAID 0+1, or some nested implementation of RAID 1 and RAID 0.
*Don't use drives slower than 15k RPM.
*Make sure your disks are used only for your database - i.e. no logging, no OS.
*Turn off auto-grow or similar features. Let the database use all the storage that is anticipated, not necessarily what is currently being used.
*Design your schema and indexes for the types of queries you run.
*If it's a log-type table (insert only) and must be in the DB, don't index it.
*If you're doing a lot of reporting (complex selects with many joins) then you should look at creating a data warehouse with a star or snowflake schema.
*Don't be afraid of replicating data in exchange for performance!
A: Here is the handy-dandy list of things I always give to someone asking me about optimisation.
We mainly use Sybase, but most of the advice will apply across the board.
SQL Server, for example, comes with a host of performance monitoring / tuning bits, but if you don't have anything like that (and maybe even if you do) then I would consider the following...
99% of problems I have seen are caused by putting too many tables in a join. The fix for this is to do half the join (with some of the tables) and cache the results in a temporary table. Then do the rest of the query joining on that temporary table.
Query Optimisation Checklist
*
*Run UPDATE STATISTICS on the underlying tables
*
*Many systems run this as a scheduled weekly job
*Delete records from underlying tables (possibly archive the deleted records)
*
*Consider doing this automatically once a day or once a week.
*Rebuild Indexes
*Rebuild Tables (bcp data out/in)
*Dump / Reload the database (drastic, but might fix corruption)
*Build new, more appropriate index
*Run DBCC to see if there is possible corruption in the database
*Locks / Deadlocks
*
*Ensure no other processes running in database
*
*Especially DBCC
*Are you using row or page level locking?
*Lock the tables exclusively before starting the query
*Check that all processes are accessing tables in the same order
*Are indices being used appropriately?
*
*Joins will only use index if both expressions are exactly the same data type
*Index will only be used if the first field(s) on the index are matched in the query
*Are clustered indices used where appropriate?
*
*range data
*WHERE field between value1 and value2
*Small Joins are Nice Joins
*
*By default the optimiser will only consider the tables 4 at a time.
*This means that in joins with more than 4 tables, it has a good chance of choosing a non-optimal query plan
*Break up the Join
*
*Can you break up the join?
*Pre-select foreign keys into a temporary table
*Do half the join and put results in a temporary table
*Are you using the right kind of temporary table?
*
*#temp tables may perform much better than @table variables with large volumes (thousands of rows).
*Maintain Summary Tables
*
*Build with triggers on the underlying tables
*Build daily / hourly / etc.
*Build ad-hoc
*Build incrementally or teardown / rebuild
*See what the query plan is with SET SHOWPLAN ON
*See what's actually happening with SET STATS IO ON
*Force an index using the pragma: (index: myindex)
*Force the table order using SET FORCEPLAN ON
*Parameter Sniffing:
*
*Break Stored Procedure into 2
*call proc2 from proc1
*allows optimiser to choose index in proc2 if @parameter has been changed by proc1
*Can you improve your hardware?
*What time are you running? Is there a quieter time?
*Is Replication Server (or other non-stop process) running? Can you suspend it? Run it eg. hourly?
A: Index the table(s) by the column(s) you filter by.
A: *
*Prefix all tables with dbo. to prevent recompilations.
*View query plans and hunt for table/index scans.
*In 2005, scour the management views for missing indexes.
A: I like to use
isnull(SomeColThatMayBeNull, '')
Over
coalesce(SomeColThatMayBeNull, '')
When I don't need the multiple argument support that coalesce gives you.
http://blog.falafel.com/2006/04/05/SQLServerArcanaISNULLVsCOALESCE.aspx
A: I look out for:
*
*Unroll any CURSOR loops and convert into set based UPDATE / INSERT statements.
*Look out for any application code that:
*
*Calls an SP that returns a large set of records,
*Then in the application, goes through each record and calls an SP with parameters to update records.
*Convert this into a SP that does all the work in one transaction.
*Any SP that does lots of string manipulation. It's evidence that the data is not structured correctly / normalised.
*Any SP's that re-invent the wheel.
*Any SP's that I can't understand what it's trying to do within a minute!
A: SET NOCOUNT ON
Usually the first line inside my stored procedures, unless I actually need to use @@ROWCOUNT.
A: Remove cursors wherever they are not necessary.
A: In SQL Server, use the nolock directive. It allows the select command to complete without having to wait for other transactions to finish.
SELECT * FROM Orders (nolock) where UserName = 'momma'
A: Remove function calls in Sprocs where a lot of rows will call the function.
My colleague used function calls (getting lastlogindate from userid as example) to return very wide recordsets.
Tasked with optimisation, I replaced the function calls in the sproc with the function's code: I got many sprocs' running time down from > 20 seconds to < 1.
A: Don't prefix Stored Procedure names with "sp_" because system procedures all start with "sp_", and SQL Server will have to search harder to find your procedure when it gets called.
A: Dirty reads -
set transaction isolation level read uncommitted
Prevents dead locks where transactional integrity isn't absolutely necessary (which is usually true)
A: I always go to SQL Profiler (if it's a stored procedure with a lot of nesting levels) or the query execution planner (if it's a few SQL statements with no nesting) first. 90% of the time you can find the problem immediately with one of these two tools.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18783",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "129"
} |
Q: Asp.net MVC User Control ViewData When a controller renders a view based on a model you can get the properties from the ViewData collection using the indexer (ie. ViewData["Property"]). However, I have a shared user control that I tried to call using the following:
return View("Message", new { DisplayMessage = "This is a test" });
and on my Message control I had this:
<%= ViewData["DisplayMessage"] %>
I would think this would render the DisplayMessage correctly, however, null is being returned. After a heavy dose of tinkering around, I finally created a "MessageData" class in order to strongly type my user control:
public class MessageControl : ViewUserControl<MessageData>
and now this call works:
return View("Message", new MessageData() { DisplayMessage = "This is a test" });
and can be displayed like this:
<%= ViewData.Model.DisplayMessage %>
Why wouldn't the DisplayMessage property be added to the ViewData (ie. ViewData["DisplayMessage"]) collection without strong typing the user control? Is this by design? Wouldn't it make sense that ViewData would contain a key for "DisplayMessage"?
A: The method
ViewData.Eval("DisplayMessage")
should work for you.
A: Of course after I create this question I immediately find the answer after a few more searches on Google
http://forums.asp.net/t/1197059.aspx
Apparently this happens because of the wrapper class. Even so, it seems like any property passed should get added to the ViewData collection by default.
I really need to stop answering my own questions :(
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18787",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Is UML practical? In college I've had numerous design and UML oriented courses, and I recognize that UML can be used to benefit a software project, especially use-case mapping, but is it really practical? I've done a few co-op work terms, and it appears that UML is not used heavily in the industry. Is it worth the time during a project to create UML diagrams? Also, I find that class diagrams are generally not useful, because it's just faster to look at the header file for a class. Specifically which diagrams are the most useful?
Edit: My experience is limited to small, under 10 developer projects.
Edit: Many good answers, and though not the most verbose, I believe the one selected is the most balanced.
A:
Using UML is like looking at your feet as you walk. It's making conscious and explicit something that you can usually do unconsciously. Beginners need to think carefully about what they're doing, but a professional programmer already knows what they're doing. Most of the time, writing the code itself is quicker and more effective than writing about the code, because their programming intuition is tuned to the task.
It's not just about what you're doing though. What about the new hire who comes in six months from now and needs to come up to speed on the code? What about five years from now when everyone currently working on the project is gone?
It's incredibly helpful to have some basic up to date documentation available for anyone who joins the project later. I don't advocate full blown UML diagrams with method names and parameters (WAY too difficult to maintain), but I do think that a basic diagram of the components in the system with their relationships and basic behavior is invaluable. Unless the design of the system changes drastically, this information shouldn't change a lot even as the implementation is tweaked.
I've found that the key to documentation is moderation. No one is going to read 50 pages of full blown UML diagrams with design documentation without falling asleep a few pages in. On the other hand, most people would love to get 5-10 pages of simple class diagrams with some basic descriptions of how the system is put together.
The other case where I've found UML to be useful is for when a senior developer is responsible for designing a component but then hands the design to a junior developer to implement.
A: I am coming to this topic a little late and will just try to clarify a couple of minor points. Asking whether UML is useful is far too broad. Most people seemed to answer the question from the typical/popular UML-as-a-drawing/communication-tool perspective. Note: Martin Fowler and other UML book authors feel UML is best used for communication only. However, there are many other uses for UML. Above all, UML is a modeling language whose notation and diagrams map to logical concepts. Here are some uses for UML:
*
*Communication
*Standardized Design/Solution documentation
*DSL (Domain Specific Language) Definition
*Model Definition (UML Profiles)
*Pattern/Asset Usage
*Code Generation
*Model to Model transformations
Given the uses list above, the posting by Pascal is not sufficient, as it only speaks to diagram creation. A project could benefit from UML if any of the above are critical success factors or are problem areas that need a standardized solution.
The discussion should be expanded from how UML can be overkill or applied to small projects to when UML makes sense or will actually improve the product/solution, as that is when UML should be used. There are situations where UML for a single developer makes sense as well, such as pattern application or code generation.
A: UML has worked for me for years. When I started out I read Fowler's UML Distilled where he says "do enough modelling/architecture/etc.". Just use what you need!
A: In a sufficiently complex system there are some places where some UML is considered useful.
The useful diagrams for a system, vary by applicability.
But the most widely used ones are:
*
*Class Diagrams
*State Diagrams
*Activity Diagrams
*Sequence Diagrams
There are many enterprises who swear by them and many who outright reject them as an utter waste of time and effort.
It's best not to go overboard and think what's best for the project you are on and pick the stuff that is applicable and makes sense.
A: From a QA Engineer's perspective, UML diagrams point out potential flaws in logic and thought. Makes my job easier :)
A: Though this discussion has long been inactive, I have a couple of -to my mind important- points to add.
Buggy code is one thing. Left to drift downstream, design mistakes can get very bloated and ugly indeed. UML, however, is self-validating. By that I mean that in allowing you to explore your models in multiple, mathematically closed and mutually-checking dimensions, it engenders robust design.
UML has another important aspect: it "talks" directly to our strongest capability, that of visualisation. Had, for example, ITIL V3 (at heart simple enough) been communicated in the form of UML diagrams, it could have been published on a few dozen A3 foldouts. Instead, it came out in several tomes of truly biblical proportions, spawning an entire industry, breathtaking costs and widespread catatonic shock.
A: Using UML is like looking at your feet as you walk. It's making conscious and explicit something that you can usually do unconsciously. Beginners need to think carefully about what they're doing, but a professional programmer already knows what they're doing. Most of the time, writing the code itself is quicker and more effective than writing about the code, because their programming intuition is tuned to the task.
The exception is when you find yourself in the woods at night without a torch and it's started to rain - then you need to look at your feet to avoid falling down. There are times when the task you've taken on is more complicated than your intuition can handle, and you need to slow down and state the structure of your program explicitly. Then UML is one of many tools you can use. Others include pseudocode, high-level architecture diagrams and strange metaphors.
A: I believe there may be a way to utilize Cockburn-style UML fish, kite, and sea-level use cases as described by Fowler in his book "UML Distilled." My idea was to employ Cockburn use cases as an aid for code readability.
So I did an experiment and there is a post here about it with the Tag "UML" or "FOWLER." It was a simple idea for c#. Find a way to embed Cockburn use cases into the namespaces of programming constructs (such as the class and inner class namespaces or by making use of the namespaces for enumerations). I believe this could be a viable and simple technique but still have questions and need others to check it out. It could be good for simple programs that need a kind of pseudo-Domain Specific Language which can exist right in the midst of the c# code without any language extensions.
Please check out the post if you are interested. Go here.
A: I think the UML is useful, though I think the 2.0 spec has made what was once a clear specification somewhat bloated and cumbersome. I do agree with the addition of timing diagrams etc. since they filled a void...
Learning to use the UML effectively takes a bit of practice. The most important point is to communicate clearly, model when needed and model as a team. Whiteboards are the best tool that I've found. I have not seen any "digital whiteboard software" that has managed to capture the utility of an actual whiteboard.
That being said I do like the following UML tools:
*
*Violet - If it were any more simple it would be a piece of paper
*Altova UModel - Good tool for Java and C# Modeling
*MagicDraw - My favorite commercial tool for Modeling
*Poseidon - Decent tool with good bang for the buck
*StarUML - Best open source modeling tool
A: UML diagrams are useful for capturing and communicating requirements and ensuring that the system meets those requirements. They can be used iteratively and during various stages of planning, design, development, and testing.
From the topic: Using Models within the Development Process at http://msdn.microsoft.com/en-us/library/dd409423%28VS.100%29.aspx
A model can help you visualize the world in which your system works, clarify users' needs, define the
architecture of your system, analyze the code, and ensure that your code meets the requirements.
You might also want to read my response to the following post:
How to learn “good software design/architecture”? at https://stackoverflow.com/questions/268231/how-to-learn-good-software-design-architecture/2293489#2293489
A: Generic work-flow and DFDs can be very useful for complex processes. All other diagramming (ESPECIALLY UML) has, in my experience, without exception been a painful waste of time and effort.
A: I see sequence diagrams and activity diagrams used fairly often. I do a lot of work with "real-time" and embedded systems that interact with other systems, and sequence diagrams are very helpful in visualizing all the interactions.
I like to do use-case diagrams, but I haven't met too many people who think they are valuable.
I've often wondered whether Rational Rose is a good example of the kinds of applications you get from UML-model-based design. It's bloated, buggy, slow, ugly, ...
A: I found UML not really useful for very small projects, but really suitable for larger ones.
Essentially, it does not really matter what you use, you just have to keep two things in mind:
*
*You want some sort of architecture planning
*You want to be sure that everyone in the team is actually using the same technology for project planning
So UML is just that: a standard for how you plan your projects. If you hire new people, they are more likely to know any existing standard - be it UML, flowcharts, Nassi-Shneiderman, whatever - than your existing in-house stuff.
Using UML for a single developer and/or a simple software project seems overkill to me, but when working in a larger team, I would definitely want some standard for planning software.
A: UML is useful, yes indeed! The main uses I've made of it were:
*
*Brainstorming about the ways a piece of software should work. It makes it easy to communicate what you are thinking.
*Documenting the architecture of a system, its patterns and the main relationships of its classes. It helps when someone enters your team, when you're leaving and want to make sure your successor will understand it, and when you eventually forget what the hell that little class was meant for.
*Documenting any architectural pattern you use on all your systems, for the same reasons as the point above
I only disagree with Michael when he says that using UML for a single developer and/or a simple software project seems overkill to him. I've used it on my small personal projects, and having them documented using UML saved me a lot of time when I came back to them seven months later and had completely forgotten how I had built and put together all those classes.
A: One of the problems I have with UML is the understandability of the specification. When I try to really understand the semantics of a particular diagram I quickly get lost in the maze of meta-models and meta-meta-models. One of the selling points of UML is that it is less ambiguous than natural language. However, if two, or more, engineers interpret a diagram differently, it fails at the goal.
Also, I've tried asking specific questions about the super-structure document on several UML forums, and to members of the OMG itself, with little or no results. I don't think the UML community is mature enough yet to support itself.
A: Coming from a student, I find that UML has very little use. I find it ironic that PROGRAMMERS have yet to develop a program that will automatically generate the things that you have said are necessary. It would be extremely simple to design a feature into Visual Studio that could pull pieces of the data, seek out definitions, and produce answers sufficient so that anyone could look at it, great or small, and understand the program. This would also keep it up to date because it would take the information directly from the code to produce the information.
A: UML is used as soon as you represent a class with its fields and methods though it's just a kind of UML diagram.
The problem with UML is that the founders' book is too vague.
UML is just a language; it's not really a method.
As for me, I find the lack of UML schemas for open-source projects really annoying. Take something like WordPress: you just have a database schema, nothing else. You have to wander around the Codex API to try to get the big picture.
A: I'd have to disagree: UML is used all over the place - anywhere an IT project is being designed, UML will usually be there.
Now whether it is being used well is another matter.
As Stu said, I find both Use Cases (along with the use case descriptions) and activity diagrams to be the most helpful from a developer point of view.
Class diagrams can be very useful when trying to show relationships, as well as object attributes, such as persistence. When it comes to adding every single attribute or property they are usually overkill, especially as they often become out of date quickly once code is written.
One of the biggest problems with UML is the amount of work required to keep it up to date once code is being generated, as there are few tools that can re-engineer UML from code, and few still that do it well.
A: I will qualify my answer by mentioning that I don't have experience in large (IBM-like) corporate development environments.
The way I view UML and the Rational Unified Process is that it's more TALKING about what you're going to do than actually DOING what you're going to do.
(In other words it's largely a waste of time)
A: Throw-away only, in my opinion. UML is a great tool for communicating ideas; the only issue is when you store and maintain it, because you are essentially creating two copies of the same information, and this is where it usually blows up.
After the initial round of implementation most of the UML should be generated from the source code else it will go out of date very quickly or require a lot of time (with manual errors) to keep up to date.
A: I co-taught a senior-level development project course my last two semesters in school. The project was intended to be used in a production environment with local non-profits as paying clients. We had to be certain that code did what we expected it to and that the students were capturing all the data necessary to meet the clients' needs.
Class time was limited, as was my time outside of the classroom. As such, we had to perform code reviews at every class meeting, but with 25 students enrolled individual review time was very short. The tool we found most valuable in these review sessions were ERD's, class diagrams and sequence diagrams. ERD's and class diagrams were done only in Visual Studio, so the time required to create them was trivial for the students.
The diagrams communicated a great deal of information very quickly. By having a quick overview of the students' designs, we could quickly isolate problem areas in their code and perform a more detailed review on the spot.
Without using diagrams, we would have had to take the time to go one by one through the students' code files looking for problems.
A: UML has its place. It becomes increasingly important as the size of the project grows. If you have a long running project, then it is best to document everything in UML.
A: UML seems to be good for large projects with large teams of people. However, I've worked in small teams where communication is better.
Using UML-esque diagrams is good though, especially in the planning stage. I tend to think in code, so I find writing large specs hard. I prefer to write down the inputs and outputs and leave the developers to design the bit in the middle.
A: I believe UML is useful just for the fact that it gets people to think about the relationships between their classes. It is a good starting point to start thinking about such relationships, but it is definitely not a solution for everybody.
My belief is that the use of UML is subjective to the situation in which the development team is working.
A: In my experience:
The ability to create and communicate meaningful code diagrams is a necessary skill for any software engineer who is developing new code, or attempting to understand existing code.
Knowing the specifics of UML - when to use a dashed line, or a circle endpoint - is not quite as necessary, but is still good to have.
A: UML is useful in two ways:
*
*Technical side: a lot of people (managers and some functional analysts) think that UML is a luxury feature because "the code is the documentation": you start coding, then you debug and fix. Keeping the UML diagrams in sync with the code and the analysis forces you to understand the customer's requests well;
*Management side: the UML diagrams are a mirror of the customer's requirements, and the customer is often inaccurate: if you code without UML, you might find a bug in the requirements only after many hours of work. The UML diagrams allow you to find the potentially controversial points and resolve them before coding => they help your planning.
Generally, projects without UML diagrams have either a superficial analysis or a small scope.
If you're in the LinkedIn group SYSTEMS ENGINEERS, see my old discussion.
A: No it isn't. Code is the best form of communication. Everything else is for people who got into the wrong industry and decided to change it to suit their brain.
Interfaces are vastly superior to class diagrams. Use case diagrams mash user requirements into a form that destroys them and tells you how to design the product. Data flow diagrams and ER diagrams complicate the process of creating a database. Sometimes you might want to draw diagrams to communicate your point, but there is no reason they have to be UML diagrams or conform to any standard other than that the other person can understand them.
In the end, these are all forms of documentation for large, useless, bureaucratic "software" companies that make terrible products laden with XML. That is, if they ever finish documenting. Really, most of the employees don't care about their job and are just waiting for the person above them to die so that they can get a promotion.
A: UML is just one of methods for communication within people.
Whiteboard is better.
A: UML definitely has its place in the industry. Imagine you are building software for Boeing aircraft or some other complex system. UML and RUP would be a great help here.
A: In the end, UML only exists because of RUP. Do we need UML or any of its related stuff to use Java/.NET? The practical answer is that they have their own documentation (javadoc etc.) which is sufficient and lets us get our job done!
UML, no thanks.
A: UML is definitely helpful, just as junit is essential. It all depends how you sell the idea. Your program will work without UML just as it would work without unit tests. Having said that, you should only do UML as long as it is connected to your code, i.e. when you update the UML diagrams it updates your code, or when you update your code it auto-generates the UML. Don't do it just for the sake of doing it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18803",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "126"
} |
Q: Why doesn't **find** find anything? I'm looking for shell scripts files installed on my system, but find doesn't work:
$ find /usr -name *.sh
But I know there are a ton of scripts out there. For instance:
$ ls /usr/local/lib/*.sh
/usr/local/lib/tclConfig.sh
/usr/local/lib/tkConfig.sh
Why doesn't find work?
A: For finding files on your disks, learn to use locate instead; it is instantaneous
(it looks in an index that is rebuilt daily).
Your example would be:
locate '/usr*.sh'
A: Try quoting the wildcard:
$ find /usr -name \*.sh
or:
$ find /usr -name '*.sh'
If you happen to have a file that matches *.sh in the current working directory, the wildcard will be expanded before find sees it. If you happen to have a file named tkConfig.sh in your working directory, the find command would expand to:
$ find /usr -name tkConfig.sh
which would only find files named tkConfig.sh. If you had more than one file that matches *.sh, you'd get a syntax error from find:
$ cd /usr/local/lib
$ find /usr -name *.sh
find: bad option tkConfig.sh
find: path-list predicate-list
Again, the reason is that the wildcard expands to both files:
$ find /usr -name tclConfig.sh tkConfig.sh
Quoting the wildcard prevents it from being prematurely expanded.
Another possibility is that /usr or one of its subdirectories is a symlink. find doesn't normally follow links, so you might need the -follow option:
$ find /usr -follow -name '*.sh'
A: On some systems (Solaris, for example), there's no default action, so you need to add the -print command.
find /usr -name '*.foo' -print
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18836",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: Batch code indenters and beautifiers Does anyone here know of good batch file code indenters or beautifiers?
Specifically for PHP, JS and SGML-languages.
Preferably with options as to style.
A: The following page has code on it to tidy Javascript (written in javascript as well):
http://www.howtocreate.co.uk/tutorials/jsexamples/JSTidy.html
There are various ways to tidy SGML-based files (e.g. XML) - HTMLTidy will often do the trick, and there are various 'pretty print' implementations in various languages out there.
And finally a link to a web site with PHP code for pretty printing PHP: http://tobyinkster.co.uk/blog/2007/07/17/php-pretty-printer/
A: For HTML/XML HTML Tidy is the best option:
http://tidy.sourceforge.net/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18858",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: What did I do wrong here? [Javascript Regex] So I am writing a registration form and I need the display name to be only numbers, letters and underscores.
Have a look at my code and tell me what I'm doing wrong.
<form method="post" action="/" onsubmit="return check_form()">
<input type="text" id="display-name" name="display-name" maxlength="255" />
<input type="submit" />
</form>
<script type="text/javascript">
<!--
var name_regex = /^([a-zA-Z0-9_])+/
function check_form()
{
if (!name_regex.test(document.forms[0].elements[0].value))
{
document.forms[0].elements[0].focus()
alert("Your display name may only contain letters, numbers and underscores")
return false
}
}
-->
</script>
It's obviously been trimmed down to not include anything not related to the problem but even this snippet doesn't work.
A: My regexp would go along the lines of: /^[a-zA-Z0-9_]+$/
edit: I think it's the lack of a line end $ that makes it fail.
A: Your regex
/^([a-zA-Z0-9_])+/
Looks for
*
*Start of string(check), followed by
*1 or more letters, numbers, or underscore (check)
And then whatever comes after it doesn't matter. This regex will match anything at all so long as it begins with a letter, number, or underscore
If you put a $ at the end, then it will work - $ matches 'end of string', so the only way it can match is if there are only numbers, letters, and underscores between the start and end of the string.
/^([a-zA-Z0-9_])+$/
Secondly, I'd suggest using document.getElementById('display-name').value instead of document.forms as it won't break if you rearrange the HTML, and is more 'the commonly accepted standard of what to do'
A: By 'not working' I take it you mean it is letting invalid entries through (rather than not letting valid entries through).
As @Annan has said, this would probably be due to the lack of the $ character at the end of the expression, as currently it only requires a single valid character at the start of the value, and the rest can be anything.
A: What does "doesn't work" mean? Does it reject valid display names? Does it accept invalid display names? Which ones?
Per @Annan, leaving off the $ would make the regexp accept invalid display names like abc123!@#.
If the code is rejecting valid display names, it may be because the parentheses are being matched literally instead of denoting a group (I'm not sure of the quoting convention in JS).
A: I tested your script and meddled with the javascript. This seem to work:
<form method="post" action="/" onsubmit="return check_form()">
<input type="text" id="display-name" name="display-name" maxlength="255" />
<input type="submit" />
</form>
<script type="text/javascript">
<!--
var name_regex = /^([a-zA-Z0-9_])+$/;
function check_form()
{
if (!name_regex.test(document.forms[0].elements[0].value))
{
document.forms[0].elements[0].focus();
alert("Your display name may only contain letters, numbers and underscores");
return false;
}
}
-->
</script>
A: Sorry guys I should have been more specific. Whenever I added spaces the values were still being accepted. The dollar sign $ did the trick!
A: A simpler way to write it still would be
var name_regex = /^([a-z0-9_])+$/i;
A: Even simpler:
var name_regex = /^\w+$/;
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18861",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How to install a plugin for QtWebKit I am running a Qt 4.5 commercial snapshot and want to use a plugin that I downloaded (it's a .so file) in my QWebView. Is there a specific location where I need to place this file? Can I grab it using the QWebPluginFactory?
A: I am assuming the plugin here is the NPAPI plugin (e.g. Flash). Under X11, QtWebKit searches several common directories for the plugin. For the complete list, see the documentation on Netscape plugin support.
In addition to that, you must enable plugin support via QWebSettings. See the documentation for the QWebSettings::PluginsEnabled web attribute; it can be set either globally or for your particular QWebView only.
A: If you're a commercial client you should be demanding the support your money has earned, directly from the Trolltech (Nokia) guys.
A: Have you tried putting in the standard library directories? It should be picked up by the linker if it's in one of those directories.
For example:
/lib/
/usr/lib/
/usr/share/lib/
/usr/local/lib/
A: Have you tried looking around in /usr/lib/qt4/plugins/ or somewhere similar yet? I suppose that path will probably be where you have your 4.5 snapshot stuff compiled, but it should have options for putting in plugins for various things.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18869",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: SQL 2008 Dialect Support for NHibernate Is anyone working on or know if there exists a SQL 2k8 Dialect for NHibernate?
A: This was asked on the NHibernate Google Group recently - apparently the SQL 2005 dialect should work against SQL 2008.
Definitive location of the dialects - source control is here, binary downloads are here.
A: The MsSql2008Dialect is available here.
Fabio Maulo has just released Beta1 of NHibernate 2.1, which should come with the MsSql2008Dialect and support the new DateTime datatypes, but I didn't see anything regarding the other datatypes. Anyway, Beta1 seems really good; I am testing it and it looks good.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18889",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: How can I find the keys of an object? I know in JavaScript, objects double as hashes, but I have been unable to find a built-in function to get the keys:
var h = {a:'b', c:'d'};
I want something like
var k = h.keys() ; // k = ['a', 'c'];
It is simple to write a function myself to iterate over the items and add the keys to an array that I return, but is there a standard cleaner way to do that?
I keep feeling it must be a simple built in function that I missed but I can't find it!
A: For production code requiring a large compatibility with client browsers I still suggest Ivan Nevostruev's answer with shim to ensure Object.keys in older browsers. However, it's possible to get the exact functionality requested using ECMA's new defineProperty feature.
As of ECMAScript 5 - Object.defineProperty
As of ECMA5 you can use Object.defineProperty() to define non-enumerable properties. Current compatibility still leaves much to be desired, but this should eventually become usable in all browsers. (Specifically note the current incompatibility with IE8!)
Object.defineProperty(Object.prototype, 'keys', {
value: function keys() {
var keys = [];
for(var i in this) if (this.hasOwnProperty(i)) {
keys.push(i);
}
return keys;
},
enumerable: false
});
var o = {
'a': 1,
'b': 2
}
for (var k in o) {
console.log(k, o[k])
}
console.log(o.keys())
# OUTPUT
# > a 1
# > b 2
# > ["a", "b"]
However, since ECMA5 already added Object.keys you might as well use:
Object.defineProperty(Object.prototype, 'keys', {
value: function keys() {
return Object.keys(this);
},
enumerable: false
});
Original answer
Object.prototype.keys = function ()
{
var keys = [];
for(var i in this) if (this.hasOwnProperty(i))
{
keys.push(i);
}
return keys;
}
Edit: Since this answer has been around for a while I'll leave the above untouched. Anyone reading this should also read Ivan Nevostruev's answer below.
There's no way of making prototype functions non-enumerable which leads to them always turning up in for-in loops that don't use hasOwnProperty. I still think this answer would be ideal if extending the prototype of Object wasn't so messy.
A: Using jQuery, you can get the keys like this:
var bobject = {primary:"red", bg:"maroon", hilite:"green"};
var keys = [];
$.each(bobject, function(key, val){ keys.push(key); });
console.log(keys); // ["primary", "bg", "hilite"]
Or:
var bobject = {primary:"red", bg:"maroon", hilite:"green"};
$.map(bobject, function(v, k){return k;});
Thanks to @pimlottc.
A: I believe you can loop through the properties of the object using for/in, so you could do something like this:
function getKeys(h) {
var keys = new Array();
for (var key in h)
keys.push(key);
return keys;
}
A: You can use Object.keys:
Object.keys(h)
A: I wanted to use AnnanFay's answer:
Object.prototype.keys = function () ...
However, when using it in conjunction with the Google Maps API v3, Google Maps is non-functional.
However,
for (var key in h) ...
works well.
A: You could use Underscore.js, which is a JavaScript utility library.
_.keys({one : 1, two : 2, three : 3});
// => ["one", "two", "three"]
A: There is a function in modern JavaScript (ECMAScript 5) called Object.keys that performs this operation:
var obj = { "a" : 1, "b" : 2, "c" : 3};
alert(Object.keys(obj)); // will output ["a", "b", "c"]
Compatibility details can be found here.
On the Mozilla site there is also a snippet for backward compatibility:
if(!Object.keys) Object.keys = function(o){
if (o !== Object(o))
throw new TypeError('Object.keys called on non-object');
var ret=[],p;
for(p in o) if(Object.prototype.hasOwnProperty.call(o,p)) ret.push(p);
return ret;
}
A: This is the best you can do, as far as I know...
var keys = [];
for (var k in h)keys.push(k);
A: If you are trying to get the elements only, but not the functions then this code can help you:
this.getKeys = function() {
var keys = new Array();
for (var key in this) {
if (typeof this[key] !== 'function') {
keys.push(key);
}
}
return keys;
}
This is part of my implementation of the HashMap and I only want the keys. this is the hashmap object that contains the keys.
A: In Javascript we can use
Object.keys(h)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18912",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "203"
} |
Q: Can I access ASP.NET Development server in an intranet? I'm testing an ASP.NET site. When I execute it, it starts the ASP.NET Development Server and opens up a page.
Now I want to test it in the intranet I have.
*
*Can I use this server or I need to configure IIS in this machine?
*Do I need to configure something for it to work?
I've changed the localhost to the correct IP and I opened up the firewall.
Thanks
A: I realize this isn't a direct answer to your question, but an alternative to debugging using the ASP development server is to attach to the IIS process: How do I attach the debugger to IIS instead of ASP.NET Development Server?
A: Yes you can! And you don't need IIS
Just use a simple Java TCP tunnel. Download this Java app & just tunnel the traffic back.
http://jcbserver.uwaterloo.ca/cs436/software/tgui/tcpTunnelGUI.shtml
In command prompt, you'd then run the java app like this... Let's assume you want external access on port 80 and your standard debug environment runs on port 1088...
java -jar tunnel.jar 80 localhost 1088
(Also answered here: Accessing asp. net development server external to VM)
A: Nope, stupidly (IMHO) there's no way to get the default ASP.net development server to serve pages to IPs other than localhost. What I did was to use UltiDev Cassini which is very quick to set up and is basically a version of the ASP.net development server compiled by UltiDev, and it will serve pages to any IP address.
A: Just for those who don't want/can't set up IIS for whatever reason...
Use fiddler or similar on your host - set your browser on the client VM to use the proxy then just use localhost:dev_port as usual on the client.
All requests from the client go to the proxy on your dev machine, which routes to localhost on the dev machine, and the ASP.NET dev server thinks the request is from your dev machine!
A: You can recompile Cassini to get it to work - there's a fairly easy to remove check for localhost in there. Or, I'm pretty sure Ultidev's Cassini doesn't have this restriction. Both of these are easier to setup than IIS.
But, yeah, the builtin WebDev.WebServer doesn't work....Hmm, unless you run something like AnalogX's Proxy on your dev box and point it to the WebDev port. That should work (though I haven't tried it, it should take < 2 mins to setup).
A: You can use Cassini to expose your web apps externally. You just need to proxy the connection. I wrote a simple program to do this that you can run in another VS instance. Just change the port to match the port Cassini is using.
https://gist.github.com/945649
A: No, you can't. It's set up so it only works on localhost, and I couldn't find any workarounds to make it work.
But, here's what I've been doing - I created the website on a specific port in IIS and opened that port up so it's visible on the network. I pointed that IIS website to my website's root folder (the one with web.config in it). Then I continued to use the ASP.NET Development server on that local machine while developing - both IIS and the ASP.NET Development Server can access the files at the same time (unless you're doing something wacky).
Let me know if there's a challenge with running IIS on your machine and I'll update my answer.
A: You can do port redirection using SOAP Toolkit 3.0
Once installed, go to My Programs > Microsoft Soap Toolkit 3 > Trace Utility
Once Trace Utility opened, go to File > New > Formatted Trace
In the dialog insert your ASP .NET Development Server port in Forward To Destination Port field.
It's only a workaround for testing purposes
A: I believe the built in ASP.NET server only works on localhost. You'll have to use IIS.
A: Compile all of your website in Debug mode, then create the website and publish it in IIS (make sure you can view it from another machine). Then attach the VS2010 debugger to the process with the AppPool of your website (the process is called w3wp.exe on IIS > v5 and aspnet_wp.exe on IIS < 5).
If you make some changes, just replace the package contents on the physical path of the website, and there you go again.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18918",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "47"
} |
Q: Can you set, or where is, the local document root? When opening a file from your hard drive into your browser, where is the document root? To illustrate, given the following HTML code, if the page is opened from the local machine (file:///) then where should the css file be for the browser to find it?
<link href="/temp/test.css" rel="stylesheet" type="text/css" />
A: It depends on what browser you use, but Internet Explorer, for example, would take you to the root directory of your hard drive (e.g. C:/), while browsers such as Firefox do nothing.
A: You can, but probably don't want to, set the document root on a per-file basis in the head of your file:
<base href="my-root">
A: On a Mac, the document root is what you see in the window that appears after you double click on the main hard drive icon on your desktop. The temp folder needs to be in there for a browser to find the CSS file as you have it written in your code.
Actually, you could also write the code like this:
<link href="file:///temp/test.css" rel="stylesheet" type="text/css" />
A: Eric, the document root is the folder in which your file is, wherever it may be.
A: As far as local, static html goes, unless you specify it, most browsers will take the location of the html file you are viewing as the root. So any css put in there can just be referenced by it's name only.
The lazy way to get the correct reference for your css file is to open it in your browser. Then just grab the url that you see there - something like: file:///blah/test.css and copy that into your stylesheet link on your html: <link href="file:///blah/test.css" rel="stylesheet" type="text/css">
Either that or you can just take the url for the html file and amend it to refer to the stylesheet.
Then your local page should load fine with the local stylesheet.
A: If you're interested in setting the document root, you might look at getting a web server installed on your machine, or, if you already have one (like Apache or IIS), storing your project-in-development in the web root of that server (htdocs in Apache, not entirely sure in IIS). If you'd rather leave your files where they are, you can set up virtual hosts and even map them to addresses that you can type into your browser (for example, I have a local.mrwarshaw.com address that resolves to the web root of my personal site's development folder).
If you're on Windows and don't want to mess around with setting up a server on your own, you could get a package like XAMPP or WAMPP, though bear in mind that those carry the extra weight of PHP and MySQL with them. Still, if you've got the space, they're a pretty easy drop-in development environment for your machine.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18920",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: How can I remove duplicate rows? I need to remove duplicate rows from a fairly large SQL Server table (i.e. 300,000+ rows).
The rows, of course, will not be perfect duplicates because of the existence of the RowID identity field.
MyTable
RowID int not null identity(1,1) primary key,
Col1 varchar(20) not null,
Col2 varchar(2048) not null,
Col3 tinyint not null
How can I do this?
A: *
*Create new blank table with the same structure
*Execute query like this
INSERT INTO tc_category1
SELECT *
FROM tc_category
GROUP BY category_id, application_id
HAVING count(*) > 1
*Then execute this query
INSERT INTO tc_category1
SELECT *
FROM tc_category
GROUP BY category_id, application_id
HAVING count(*) = 1
A: By using the query below we can delete duplicate records based on a single column or multiple columns. The query below deletes based on two columns. The table name is testing and the column names are empno and empname.
DELETE FROM testing WHERE empno not IN (SELECT empno FROM (SELECT empno, ROW_NUMBER() OVER (PARTITION BY empno ORDER BY empno)
AS [ItemNumber] FROM testing) a WHERE ItemNumber > 1)
or empname not in
(select empname from (select empname,row_number() over(PARTITION BY empno ORDER BY empno)
AS [ItemNumber] FROM testing) a WHERE ItemNumber > 1)
A: Another way of doing this:
DELETE A
FROM TABLE A,
TABLE B
WHERE A.COL1 = B.COL1
AND A.COL2 = B.COL2
AND A.UNIQUEFIELD > B.UNIQUEFIELD
A: Another possible way of doing this is
;
--Ensure that any immediately preceding statement is terminated with a semicolon above
WITH cte
AS (SELECT ROW_NUMBER() OVER (PARTITION BY Col1, Col2, Col3
ORDER BY ( SELECT 0)) RN
FROM #MyTable)
DELETE FROM cte
WHERE RN > 1;
I am using ORDER BY (SELECT 0) above as it is arbitrary which row to preserve in the event of a tie.
To preserve the latest one in RowID order for example you could use ORDER BY RowID DESC
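As a hedged sketch (reusing the same temp table as above), that RowID-preserving variant would look like this:
;WITH cte
     AS (SELECT ROW_NUMBER() OVER (PARTITION BY Col1, Col2, Col3
                                       ORDER BY RowID DESC) RN -- keep the newest row (highest RowID) per duplicate group
           FROM #MyTable)
DELETE FROM cte
WHERE  RN > 1;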
Execution Plans
The execution plan for this is often simpler and more efficient than that in the accepted answer as it does not require the self join.
This is not always the case however. One place where the GROUP BY solution might be preferred is situations where a hash aggregate would be chosen in preference to a stream aggregate.
The ROW_NUMBER solution will always give pretty much the same plan whereas the GROUP BY strategy is more flexible.
Factors which might favour the hash aggregate approach would be
*
*No useful index on the partitioning columns
*relatively fewer groups with relatively more duplicates in each group
In extreme versions of this second case (if there are very few groups with many duplicates in each) one could also consider simply inserting the rows to keep into a new table then TRUNCATE-ing the original and copying them back to minimise logging compared to deleting a very high proportion of the rows.
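A hedged sketch of that minimal-logging variant, assuming the question's MyTable, that nothing references it via foreign keys (TRUNCATE would fail otherwise), and using a staging table name that is just a placeholder:
-- copy one surviving row per duplicate group into a staging table
SELECT MIN(RowID) AS RowID, Col1, Col2, Col3
INTO   #KeepRows
FROM   MyTable
GROUP  BY Col1, Col2, Col3;

-- empty the original with minimal logging, then put the kept rows back
TRUNCATE TABLE MyTable;

SET IDENTITY_INSERT MyTable ON;
INSERT INTO MyTable (RowID, Col1, Col2, Col3)
SELECT RowID, Col1, Col2, Col3 FROM #KeepRows;
SET IDENTITY_INSERT MyTable OFF;

DROP TABLE #KeepRows;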
A: delete t1
from table t1, table t2
where t1.columnA = t2.columnA
and t1.rowid>t2.rowid
Postgres:
delete
from table t1
using table t2
where t1.columnA = t2.columnA
and t1.rowid > t2.rowid
A:
From the application level (unfortunately). I agree that the proper way to prevent duplication is at the database level through the use of a unique index, but in SQL Server 2005, an index is allowed to be only 900 bytes, and my varchar(2048) field blows that away.
I dunno how well it would perform, but I think you could write a trigger to enforce this, even if you couldn't do it directly with an index. Something like:
-- given a table stories(story_id int not null primary key, story varchar(max) not null)
CREATE TRIGGER prevent_plagiarism
ON stories
after INSERT, UPDATE
AS
DECLARE @cnt AS INT
SELECT @cnt = Count(*)
FROM stories
INNER JOIN inserted
ON ( stories.story = inserted.story
AND stories.story_id != inserted.story_id )
IF @cnt > 0
BEGIN
RAISERROR('plagiarism detected',16,1)
ROLLBACK TRANSACTION
END
Also, varchar(2048) sounds fishy to me (some things in life are 2048 bytes, but it's pretty uncommon); should it really not be varchar(max)?
A: I would mention this approach as well as it can be helpful, and works in all SQL servers:
Pretty often there are only one or two duplicates, and the Ids and count of duplicates are known. In this case:
SET ROWCOUNT 1 -- or set to number of rows to be deleted
delete from myTable where RowId = DuplicatedID
SET ROWCOUNT 0
A: DELETE
FROM
table_name T1
WHERE
rowid > (
SELECT
min(rowid)
FROM
table_name T2
WHERE
T1.column_name = T2.column_name
);
A: CREATE TABLE car(Id int identity(1,1), PersonId int, CarId int)
INSERT INTO car(PersonId,CarId)
VALUES(1,2),(1,3),(1,2),(2,4)
--SELECT * FROM car
;WITH CTE as(
SELECT ROW_NUMBER() over (PARTITION BY personid,carid order by personid,carid) as rn,Id,PersonID,CarId from car)
DELETE FROM car where Id in(SELECT Id FROM CTE WHERE rn>1)
A: DELETE
FROM MyTable
WHERE RowID NOT IN (
    SELECT MIN(RowID)
    FROM MyTable
    GROUP BY Col1, Col2, Col3
);
A: If you want to preview the rows you are about to remove and keep control over which of the duplicate rows to keep, see http://developer.azurewebsites.net/2014/09/better-sql-group-by-find-duplicate-data/
with MYCTE as (
SELECT ROW_NUMBER() OVER (
PARTITION BY DuplicateKey1
,DuplicateKey2 -- optional
ORDER BY CreatedAt -- the first row among duplicates will be kept, other rows will be removed
) RN
FROM MyTable
)
DELETE FROM MYCTE
WHERE RN > 1
A: DELETE LU
FROM (SELECT *,
Row_number()
OVER (
partition BY col1, col2, col3
ORDER BY rowid DESC) [Row]
FROM mytable) LU
WHERE [row] > 1
A: This will delete duplicate rows, except the first row
DELETE
FROM
Mytable
WHERE
RowID NOT IN (
SELECT
MIN(RowID)
FROM
Mytable
GROUP BY
Col1,
Col2,
Col3
)
Refer (http://www.codeproject.com/Articles/157977/Remove-Duplicate-Rows-from-a-Table-in-SQL-Server)
A: I would prefer a CTE for deleting duplicate rows from a SQL Server table.
I strongly recommend following this article: http://codaffection.com/sql-server-article/delete-duplicate-rows-in-sql-server/
by keeping original
WITH CTE AS
(
SELECT *,ROW_NUMBER() OVER (PARTITION BY col1,col2,col3 ORDER BY col1,col2,col3) AS RN
FROM MyTable
)
DELETE FROM CTE WHERE RN<>1
without keeping original
WITH CTE AS
(SELECT *,R=RANK() OVER (ORDER BY col1,col2,col3)
FROM MyTable)
DELETE CTE
WHERE R IN (SELECT R FROM CTE GROUP BY R HAVING COUNT(*)>1)
A: To Fetch Duplicate Rows:
SELECT
name, email, COUNT(*)
FROM
users
GROUP BY
name, email
HAVING COUNT(*) > 1
To Delete the Duplicate Rows:
DELETE users
WHERE rowid NOT IN
(SELECT MIN(rowid)
FROM users
GROUP BY name, email);
A: Quick and Dirty to delete exact duplicated rows (for small tables):
select distinct * into t2 from t1;
delete from t1;
insert into t1 select * from t2;
drop table t2;
A: I prefer the subquery\having count(*) > 1 solution to the inner join because I found it easier to read and it was very easy to turn into a SELECT statement to verify what would be deleted before you run it.
--DELETE FROM table1
--WHERE id IN (
SELECT MIN(id) FROM table1
GROUP BY col1, col2, col3
-- could add a WHERE clause here to further filter
HAVING count(*) > 1
--)
A: SELECT DISTINCT *
INTO tempdb.dbo.tmpTable
FROM myTable
TRUNCATE TABLE myTable
INSERT INTO myTable SELECT * FROM tempdb.dbo.tmpTable
DROP TABLE tempdb.dbo.tmpTable
A: There's a good article on removing duplicates on the Microsoft Support site. It's pretty conservative - they have you do everything in separate steps - but it should work well against large tables.
I've used self-joins to do this in the past, although it could probably be prettied up with a HAVING clause:
DELETE dupes
FROM MyTable dupes, MyTable fullTable
WHERE dupes.dupField = fullTable.dupField
AND dupes.secondDupField = fullTable.secondDupField
AND dupes.uniqueField > fullTable.uniqueField
A: I thought I'd share my solution since it works under special circumstances.
In my case the table with duplicate values did not have a foreign key (because the values were duplicated from another db).
begin transaction
-- create temp table with identical structure as source table
Select * Into #temp From tableName Where 1 = 2
-- insert distinct values into temp
insert into #temp
select distinct *
from tableName
-- delete from source
delete from tableName
-- insert into source from temp
insert into tableName
select *
from #temp
rollback transaction
-- if this works, change rollback to commit and execute again to keep you changes!!
PS: when working on things like this I always use a transaction, this not only ensures everything is executed as a whole, but also allows me to test without risking anything. But off course you should take a backup anyway just to be sure...
A: Using CTE. The idea is to join on one or more columns that form a duplicate record and then remove whichever you like:
;with cte as (
select
min(PrimaryKey) as PrimaryKey,
UniqueColumn1,
UniqueColumn2
from dbo.DuplicatesTable
group by
UniqueColumn1, UniqueColumn2
having count(*) > 1
)
delete d
from dbo.DuplicatesTable d
inner join cte on
d.PrimaryKey > cte.PrimaryKey and
d.UniqueColumn1 = cte.UniqueColumn1 and
d.UniqueColumn2 = cte.UniqueColumn2;
A: This query showed very good performance for me:
DELETE tbl
FROM
MyTable tbl
WHERE
EXISTS (
SELECT
*
FROM
MyTable tbl2
WHERE
tbl2.SameValue = tbl.SameValue
AND tbl.IdUniqueValue < tbl2.IdUniqueValue
)
it deleted 1M rows in little more than 30sec from a table of 2M (50% duplicates)
A: Yet another easy solution can be found at the link pasted here. This one is easy to grasp and seems to be effective for most similar problems. It is for SQL Server, but the concept used is more than acceptable.
Here are the relevant portions from the linked page:
Consider this data:
EMPLOYEE_ID ATTENDANCE_DATE
A001 2011-01-01
A001 2011-01-01
A002 2011-01-01
A002 2011-01-01
A002 2011-01-01
A003 2011-01-01
So how can we delete those duplicate data?
First, insert an identity column in that table by using the following code:
ALTER TABLE dbo.ATTENDANCE ADD AUTOID INT IDENTITY(1,1)
Use the following code to resolve it:
DELETE FROM dbo.ATTENDANCE WHERE AUTOID NOT IN (SELECT MIN(AUTOID)
FROM dbo.ATTENDANCE GROUP BY EMPLOYEE_ID,ATTENDANCE_DATE)
A: This is the easiest way to delete duplicate records
DELETE FROM tblemp WHERE id IN
(
SELECT MIN(id) FROM tblemp
GROUP BY title HAVING COUNT(id)>1
)
A: Use this
WITH tblTemp as
(
SELECT ROW_NUMBER() Over(PARTITION BY Name,Department ORDER BY Name)
As RowNumber,* FROM <table_name>
)
DELETE FROM tblTemp where RowNumber >1
A: Assuming no nulls, you GROUP BY the unique columns, and SELECT the MIN (or MAX) RowId as the row to keep. Then, just delete everything that didn't have a row id:
DELETE MyTable
FROM MyTable
LEFT OUTER JOIN (
SELECT MIN(RowId) as RowId, Col1, Col2, Col3
FROM MyTable
GROUP BY Col1, Col2, Col3
) as KeepRows ON
MyTable.RowId = KeepRows.RowId
WHERE
KeepRows.RowId IS NULL
In case you have a GUID instead of an integer, you can replace
MIN(RowId)
with
CONVERT(uniqueidentifier, MIN(CONVERT(char(36), MyGuidColumn)))
A: Here is another good article on removing duplicates.
It discusses why it's hard: "SQL is based on relational algebra, and duplicates cannot occur in relational algebra, because duplicates are not allowed in a set."
It covers the temp table solution, and two MySQL examples.
In the future, are you going to prevent it at the database level, or from an application perspective? I would suggest the database level, because your database should be responsible for maintaining referential integrity; developers just will cause problems ;)
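For the database-level option, a minimal sketch, assuming the duplicates have already been removed and that the combined key stays within SQL Server's index key size limit (the constraint name here is just a placeholder):
-- reject future duplicates at the database level
ALTER TABLE MyTable
  ADD CONSTRAINT UQ_MyTable_Col1_Col2_Col3 UNIQUE (Col1, Col2, Col3);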
A: Oh sure. Use a temp table. If you want a single, not-very-performant statement that "works" you can go with:
DELETE FROM MyTable WHERE NOT RowID IN
(SELECT
(SELECT TOP 1 RowID FROM MyTable mt2
WHERE mt2.Col1 = mt.Col1
AND mt2.Col2 = mt.Col2
AND mt2.Col3 = mt.Col3)
FROM MyTable mt)
Basically, for each row in the table, the sub-select finds the top RowID of all rows that are exactly like the row under consideration. So you end up with a list of RowIDs that represent the "original" non-duplicated rows.
A: I had a table where I needed to preserve non-duplicate rows.
I'm not sure on the speed or efficiency.
DELETE FROM myTable WHERE RowID IN (
SELECT MIN(RowID) AS IDNo FROM myTable
GROUP BY Col1, Col2, Col3
HAVING COUNT(*) = 2 )
A: The following query is useful to delete duplicate rows. The table in this example has ID as an identity column and the columns which have duplicate data are Column1, Column2 and Column3.
DELETE FROM TableName
WHERE ID NOT IN (SELECT MAX(ID)
FROM TableName
GROUP BY Column1,
Column2,
Column3
/*Even if ID is not null-able SQL Server treats MAX(ID) as potentially
nullable. Because of semantics of NOT IN (NULL) including the clause
below can simplify the plan*/
HAVING MAX(ID) IS NOT NULL)
The following script shows usage of GROUP BY, HAVING, ORDER BY in one query, and returns the results with duplicate column and its count.
SELECT YourColumnName,
COUNT(*) TotalCount
FROM YourTableName
GROUP BY YourColumnName
HAVING COUNT(*) > 1
ORDER BY COUNT(*) DESC
A: The other way is to create a new table with the same fields and a unique index, and then move all the data from the old table to the new table. SQL Server automatically ignores the duplicate values (there is also an option controlling what to do when a duplicate value appears: ignore, interrupt, etc.), so we end up with the same table without duplicate rows. If you don't want the unique index, you can drop it after transferring the data. A sketch of this approach follows below.
Especially for larger tables you may use DTS (an SSIS package to import/export data) in order to transfer all the data rapidly to your new uniquely indexed table. For 7 million rows it takes just a few minutes.
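As a hedged sketch for the question's table (the new table and index names are just placeholders, and the unique index assumes the duplicate-defining columns stay within SQL Server's index key size limit):
-- new table with the same shape as the original
CREATE TABLE MyTableClean
(
    RowID int NOT NULL IDENTITY(1,1) PRIMARY KEY,
    Col1  varchar(20) NOT NULL,
    Col2  varchar(2048) NOT NULL,
    Col3  tinyint NOT NULL
);

-- IGNORE_DUP_KEY makes SQL Server skip duplicate keys with a warning instead of raising an error
CREATE UNIQUE INDEX UX_MyTableClean_Cols
    ON MyTableClean (Col1, Col2, Col3)
    WITH (IGNORE_DUP_KEY = ON);

-- only the first row of each duplicate group survives the copy
INSERT INTO MyTableClean (Col1, Col2, Col3)
SELECT Col1, Col2, Col3
FROM   MyTable;
The INSERT itself then becomes the deduplication step.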
A: alter table MyTable add sno int identity(1,1)
delete from MyTable where sno in
(
select sno from (
select *,
RANK() OVER ( PARTITION BY RowID,Col3 ORDER BY sno DESC )rank
From MyTable
)T
where rank>1
)
alter table MyTable
drop column sno
A: Sometimes a soft delete mechanism is used where a date is recorded to indicate the deleted date. In this case an UPDATE statement may be used to update this field based on duplicate entries.
UPDATE MY_TABLE
SET DELETED = getDate()
WHERE TABLE_ID IN (
SELECT x.TABLE_ID
FROM MY_TABLE x
JOIN (SELECT min(TABLE_ID) id, COL_1, COL_2, COL_3
FROM MY_TABLE d
GROUP BY d.COL_1, d.COL_2, d.COL_3
HAVING count(*) > 1) AS d ON d.COL_1 = x.COL_1
AND d.COL_2 = x.COL_2
AND d.COL_3 = x.COL_3
AND d.TABLE_ID <> x.TABLE_ID
/*WHERE x.COL_4 <> 'D' -- Additional filter*/)
This method has served me well for fairly moderate tables containing ~30 million rows with high and low amounts of duplications.
A: I know that this question has already been answered, but I've created a pretty useful stored procedure which will create a dynamic delete statement for a table's duplicates:
CREATE PROCEDURE sp_DeleteDuplicate @tableName varchar(100), @DebugMode int =1
AS
BEGIN
SET NOCOUNT ON;
IF(OBJECT_ID('tempdb..#tableMatrix') is not null) DROP TABLE #tableMatrix;
SELECT ROW_NUMBER() OVER(ORDER BY name) as rn,name into #tableMatrix FROM sys.columns where [object_id] = object_id(@tableName) ORDER BY name
DECLARE @MaxRow int = (SELECT MAX(rn) from #tableMatrix)
IF(@MaxRow is null)
RAISERROR ('I wasn''t able to find any columns for this table!',16,1)
ELSE
BEGIN
DECLARE @i int =1
DECLARE @Columns Varchar(max) ='';
WHILE (@i <= @MaxRow)
BEGIN
SET @Columns=@Columns+(SELECT '['+name+'],' from #tableMatrix where rn = @i)
SET @i = @i+1;
END
---DELETE LAST comma
SET @Columns = LEFT(@Columns,LEN(@Columns)-1)
DECLARE @Sql nvarchar(max) = '
WITH cteRowsToDelte
AS (
SELECT ROW_NUMBER() OVER (PARTITION BY '+@Columns+' ORDER BY ( SELECT 0)) as rowNumber,* FROM '+@tableName
+')
DELETE FROM cteRowsToDelte
WHERE rowNumber > 1;
'
SET NOCOUNT OFF;
IF(@DebugMode = 1)
SELECT @Sql
ELSE
EXEC sp_executesql @Sql
END
END
So if you create table like that:
IF(OBJECT_ID('MyLitleTable') is not null)
DROP TABLE MyLitleTable
CREATE TABLE MyLitleTable
(
A Varchar(10),
B money,
C int
)
---------------------------------------------------------
INSERT INTO MyLitleTable VALUES
('ABC',100,1),
('ABC',100,1), -- only this row should be deleted
('ABC',101,1),
('ABC',100,2),
('ABCD',100,1)
-----------------------------------------------------------
exec sp_DeleteDuplicate 'MyLitleTable',0
It will delete all duplicates from your table. If you run it without the second parameter it will return a SQL statement to run.
If you need to exclude any of the column just run it in the debug mode get the code and modify it whatever you like.
A: If all the columns in the duplicate rows are the same, then the query below can be used to delete the duplicate records.
SELECT DISTINCT * INTO #TemNewTable FROM #OriginalTable
TRUNCATE TABLE #OriginalTable
INSERT INTO #OriginalTable SELECT * FROM #TemNewTable
DROP TABLE #TemNewTable
A: For the table structure
MyTable
RowID int not null identity(1,1) primary key,
Col1 varchar(20) not null,
Col2 varchar(2048) not null,
Col3 tinyint not null
The query for removing duplicates:
DELETE t1
FROM MyTable t1
INNER JOIN MyTable t2
ON t1.RowID > t2.RowID
AND t1.Col1 = t2.Col1
AND t1.Col2=t2.Col2
AND t1.Col3=t2.Col3;
I am assuming that RowID is kind of auto-increment and rest of the columns have duplicate values.
A: Now let's look at an elasticalsearch table which has duplicated rows, where Id is the unique field. We know that if some id exists within a group defined by some criteria, then we can delete the other rows outside the scope of that group. My example shows this criteria.
Many cases in this thread are in a similar state to mine. Just change the target group criteria according to your case for deleting repeated (duplicated) rows.
DELETE
FROM elasticalsearch
WHERE Id NOT IN
(SELECT min(Id)
FROM elasticalsearch
GROUP BY FirmId,FilterSearchString
)
cheers
A: I think this would be helpful. Here, ROW_NUMBER() OVER(PARTITION BY res1.Title ORDER BY res1.Id) as num has been used to differentiate the duplicate rows.
delete FROM
(SELECT res1.*,ROW_NUMBER() OVER(PARTITION BY res1.Title ORDER BY res1.Id)as num
FROM
(select * from [dbo].[tbl_countries])as res1
)as res2
WHERE res2.num > 1
A: Another way to remove duplicates based on two columns.
I found this query easier to read and adapt.
DELETE
FROM
TABLE_NAME
WHERE FIRST_COLUMNS
IN(
SELECT * FROM
( SELECT MIN(FIRST_COLUMNS)
FROM TABLE_NAME
GROUP BY
FIRST_COLUMNS,
SECOND_COLUMNS
HAVING COUNT(FIRST_COLUMNS) > 1
) temp
)
Note: it's good to simulate the query before you run it.
A: First you can select minimum RowId's using MIN() and Group By. We will keep these Rows.
SELECT MIN(RowId) as RowId
FROM MyTable
GROUP BY Col1, Col2, Col3
Then delete the RowIds that are not in the selected minimum RowIds using
DELETE FROM MyTable WHERE RowId Not IN()
Final query:
DELETE FROM MyTable WHERE RowId Not IN(
SELECT MIN(RowId) as RowId
FROM MyTable
GROUP BY Col1, Col2, Col3
)
You can also check my answer in SQL Fiddle
A: A very simple way to delete duplicate rows of table in postgresql.
DELETE FROM table1 a
USING table1 b
WHERE a.id < b.id
AND a.column1 = b.column1
AND a.column2 = b.column2;
A: Delete duplicate records
The greater-than operator in this case deletes all records except the first record
DELETE u1 FROM users u1 JOIN users u2
WHERE u1.id > u2.id
AND u1.email=u2.email
The less-than operator in this case deletes all records except the last record
DELETE u1 FROM users u1 JOIN users u2
WHERE u1.id < u2.id
AND u1.email=u2.email
A: Create another table that will consist of original values:
CREATE TABLE table2 AS SELECT *, COUNT(*) FROM table1 GROUP BY name HAVING COUNT (*) > 0
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18932",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1366"
} |
Q: Obscuring network proxy password in plain text files on Linux/UNIX-likes Typically in a large network a computer needs to operate behind an authenticated proxy - any connections to the outside world require a username/password which is often the password a user uses to log into email, workstation etc.
This means having to put the network password in the apt.conf file as well as typically the http_proxy, ftp_proxy and https_proxy environment variables defined in ~/.profile
I realise that with apt.conf you could set chmod 600 (which it isn't by default on Ubuntu/Debian!), but on our system there are people who need root privileges.
I also realise that it is technically impossible to secure a password from someone who has root access, however I was wondering if there was a way of obscuring the password to prevent accidental discovery. Windows operates with users as admins yet somehow stores network passwords (probably stored deep in the registry obscured in some way) so that in typical use you won't stumble across it in plain text
I only ask since the other day, I entirely by accident discovered somebody elses password in this way when comparing configuration files across systems.
@monjardin - Public key authentication is not an alternative on this network I'm afraid. Plus I doubt it is supported amongst the majority of commandline tools.
@Neall - I don't mind the other users having web access, they can use my credentials to access the web, I just don't want them to happen across my password in plain text.
A: With the following approach you never have to save your proxy password in plain text. You just have to type in a password interactively as soon as you need http/https/ftp access:
*
*Use openssl to encrypt your plain text proxy password into a file, with e.g. AES256 encryption:
openssl enc -aes-256-cbc -in pw.txt -out pw.bin
*
*Use a (different) password for protecting the encoded file
*Remove plain text pw.txt
*Create an alias in e.g. ~/.alias to set your http_proxy/https_proxy/ftp_proxy environment variables (set appropriate values for $USER/proxy/$PORT)
alias myproxy='PW=`openssl aes-256-cbc -d -in pw.bin`; PROXY="http://$USER:$PW@proxy:$PORT"; export http_proxy=$PROXY; export https_proxy=$PROXY; export ftp_proxy=$PROXY'
*
*you should source this file into your normal shell environment (on some systems this is done automatically)
*type 'myproxy' and enter your openssl password you used for encrypting the file
*done.
Note: the password is available (and readable) inside the users environment for the duration of the shell session. If you want to clean it from the environment after usage you can use another alias:
alias clearproxy='export http_proxy=; export https_proxy=; export ftp_proxy='
A: I did a modified solution:
edit /etc/bash.bashrc and add following lines:
alias myproxy='read -p "Username: " USER;read -s -p "Password: " PW
PROXY="$USER:[email protected]:80";
export http_proxy=http://$PROXY;export Proxy=$http_proxy;export https_proxy=https://$PROXY;export ftp_proxy=ftp://$PROXY'
From next logon enter myproxy and input your user/password combination! Now work with sudo -E
-E, --preserve-env
Indicates to the security policy that the user wishes to reserve their
existing environment variables.
e.g. sudo -E apt-get update
Remark: proxy settings only valid during shell session
A: There are lots of ways to obscure a password: you could store the credentials in rot13 format, or BASE64, or use the same password-scrambling algorithm that CVS uses. The real trick though is making your applications aware of the scrambling algorithm.
For the environment variables in ~/.profile you could store them encoded and then decode them before setting the variables, e.g.:
encodedcreds="sbbone:cnffjbeq"
creds=`echo "$encodedcreds" | tr n-za-mN-ZA-M a-zA-Z`
That will set creds to foobar:password, which you can then embed in http_proxy etc.
I assume you know this, but it bears repeating: this doesn't add any security. It just protects against inadvertently seeing another user's password.
A: Prefer applications that integrate with Gnome Keyring. Another possibility is to use an SSH tunnel to an external machine and run apps through that. Take a look at the -D option for creating a local SOCKS proxy interface, rather than single-serving -L forwards.
A: Unless the specific tools you are using allow an obfuscated format, or you can create some sort of workflow to go from obfuscated to plain on demand, you are probably out of luck.
One thing I've seen in cases like this is creating per-server, per-user, or per-server/per-user dedicated credentials that only have access to the proxy from a specific IP. It doesn't solve your core obfuscation problem but it mitigates the effects of someone seeing the password because it's worth so little.
Regarding the latter option, we came up with a "reverse crypt" password encoding at work that we use for stuff like this. It's only obfuscation because all the data needed to decode the pw is stored in the encoded string, but it prevents people from accidentally seeing passwords in plain text. So you might, for instance, store one of the above passwords in this format, and then write a wrapper for apt that builds apt.conf dynamically, calls the real apt, and at exit deletes apt.conf. You still end up with the pw in plaintext for a little while, but it minimizes the window.
A: Is public key authentication a valid alternative for you?
A: As long as all three of these things are true, you're out of luck:
*
*Server needs web access
*Users need absolute control over server (root)
*You don't want users to have server's web access
If you can't remove #2 or #3, your only choice is to remove #1. Set up an internal server that hosts all the software updates. Keep that one locked down from your other users and don't allow other servers to have web access.
Anything else you try to do is just fooling yourself.
A: We solved this problem by not asking for proxy passwords on rpm, apt or other similar updates (virus databases, Windows stuff, etc.).
That's a small whitelist of known repositories to add to the proxy.
A: I suppose you could create a local proxy, point these tools through that, and then have the local proxy interactively ask the user for the external proxy password which it would then apply. It could optionally remember this for a few minutes in obfuscated internal storage.
An obvious attack vector would be for a privileged user to modify this local proxy to do something else with the entered password (as they could with anything else such as an email client that requests it or the windowing system itself), but at least you'd be safe from inadvertent viewing.
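For what it's worth, here is a rough Python sketch of that local-proxy idea. It handles plain HTTP GET only (no CONNECT/HTTPS), and the upstream host, port and user name are placeholder assumptions:
#!/usr/bin/env python3
# Minimal sketch of a local forwarding proxy: ask for the real proxy password
# once, then inject the credentials into every request it forwards upstream.
import base64, getpass, http.server, urllib.request

UPSTREAM = "proxy:8080"          # assumption: your real proxy host:port
USER = "youruser"                # assumption: your proxy account

password = getpass.getpass("Proxy password: ")
auth = base64.b64encode(("%s:%s" % (USER, password)).encode()).decode()

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Clients talk to us as a proxy, so self.path is the absolute URL.
        req = urllib.request.Request(self.path)
        req.set_proxy(UPSTREAM, "http")
        req.add_header("Proxy-Authorization", "Basic " + auth)
        try:
            with urllib.request.urlopen(req) as resp:
                body = resp.read()
                status = resp.status
            self.send_response(status)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        except Exception as exc:
            self.send_error(502, str(exc))

if __name__ == "__main__":
    http.server.HTTPServer(("127.0.0.1", 3128), Handler).serve_forever()
You would then point the tools at it with export http_proxy=http://127.0.0.1:3128, and the plain-text password never has to appear in a config file.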
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18943",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: What is your reporting tool of choice? Every project invariably needs some type of reporting functionality. From a foreach loop in your language of choice to a full blow BI platform.
To get the job done what tools, widgets, platforms has the group used with success, frustration and failure?
A: For most reports we use BIRT.
A: I've used Reporting Services and Crystal fairly extensively, and I'm writing a few reports using Excel(ick) at the moment.
Reporting Services is pretty good for simple reports, but as soon as you need total control over formatting, complex formulas, charts, etc., Crystal is a long way ahead. I also find Crystal to be far more usable; being able to change things within the report preview is invaluable (it may be possible in later versions of RS?).
RS also needs to be deployed to a web server, which limits its usefulness if you are writing applications that need to be deployed externally.
Older versions of Crystal were very buggy but the latest ones are much better, it's much more mature than Reporting Services.
A: For a lot of projects we use ActiveReports.
A: I am a committer on the BIRT project, so I am biased. BIRT provides a very well thought out report object model (ROM) and appropriate API for the various design and deploy function that is needed. In addition, BIRT provides the best multi-language support and the ability to separate development from design through the use of CSS.
BIRT can be embedded into your application for no license cost through the REAPI or it can be purchased through a couple of commercial offerings.
A: Cognos is a robust suite of tools (we use it as a front-end for an Oracle back-end), but there's a pronounced lack of documentation on how to accomplish complex reporting tasks -- mostly, you end up banging on it until you get something to work.
I wouldn't discount the usefulness of using Microsoft Access as a reporting front-end. It doesn't have that useful Web-enabled functionality, but for in-house reports it's very versatile and surprisingly powerful.
A: We use i-net Clear Reports for our reporting (seeing as how we "eat our own dog food"). ;)
*
*It is like Crystal Reports,
*can read Crystal Reports templates,
*the API is more useful,
*costs less than Crystal Reports (and if you factor in support costs, costs less than open source)
*is platform independent because written in Java.
*we offer a free and fully functional report designer
A: For knocking out fairly "run of the mill" reports, SQL Reporting Services is really quite impressive.
For complicated analysis, loading the data (maybe pre-aggregated) into an Excel Pivot table is usually adequate for most users.
I've found you can spend a lot of time (and money) building a comprehensive "ad-hoc" reporting suite and after the first month or two of "wow factor", 99% of the reports generated will be the same report with minor differences in a fixed set of parameters.
Don't accept it when a user says they want "ad-hoc" reports without specifying what goals and targets they're looking for. They are just fishing, and they need to actually spend as much time THINKING about THEIR reporting requirements as YOU would have to spend BUILDING their solution.
I've spent too much time building "the system that can report everything", only for it to become out of date or out of favour before it was finished. It is much better to get the quick wins out of the way as quickly as possible and then spend time "systemising" the most important reports.
A: If you have all the money in the world, go with Cognos. They provide a data cube that essentially makes the reporting "developer free" and the end user can create reports, dashboards, anything they like.
For the "common man", I've grown quite fond of the ComponentOne reports for .NET library/tools. It has a similar feel to Crystal Reports, but has a very friendly XML format that you and edit under the hood and none of the headaches with versioning, keys, and other items that I've had to deal with when making simple updates to either the report or the underlying version.
A: I don't really have much SSAS work to do but I've been quite taken with this:
Cube Browser for ASP.net
It offers many of the capabilities of an Excel pivot table in a web app (though I'm not enough of an expert on Excel to really know the whole of the pivot table's capabilities - it at least looks comparable to Visual Studio's cube browser).
Unfortunately the demos don't seem to be online anymore :(
A: I would have to agree, I really like SQL Server Reporting Services. It just does stuff, and does it easily.
A: Crystal Reports, because it is easy to take the same exact report file and
1 - Post it on the intranet
2 - Embed it in an application
3 - Schedule it to be emailed as an Excel output every so often to whoever needs it
Also (as I already suggested), it exports easily to Excel, PDF, and other formats.
A: We've been using BIRT, which had a steep learning curve for me until I realized how many WYSIWYG features it had (I started editing the XML source code directly, which I don't recommend). There are some output-specific tricks (like using a 0 left margin to avoid getting a blank A column when outputting to XLS format), but for the most part it's quick and easy to use, edit and preview.
I have also been impressed by how easy it is to intermix different datasets in a single report. While not a silver bullet, it's a better all-around tool than 99.999% of people are going to build on their own.
A: "Give them data and they will love you for it"
Out of the methods and tools I've used in the past, I would rank them in the following order based on abilities/versatility/usability/speed to deploy. I'm leaving cost out of it because while it is always a factor it is a different factor for everyone.
1 is Cognos (version 8)
2 is SQL Server Reporting
3 is Crystal Reports
4 is Custom written code
I haven't used any of the other tools mentioned. Cognos 8 is nothing short of awesome. While pricey, you are only limited by your imagination. It can do anything.
A: This isn't so much a positive suggestion, but more of a cautionary tale against crystal reports... As with other people, getting the right version of the crystal runtime is important, but having done that, I still had this problem:
*
*Spent weeks developing reports that had embedded images.
*Tested on dev and staging environment, all A-OK.
*Deploy to live server - doesn't work... Hmmm...
Spent two weeks trawling forums and looking for advice, eventually got a response from a crystal body on their forums. Suggested that he had seen a similar problem to do with MS Paint being set up as the default application for a certain file extension.
At this point, we gave up trying (after I convinced my boss that this wasn't a take the piss answer, but actually a formal response from Crystal). Handily we were migrating to new servers about a month later (where the reports worked), but honestly, wouldn't touch them again...
Oh, and have used SSRS and found it to be pretty good for most things (particularly the most recent version).
A: Tableau software is an amazing tool for running your reports and easily getting deep, thorough analysis
A: For simple reports I use the standard ReportViewer included in Visual Studio.
For more complicated reports and ones that require more performance I've used both Report Sharp Shooter and devExpress XtraReports. Surprisingly, in both products creating tables isn't as easy as it should be, but both are faster than ReportViewer and handle multi-column reports, barcodes and aggregate data extremely well.
A: We use Cognos, it's a fairly complex system, but very powerful.
A: I have a small reporting set, made in 2 months:
*
*at least 10 times faster than Crystal Reports
*easy editing
*.NET formulas
*easy usage
*small code footprint
*serialization and deserialization (fast and small)
*extreme security
*multi-threaded
*no errors
A: We had used MS Reporting Services, but we were completely unhappy with it.
Reasons:
*
*it requires a difficult server configuration
*it is not possible to embed the report editor into our app without buying a SQL Server license for every user
*it is only possible to use the built-in report parameter input form UI or to send parameters from the app, but not to create a parameters UI with the report designer
Now we are using Stimulsoft Reports. It has no such limitations, and we and our users are happy with it.
A: 1) I would think Reporting Services is very good for most needs when it comes to developing table-based reports and also matrix reports (drilldown - pivot-like functionality). Considering the price of Cognos etc., an SME can't even dream of getting Cognos AFAIK.
2) Report Scheduling / Subscription functionality can be invoked to send reports to a set of users (data driven) to deliver reports. Subscriptions can be delivered to custom locations such as an SFTP, by writing .Net code.
3) Using Report Models, end user can drag and drop columns and develop customized reports
To Note:
1) It can get trickier once you develop really complex graphical/dashboard-style reports - ones which involve a few charts and small tables to be displayed on an A4 page. Report Designer (the tool we use to design reports) and the Web display use different rendering engines, so it is better to deploy the reports often and see how they look if you are developing complex graphical reports
2) If you write custom functionality, you may have to change the XML configuration files (RSReportServer.Config etc.). If there is any problem with the edit, the ReportServer service may stop, so be careful to back up before doing anything custom
A: Cognos with an Oracle backend is what we use. We also use spotfire for visualization on top of cognos.
A: I'm the CTO at Windward and I do believe that Windward Reports is by far the easiest to use, and that you can do more with it than any other reporting tool - and both traits are for the same reason: you design your reports in Word, Excel, & PowerPoint.
As to the generated reports, it's fast, it's rock solid, and incorporating it into your program can be as little as 3 lines of code.
A: We use Crystal Reports where I work. It has quite a few limitations, and we find ourselves doing almost all of the logic in Database procedures and Views.
One limitation to note is that Crystal Reports does not allow multiple layered sub-reports. In other words, you cannot have a sub-report inside a sub-report.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18952",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: Disabling multi-line fields in MS Access Is there a way to disable entering multi-line entries in a Text Box (i.e., I'd like to stop my users from doing ctrl-enter to get a newline)?
A: I was able to do it on using KeyPress event.
Here's the code example:
Private Sub SingleLineTextBox_KeyPress(ByRef KeyAscii As Integer)
    If KeyAscii = 10 _
    Or KeyAscii = 13 Then
        '10 -> Ctrl-Enter. AKA ^J or Ctrl-J
        '13 -> Enter. AKA ^M or Ctrl-M
        KeyAscii = 0 'clear the KeyPress
    End If
End Sub
A: The way I did it before (and the last time I worked in Access was around '97 so my memory is not so hot) was raising a key-up event and executing a VBA function. It's a similar method to what you do with an AJAX suggest text box in a modern webform application, but as I recall it could get tripped up if your Access form has other events which tend to occur frequently, such as onMouseMove over the entire form object.
A: Using the KeyPress event means that your code will fire every time the user types. This can lead to screen flickering and other problems (the OnChange event would be the same).
It seems to me that you should use a single event to strip out the CrLf's, and the correct event would be AfterUpdate. You'd simply do this:
If InStr(Me!MyMemoControl, vbCrLf) Then
Me!MyMemoControl = Replace(Me!MyMemoControl, vbCrLf, vbNullString)
End If
Note the use of the Access global constants, vbCrLf (for Chr(13) & Chr(10)) and vbNullString (for a zero-length string).
Using a validation rule means that you're going to pop up an ugly error message to your user, but provide them with little in the way of tools to correct the problem. The AfterUpdate approach is much cleaner and easier for the users, seems to me.
A: Thanks Ian and BIBD. I created a public sub based on your answer that is reusable.
Public Sub PreventNewlines(ByRef KeyAscii As Integer)
If KeyAscii = 10 Or KeyAscii = 13 Then KeyAscii = 0
End Sub
Private Sub textbox_KeyPress(KeyAscii As Integer)
Call PreventNewlines(KeyAscii)
End Sub
Screen flicker should never be an issue, as these are handled events, not constant polling (and it's per control further limiting the scope). Seems to me like an invalid argument, as every text editor is executing some code per keystroke.
Thanks
A: Not entirely sure about that one; you should be able to remove the line breaks when you render the content though, or even run a VBScript to clear it out. You just need to check for Chr(13) or vbCrLf.
A: If you don't want an event interfering, you can set up the Validation Rule property for the textbox to be
NOT LIKE "*"+Chr(10)+"*" OR "*"+Chr(13)+"*"
You will probably also want to set the Validation Text to explain specifically why Access is throwing up an error box.
A: Jason's response works well. Just to add to it..
If you want to allow the user to leave the text box blank, you could use this:
Not Like ""+Chr(10)+"" Or ""+Chr(13)+"" Or Is Null
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18955",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How to match linux device path to windows drive name? I'm writing an application that on some stage performs low-level disk operations in Linux environment. The app actually consists of 2 parts, one runs on Windows and interacts with a user and another is a linux part that runs from a LiveCD. User makes a choice of Windows drive letters and then a linux part performs actions with corresponding partitions. The problem is finding a match between a Windows drive letter (like C:) and a linux device name (like /dev/sda1). This is my current solution that I rate as ugly:
*
*store partitions information (i.e. drive letter, number of blocks, drive serial number etc.) in Windows in some pre-defined place (i.e. the root of the system partition).
*read a list of partitions from /proc/partitions. Get only those partitions that have a major number for SCSI or IDE hard drives and a minor number that identifies them as real partitions and not whole disks.
*Try to mount each of them with either ntfs or vfat file systems. Check whether the mounted partition contains the information stored by Windows app.
*Upon finding the required information written by the Windows app, make the actual match. For each partition found in /proc/partitions, acquire the drive serial number (via the HDIO_GET_IDENTITY ioctl), the number of blocks (from /proc/partitions) and the partition offset (/sys/block/drive_path/partition_name/start), compare this to the Windows information, and if it matches, store the Windows drive letter along with the linux device name.
There are a couple of problems in this scheme:
*
*This is ugly. Writing data in Windows and then reading it in Linux makes testing a nightmare.
*the linux device major number is compared only with IDE or SCSI devices. This would probably fail, e.g. on USB or FireWire disks. It's possible to add these types of disks, but limiting the app to only a known subset of possible devices seems to be a rather bad idea.
*looks like HDIO_GET_IDENTITY works only on IDE and SATA drives.
*the /sys/block hack may not work on drives other than IDE or SATA.
Any ideas on how to improve this schema? Perhaps there is another way to determine windows names without writing all the data in windows app?
P.S. The language of the app is C++. I can't change this.
A:
Partitions have UUIDs associated with them
My knowledge of this is very shallow, but I thought that was only true for disks formatted with GPT (Guid Partition Table) partitions, rather than the old-style MBR format which 99% of the world is still stuck with?
A: Partitions have UUIDs associated with them. I don't know how to find these in Windows but in linux you can find the UUID for each partition with:
sudo vol_id -u device (e.g. /dev/sda1)
If there is an equivalent function in Windows you could simply store the UUIDs for whatever partition they pick, then iterate through all known partitions in linux and match the UUIDs.
Edit: This may be a linux-only thing, and it may specifically be the vol_id util that generates these from something (instead of reading off metadata for the drive). Having said that, there is nothing stopping you getting the source for vol_id and checking out what it does.
A:
My knowledge of this is very shallow,
but I thought that was only true for
disks formatted with GPT (Guid
Partition Table) partitions, rather
than the old-style MBR format which
99% of the world is still stuck with?
Not to sound like a linux user cliche, but it Works For Me. I use it with NTFS partitions and have had no problems. As I said in my edit, vol_id may be generating them itself. If that were the case there would be no reliance on any particular partition format, which would be swell.
A:
Partitions have UUIDs associated with them. I don't know how to find these in Windows but in linux you can find the UUID for each partition with:
sudo vol_id -u device (e.g. /dev/sda1)
If there is an equivalent function in Windows you could simply store the UUIDs for whatever partition they pick, then iterate through all known partitions in linux and match the UUIDs.
That's a good point, thank you! I've looked at the sources of vol_id (a part of the udev tarball) and it seems that for FAT(32) and NTFS it generates the UUID using the volume serial number that is read from a predefined location on the partition. Since I don't expect anything other than FAT32 and NTFS, I am considering using this information as a partition identifier.
A: You need to either mark the drive in some way (e.g. write a file etc.), or find some identifier that is only associated with that particular drive.
It is very hard, almost impossible to figure out what letter Windows would assign to a particular drive partition, without actually running Windows. This is because Windows always associates the drive that it is run from with C:. Which could be any drive, if you have more than one operating system installed. Windows also allows you to choose what drive letter it will try first, for a specific partition, causing further problems.
It would be a whole lot easier to do the GUI stuff inside Linux than to try this mixed Windows/Linux solution. I'm not saying don't try it this way; what I am saying is there are very many possible pitfalls with this approach. I'm sure I don't even know about all of them.
Another option would be to see if you could actually do the Linux part, inside of Windows. If you are a very good Windows programmer, you can actually get access to the raw file-system. There are probably just as many pitfalls with this approach, because Windows will be running while all of this is in operation.
So to re-iterate I would see if you could do everything from within Linux, if you can. It's just a whole lot simpler in the long run.
A: In Windows you can read the "NTFS Volume Serial Number", which seems to match the UUID under Linux.
Possibilities to get the "NTFS Volume Serial" from Windows:
*
*commandline since XP: fsutil.exe fsinfo ntfsinfo C:
*under c++
HANDLE fileHandle = CreateFile(L"\\\\.\\C:", // or use syntax "\\?\Volume{GUID}"
                               GENERIC_READ,
                               FILE_SHARE_READ | FILE_SHARE_WRITE,
                               NULL,
                               OPEN_EXISTING,
                               NULL,
                               NULL);

DWORD i;
NTFS_VOLUME_DATA_BUFFER ntfsInfo;
DeviceIoControl(fileHandle,
                FSCTL_GET_NTFS_VOLUME_DATA,
                NULL,
                0,
                &ntfsInfo,
                sizeof(ntfsInfo),
                &i,
                NULL);

cout << "UUID is " << std::hex << ntfsInfo.VolumeSerialNumber.HighPart << std::hex << ntfsInfo.VolumeSerialNumber.LowPart << endl;
Possibilities to get the UUID under Linux:
*
*ls -l /dev/disk/by-uuid
*ls -l /dev/disk/by-label
*blkid /dev/sda1
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18959",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: What do you think of developing for the command line first? What are your opinions on developing for the command line first, then adding a GUI on after the fact by simply calling the command line methods?
eg.
W:\ todo AddTask "meeting with John, re: login peer review" "John's office" "2008-08-22" "14:00"
loads todo.exe and calls a function called AddTask that does some validation and throws the meeting in a database.
Eventually you add in a screen for this:
============================================================
Event: [meeting with John, re: login peer review]
Location: [John's office]
Date: [Fri. Aug. 22, 2008]
Time: [ 2:00 PM]
[Clear] [Submit]
============================================================
When you click submit, it calls the same AddTask function.
Is this considered:
*
*a good way to code
*just for the newbies
*horrendous!.
Addendum:
I'm noticing a trend here for "shared library called by both the GUI and CLI executables." Is there some compelling reason why they would have to be separated, other than maybe the size of the binaries themselves?
Why not just call the same executable in different ways:
*
*"todo /G" when you want the full-on graphical interface
*"todo /I" for an interactive prompt within todo.exe (scripting, etc)
*plain old "todo <function>" when you just want to do one thing and be done with it.
Addendum 2:
It was mentioned that "the way [I've] described things, you [would] need to spawn an executable every time the GUI needs to do something."
Again, this wasn't my intent. When I mentioned that the example GUI called "the same AddTask function," I didn't mean the GUI called the command line program each time. I agree that would be totally nasty. I had intended (see first addendum) that this all be held in a single executable, since it was a tiny example, but I don't think my phrasing necessarily precluded a shared library.
Also, I'd like to thank all of you for your input. This is something that keeps popping back in my mind and I appreciate the wisdom of your experience.
A: Put the shared functionality in a library, then write a command-line and a GUI front-end for it. That way your layer transition isn't tied to the command-line.
(Also, this way adds another security concern: shouldn't the GUI first have to make sure it's the RIGHT todo.exe that is being called?)
A: Joel wrote an article contrasting this ("unix-style") development to the GUI first ("Windows-style") method a few years back. He called it Biculturalism.
I think on Windows it will become normal (if it hasn't already) to wrap your logic into .NET assemblies, which you can then access from both a GUI and a PowerShell provider. That way you get the best of both worlds.
A: My technique for programming backend functionality first, without needing an explicit UI (especially when the UI isn't my job yet, e.g., I'm designing a web application that is still in the design phase), is to write unit tests.
That way I don't even need to write a console application to mock the output of my backend code -- it's all in the tests, and unlike your console app I don't have to throw the code for the tests away because they still are useful later.
A: I think it depends on what type of application you are developing. Designing for the command line puts you on the fast track to what Alan Cooper refers to as "Implementation Model" in The Inmates are Running the Asylum. The result is a user interface that is unintuitive and difficult to use.
37signals also advocates designing your user interface first in Getting Real. Remember, for all intents and purposes, in the majority of applications, the user interface is the program. The back end code is just there to support it.
A: It's probably better to start with a command line first to make sure you have the functionality correct. If your main users can't (or won't) use the command line then you can add a GUI on top of your work.
This will make your app better suited for scripting as well as limiting the amount of upfront Bikeshedding so you can get to the actual solution faster.
A: If you plan to keep your command-line version of your app then I don't see a problem with doing it this way - it's not time wasted. You'll still end up coding the main functionality of your app for the command-line and so you'll have a large chunk of the work done.
I don't see working this way as being a barrier to a nice UI - you've still got the time to add one and make it usable etc.
I guess this way of working would only really work if you intend for your finished app to have both command-line and GUI variants. It's easy enough to mock a UI and build your functionality into that and then beautify the UI later.
Agree with Stu: your base functionality should be in a library that is called from the command-line and GUI code. Calling the executable from the UI is unnecessary overhead at runtime.
A: @jcarrascal
I don't see why this has to make the GUI "bad?"
My thought would be that it would force you to think about what the "business" logic actually needs to accomplish, without worrying too much about things being pretty. Once you know what it should/can do, you can build your interface around that in whatever way makes the most sense.
Side note: Not to start a separate topic, but what is the preferred way to address answers to/comments on your questions? I considered both this, and editing the question itself.
A: I did exactly this on one tool I wrote, and it worked great. The end result is a scriptable tool that can also be used via a GUI.
I do agree with the sentiment that you should ensure the GUI is easy and intuitive to use, so it might be wise to even develop both at the same time... a little command line feature followed by a GUI wrapper to ensure you are doing things intuitively.
If you are true to implementing both equally, the result is an app that can be used in an automated manner, which I think is very powerful for power users.
A: I would go with building a library with a command line application that links to it. Afterwards, you can create a GUI that links to the same library. Calling a command line from a GUI spawns external processes for each command and is more disruptive to the OS.
Also, with a library you can easily do unit tests for the functionality.
But even as long as your functional code is separate from your command line interpreter, then you can just re-use the source for a GUI without having the two kinds at once to perform an operation.
A: Kinda depends on your goal for the program, but yeah i do this from time to time - it's quicker to code, easier to debug, and easier to write quick and dirty test cases for. And so long as i structure my code properly, i can go back and tack on a GUI later without too much work.
To those suggesting that this technique will result in horrible, unusable UIs: You're right. Writing a command-line utility is a terrible way to design a GUI. Take note, everyone out there thinking of writing a UI that isn't a CLUI - don't prototype it as a CLUI.
But, if you're writing new code that does not itself depend on a UI, then go for it.
A: I usually start with a class library and a separate, really crappy and basic GUI. As the command line involves parsing the command line, I feel like I'm adding a lot of unnecessary overhead.
As a Bonus, this gives an MVC-like approach, as all the "real" code is in a Class Library. Of course, at a later stage, Refactoring the library together with a real GUI into one EXE is also an option.
A: If you do your development right, then it should be relatively easy to switch to a GUI later on in the project. The problem is that it's kinda difficult to get it right.
A: John Gruber had a good post about the concept of adding a GUI to a program not designed for one: Ronco Spray-On Usability
Summary: It doesn't work. If usability isn't designed into an application from the beginning, adding it later is more work than anyone is willing to do.
A: A better approach might be to develop the logic as a lib with a well-defined API and, at the dev stage, no interface (or a hard-coded interface); then you can write the CLI or GUI later
A: I would not do this for a couple of reasons.
Design:
A GUI and a CLI are two different interfaces used to access an underlying implementation. They are generally used for different purposes (GUI is for a live user, CLI is usually accessed by scripting) and can often have different requirements. Coupling the two together is not a wise choice and is bound to cause you trouble down the road.
Performance:
The way you've described things, you need to spawn an executable every time the GUI needs to do something. This is just plain ugly.
The right way to do this is to put the implementation in a library that's called by both the CLI and the GUI.
A: Command line tools generate fewer events than GUI apps and usually check all the params before starting. This will limit your GUI, because for a GUI it could make more sense to ask for the params as your program works, or afterwards.
If you don't care about the GUI then don't worry about it. If the end result will be a gui, make the gui first, then do the command line version. Or you could work on both at the same time.
--Massive edit--
After spending some time on my current project, I feel as though I have come full circle from my previous answer. I think it is better to do the command line first and then wrap a gui on it. If you need to, I think you can make a great gui afterwards. By doing the command line first, you get all of the arguments down first so there is no surprises (until the requirements change) when you are doing the UI/UX.
A: @Maudite
The command-line app will check params up front and the GUI won't - but they'll still be checking the same params and inputting them into some generic worker functions.
Still the same goal. I don't see the command-line version affecting the quality of the GUI one.
A: Do a program that you expose as a web service, then do the GUI and command line to call the same web service. This approach also allows you to make a web GUI, and also to provide the functionality as SaaS to extranet partners, and/or to better secure the business logic.
This also allows your program to more easily participate in a SOA environment.
For the web-service, don't go overboard. do yaml or xml-rpc. Keep it simple.
A: In addition to what Stu said, having a shared library will allow you to use it from web applications as well. Or even from an IDE plugin.
A: There are several reasons why doing it this way is not a good idea. A lot of them have been mentioned, so I'll just stick with one specific point.
Command-line tools are usually not interactive at all, while GUI's are. This is a fundamental difference. This is for example painful for long-running tasks.
Your command-line tool will at best print out some kind of progress information - newlines, a textual progress bar, a bunch of output, ... Any kind of error it can only output to the console.
Now you want to slap a GUI on top of that, what do you do ? Parse the output of your long-running command line tool ? Scan for WARNING and ERROR in that output to throw up a dialog box ?
At best, most UI's built this way throw up a pulsating busy bar for as long as the command runs, then show you a success or failure dialog when the command exits. Sadly, this is how a lot of UNIX GUI programs are thrown together, making it a terrible user experience.
Most repliers here are correct in saying that you should probably abstract the actual functionality of your program into a library, then write a command-line interface and the GUI at the same time for it. All your business logic should be in your library, and either UI (yes, a command line is a UI) should only do whatever is necessary to interface between your business logic and your UI.
A command line is too poor a UI to make sure you develop your library good enough for GUI use later. You should start with both from the get-go, or start with the GUI programming. It's easy to add a command line interface to a library developed for a GUI, but it's a lot harder the other way around, precisely because of all the interactive features the GUI will need (reporting, progress, error dialogs, i18n, ...)
A: That is exactly one of my most important realizations about coding and I wish more people would take such approach.
Just one minor clarification: The GUI should not be a wrapper around the command line. Instead one should be able to drive the core of the program from either a GUI or a command line. At least at the beginning and just basic operations.
When is this a great idea?
When you want to make sure that your domain implementation is independent of the GUI framework. You want to code around the framework not into the framework
When is this a bad idea?
When you are sure your framework will never die
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18984",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: How can I beautify JavaScript code using Command Line? I am writing a batch script in order to beautify JavaScript code. It needs to work on both Windows and Linux.
How can I beautify JavaScript code using the command line tools?
A: First, pick your favorite Javascript based Pretty Print/Beautifier. I prefer the one at http://jsbeautifier.org/, because it's what I found first. Download its file https://github.com/beautify-web/js-beautify/blob/master/js/lib/beautify.js
Second, download and install The Mozilla group's Java based Javascript engine, Rhino. "Install" is a little bit misleading; Download the zip file, extract everything, place js.jar in your Java classpath (or Library/Java/Extensions on OS X). You can then run scripts with an invocation similar to this
java -cp js.jar org.mozilla.javascript.tools.shell.Main name-of-script.js
Use the Pretty Print/Beautifier from step 1 to write a small shell script that will read in your javascript file and run it through the Pretty Print/Beautifier from step one. For example
//original code
(function() { ... js_beautify code ... }());
//new code
print(global.js_beautify(readFile(arguments[0])));
Rhino gives javascript a few extra useful functions that don't necessarily make sense in a browser context, but do in a console context. The function print does what you'd expect, and prints out a string. The function readFile accepts a file path string as an argument and returns the contents of that file.
You'd invoke the above something like
java -cp js.jar org.mozilla.javascript.tools.shell.Main beautify.js file-to-pp.js
You can mix and match Java and Javascript in your Rhino run scripts, so if you know a little Java it shouldn't be too hard to get this running with text-streams as well.
A: UPDATE April 2014:
The beautifier has been rewritten since I answered this in 2010. There is now a python module in there, an npm Package for nodejs, and the jar file is gone. Please read the project page on github.com.
Python style:
$ pip install jsbeautifier
NPM style:
$ npm -g install js-beautify
to use it (this will print the beautified js file to the terminal, the main file remains unchanged):
$ js-beautify file.js
To make the changes take effect on the file, you should use this command:
$ js-beautify -r file.js
Original answer
Adding to Answer of @Alan Storm
the command line beautifier based on http://jsbeautifier.org/ has gotten a bit easier to use, because it is now (alternatively) based on the V8 javascript engine (c++ code) instead of rhino (java-based JS engine, packaged as "js.jar"). So you can use V8 instead of rhino.
How to use:
download jsbeautifier.org zip file from
http://github.com/einars/js-beautify/zipball/master
(this is a download URL linked to a zip file such as http://download.github.com/einars-js-beautify-10384df.zip)
old (no longer works, jar file is gone)
java -jar js.jar name-of-script.js
new (alternative)
install/compile v8 lib FROM svn, see v8/README.txt in above-mentioned zip file
./jsbeautify somefile.js
It has slightly different command line options than the rhino version, and works great in Eclipse when configured as an "External Tool".
A: You have a few one liner choices. Use with npm or standalone with npx.
Semistandar
npx semistandard "js/**/*.js" --fix
Standard
npx standard "js/**/*.js" --fix
Prettier
npx prettier --single-quote --write --trailing-comma all "js/**/*.js"
A: In the console, you can use Artistic Style (a.k.a. AStyle) with --mode=java.
It works great and it's free, open-source and cross-platform (Linux, Mac OS X, Windows).
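For example, an invocation along these lines should work (check your AStyle version's documentation for the exact options):
astyle --mode=java my-script.js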
A: If you're using nodejs then try uglify-js
On Linux or Mac, assuming you already have nodejs installed, you can install uglify with:
sudo npm install -g uglify-js
And then get the options:
uglifyjs -h
So if I have a source file foo.js which looks like this:
// foo.js -- minified
function foo(bar,baz){console.log("something something");return true;}
I can beautify it like so:
uglifyjs foo.js --beautify --output cutefoo.js
uglify uses spaces for indentation by default so if I want to convert the 4-space-indentation to tabs I can run it through unexpand which Ubuntu 12.04 comes with:
unexpand --tabs=4 cutefoo.js > cuterfoo.js
Or you can do it all in one go:
uglifyjs foo.js --beautify | unexpand --tabs=4 > cutestfoo.js
You can find out more about unexpand here
so after all this I wind up with a file that looks like so:
function foo(bar, baz) {
console.log("something something");
return true;
}
update 2016-06-07
It appears that the maintainer of uglify-js is now working on version 2 though installation is the same.
A: I'm not able to add a comment to the accepted answer, which is why you see a post that should not have existed in the first place.
Basically I also needed a javascript beautifier in Java code, and to my surprise none was available as far as I could find. So I coded one myself, based entirely on the accepted answer (it wraps the jsbeautifier.org beautifier .js script but is callable from Java or the command line).
The code is located at https://github.com/belgampaul/JsBeautifier
I used rhino and beautifier.js
USAGE from console: java -jar jsbeautifier.jar script indentation
example: java -jar jsbeautifier.jar "function ff() {return;}" 2
USAGE from java code:
public static String jsBeautify(String jsCode, int indentSize)
You are welcome to extend the code. In my case I only needed the indentation so I could check the generated javascript while developing.
In the hope it'll save you some time in your project.
A: Use the modern JavaScript way:
Use Grunt in combination with the jsbeautifier plugin for Grunt
You can install everything easily into your dev environment using npm.
All you will need to do is set up a Gruntfile.js with the appropriate tasks, which can also involve file concatenation, lint, uglify, minify etc., and run the grunt command.
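A minimal Gruntfile.js might look roughly like this (this assumes the grunt-jsbeautifier plugin; task and option names can differ between plugin versions):
module.exports = function(grunt) {
  grunt.initConfig({
    jsbeautifier: {
      files: ["src/**/*.js"],   // adjust the glob to your project layout
      options: {
        js: { indent_size: 4 }
      }
    }
  });

  grunt.loadNpmTasks("grunt-jsbeautifier");
  grunt.registerTask("default", ["jsbeautifier"]);
};
Then npm install grunt grunt-jsbeautifier --save-dev and run grunt from the project root.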
A: On Ubuntu LTS
$ sudo apt install jsbeautifier
$ js-beautify ugly.js > beautiful.js
For in place beautifying, any of the follwing commands:
$ js-beautify -r file.js
$ js-beautify --replace file.js
A: I've written an article explaining how to build a command-line JavaScript beautifier implemented in JavaScript in under 5 minutes. YMMV.
*
*Download the latest stable Rhino and unpack it somewhere, e.g. ~/dev/javascript/rhino
*Download beautify.js which is referenced from aforementioned jsbeautifier.org then copy it somewhere, e.g. ~/dev/javascript/bin/cli-beautifier.js
*Add this at the end of beautify.js (it uses some of the additional top-level functions Rhino adds to JavaScript):
// Run the beautifier on the file passed as the first argument.
print( js_beautify( readFile( arguments[0] )));
*Copy-paste the following code in an executable file, e.g. ~/dev/javascript/bin/jsbeautifier.sh:
#!/bin/sh
java -cp ~/dev/javascript/rhino/js.jar org.mozilla.javascript.tools.shell.Main ~/dev/javascript/bin/cli-beautifier.js $*
*(optional) Add the folder with jsbeautifier.js to PATH or moving to some folder already there.
A: I believe when you asked about a command line tool you just wanted to beautify all your js files in batch.
In this case Intellij IDEA (tested with 11.5) can do this.
You just need to select any of your project files and select "Code"->"Reformat code.." in main IDE menu. Then in the dialog select "all files in directory ..." and press "enter".
Just make sure you dedicated enough memory for the JVM.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18985",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "131"
} |
Q: Best practice for storing large amounts of data with J2ME I am developing a J2ME application that has a large amount of data to store on the device (in the region of 1MB, but variable). I can't rely on the file system, so I'm stuck with the Record Management System (RMS), which allows multiple record stores, but each has a limited size. My initial target platform, Blackberry, limits each to 64KB.
I'm wondering if anyone else has had to tackle the problem of storing a large amount of data in the RMS and how they managed it? I'm thinking of having to calculate record sizes and split one data set across multiple stores if it's too large, but that adds a lot of complexity to keep it intact.
There is lots of different types of data being stored but only one set in particular will exceed the 64KB limit.
A: For anything past a few kilobytes you need to use either JSR 75 or a remote server. RMS records are extremely limited in size and speed, even in some higher end handsets. If you need to juggle 1MB of data in J2ME the only reliable, portable way is to store it on the network. The HttpConnection class and the GET and POST methods are always supported.
On the handsets that support JSR 75 FileConnection it may be a valid alternative, but without code signing it is a user experience nightmare. Almost every single API call will invoke a security prompt with no blanket permission choice. Companies that deploy apps with JSR 75 usually need half a dozen binaries for every port just to cover a small part of the possible certificates. And this is just for the manufacturer certificates; some handsets only have carrier-locked certificates.
A: RMS performance and implementation varies wildly between devices, so if platform portability is a problem, you may find that your code works well on some devices and not others. RMS is designed to store small amounts of data (High score tables, or whatever) not large amounts.
You might find that some platforms are faster with files stored in multiple record stores. Some are faster with multiple records within one store. Many are ok for storage, but become unusably slow when deleting large amounts of data from the store.
Your best bet is to use JSR-75 instead where available, and create your own file store interface that falls back to RMS if nothing better is supported.
Unfortunately when it comes to JavaME, you are often drawn into writing device-specific variants of your code.
A: I think the most flexible approach would be to implement your own file system on top of the RMS. You can handle the RMS records in a similar way as blocks on a hard drive and use a inode structure or similar to spread logical files over multiple blocks. I would recommend implementing a byte or stream-oriented interface on top of the blocks, and then possibly making another API layer on top of that for writing special data structures (or simply make your objects serializable to the data stream).
Tanenbaum's classical book on operating systems covers how to implement a simple file system, but I am sure you can find other resources online if you don't like paper.
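As a rough illustration of the chunking part of this idea, here is a Java ME sketch. The store naming, the 4 KB record size, the records-per-store count and the lack of an index record are simplifying assumptions; real code would also have to remember which stores and records make up each logical file:
import javax.microedition.rms.RecordStore;
import javax.microedition.rms.RecordStoreException;

public class ChunkedStore {
    private static final int CHUNK = 4096;    // assumed record size
    private static final int PER_STORE = 14;  // ~56KB per store, under the 64KB cap

    // Spread one logical blob over several record stores, several records each.
    public static void writeBlob(String baseName, byte[] data) throws RecordStoreException {
        int chunkIndex = 0;
        for (int off = 0; off < data.length; off += CHUNK, chunkIndex++) {
            // Opening/closing per chunk is slow but keeps the sketch simple.
            String storeName = baseName + "_" + (chunkIndex / PER_STORE);
            RecordStore rs = RecordStore.openRecordStore(storeName, true);
            try {
                int len = Math.min(CHUNK, data.length - off);
                rs.addRecord(data, off, len);
            } finally {
                rs.closeRecordStore();
            }
        }
    }
}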
A: Under Blackberry OS 4.6 the RMS store size limit has been increased to 512Kb but this isn't much help as many devices will likely not have support for 4.6. The other option on Blackberry is the Persistent Store which has a record size limit of 64kb but no limit on the size of the store (other than the physical limits of the device).
I think Carlos and izb are right.
A: It is quite simple, use JSR75 (FileConnection) and remember to sign your midlet with a valid (trusted) certificate.
A: For read only I'm arriving at acceptable times (within 10s), by indexing a resource file. I've got two ~800KB CSV price list exports. Program classes and both those files compress to a 300KB JAR.
On searching I display a List and run a two Threads in the background to fill it, so the first results come pretty quickly and are viewable immediately. I first implemented a simple linear search, but that was too slow (~2min).
Then I indexed the file (which is alphabetically sorted) to find the beginnings of each letter. Now, before parsing line by line, I first InputStreamReader.skip() to the desired position, based on the first letter. I suspect the delay comes mostly from decompressing the resource, so splitting the resources would speed it up further. I don't want to do that, so as not to lose the advantage of easy upgrades. The CSVs are exported without any preprocessing.
A: I'm just starting to code for JavaME, but have experience with old versions of PalmOS, where all data chunks are limited in size, requiring the design of data structures using record indexes and offsets.
A: Thanks everyone for the useful comments. In the end the simplest solution was to limit the amount of data being stored, implementing code that adjusts the data according to how large the store is, and fetching data from the server on demand if it's not stored locally. It's interesting that the limit is increased in OS 4.6; with any luck my code will simply adjust on its own and store more data :)
Developing a J2ME application for Blackberry without using the .cod compiler limits the use of JSR 75 somewhat, since we can't sign the archive. As pointed out by Carlos this is a problem on any platform and I've had similar issues using the PIM part of it. The RMS seems to be incredibly slow on the Blackberry platform so I'm not sure how useful an inode/b-tree file system on top would be, unless data was cached in memory and written to RMS in a low-priority background thread.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19011",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Using Lucene to search for email addresses I want to use Lucene (in particular, Lucene.NET) to search for email address domains.
E.g. I want to search for "@gmail.com" to find all emails sent to a gmail address.
Running a Lucene query for "*@gmail.com" results in an error, asterisks cannot be at the start of queries. Running a query for "@gmail.com" doesn't return any matches, because "foo@gmail.com" is seen as a whole word, and you cannot search for just parts of a word.
How can I do this?
A: I see you have your solution, but mine would have avoided this and added a field to the documents you're indexing called email_domain, into which I would have added the parsed out domain of the email address. It might sound silly, but the amount of storage associated with this is pretty minimal. If you feel like getting fancier, say some domain had many subdomains, you could instead make a field into which the reversed domain went, so you'd store com.gmail, com.company.department, or ae.eim so you could find all the United Arab Emirates related addresses with a prefix query of 'ae.'
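For illustration, an indexing sketch along those lines in Lucene.NET (the field names are made up and the Field/Index enums follow the 2.x/3.x-era API, so adjust to whatever version you are on):
using Lucene.Net.Documents;
using Lucene.Net.Index;
using Lucene.Net.Search;

// Hypothetical: pull the domain out of the address at index time and store it
// in its own untokenized field, so no wildcard is needed at query time.
string address = "foo@gmail.com";
string domain = address.Substring(address.IndexOf('@') + 1).ToLowerInvariant();

Document doc = new Document();
doc.Add(new Field("email", address, Field.Store.YES, Field.Index.NOT_ANALYZED));
doc.Add(new Field("email_domain", domain, Field.Store.YES, Field.Index.NOT_ANALYZED));
// ... add the rest of the message fields and pass doc to your IndexWriter ...

// Finding every gmail address is then an exact term query:
Query query = new TermQuery(new Term("email_domain", "gmail.com"));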
A: There also is setAllowLeadingWildcard
But be careful. This can be very expensive performance-wise (that's why it is disabled by default). Maybe in some cases this would be an easy solution, but I would prefer a custom Tokenizer as stated by Judah Himango, too.
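A hypothetical sketch of how that might look (the exact constructor and method names differ between Lucene.NET versions, so treat these as assumptions):
using Lucene.Net.Analysis.Standard;
using Lucene.Net.QueryParsers;
using Lucene.Net.Search;

// Assumed API: enable leading wildcards on the parser, then query as usual.
QueryParser parser = new QueryParser("email", new StandardAnalyzer());
parser.SetAllowLeadingWildcard(true);
Query query = parser.Parse("*@gmail.com");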
A: No one gave a satisfactory answer, so we started poking around Lucene documentation and discovered we can accomplish this using custom Analyzers and Tokenizers.
The answer is this: create a WhitespaceAndAtSymbolTokenizer and a WhitespaceAndAtSymbolAnalyzer, then recreate your index using this analyzer. Once you do this, a search for "@gmail.com" will return all gmail addresses, because it's seen as a separate word thanks to the Tokenizer we just created.
Here's the source code, it's actually very simple:
class WhitespaceAndAtSymbolTokenizer : CharTokenizer
{
    public WhitespaceAndAtSymbolTokenizer(TextReader input)
        : base(input)
    {
    }

    protected override bool IsTokenChar(char c)
    {
        // Make whitespace characters and the @ symbol be indicators of new words.
        return !(char.IsWhiteSpace(c) || c == '@');
    }
}

internal class WhitespaceAndAtSymbolAnalyzer : Analyzer
{
    public override TokenStream TokenStream(string fieldName, TextReader reader)
    {
        return new WhitespaceAndAtSymbolTokenizer(reader);
    }
}
That's it! Now you just need to rebuild your index and do all searches using this new Analyzer. For example, to write documents to your index:
IndexWriter index = new IndexWriter(indexDirectory, new WhitespaceAndAtSymbolAnalyzer());
index.AddDocument(myDocument);
Performing searches should use the analyzer as well:
IndexSearcher searcher = new IndexSearcher(indexDirectory);
Query query = new QueryParser("TheFieldNameToSearch", new WhitespaceAndAtSymbolAnalyzer()).Parse("@gmail.com");
Hits hits = searcher.Search(query);
A: You could a separate field that indexes the email address reversed:
Index 'foo@gmail.com' as 'moc.liamg@oof'
Which enables you to do a query for "moc.liamg@*"
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19014",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How to check set of files conform to a naming scheme I have a bunch of files (TV episodes, although that is fairly arbitrary) that I want to check match a specific naming/organisation scheme.
Currently: I have three arrays of regexes: one for valid filenames, one for files missing an episode name, and one for valid paths.
Then, I loop through each valid-filename regex; if it matches, append it to a "valid" dict; if not, do the same with the missing-ep-name regexes; if one of those matches I append it to an "invalid" dict with an error code (2: 'missing episode name'); if it matches neither, it gets added to invalid with the 'malformed name' error code.
The current code can be found here
I want to add a rule that checks for the presence of a folder.jpg file in each directory, but adding this would make the code substantially messier in its current state.
How could I write this system in a more expandable way?
The rules it needs to check would be..
*
*File is in the format Show Name - [01x23] - Episode Name.avi or Show Name - [01xSpecial02] - Special Name.avi or Show Name - [01xExtra01] - Extra Name.avi
*If the filename is in the format Show Name - [01x23].avi, display it in a 'missing episode name' section of the output
*The path should be in the format Show Name/season 2/the_file.avi (where season 2 should be the correct season number in the filename)
*each Show Name/season 1/ folder should contain "folder.jpg"
Any ideas? While I'm trying to check TV episodes, this concept/code should be able to apply to many things.
The only thought I had was a list of dicts in the format:
checker = [
    {
        'name': 'valid files',
        'type': 'file',
        'function': check_valid,  # runs check_valid() on all files
        'status': 0  # if it returns True, this is the status the file gets
    }
]
A:
I want to add a rule that checks for
the presence of a folder.jpg file in
each directory, but to add this would
make the code substantially more messy
in it's current state..
This doesn't look bad. In fact your current code does it very nicely, and Sven mentioned a good way to do it as well:
*
*Get a list of all the files
*Check for "required" files
You would just have to add a list of required files to your dictionary:
checker = {
...
'required': ['file', 'list', 'for_required']
}
As far as there being a better/extensible way to do this? I am not exactly sure. I could only really think of a way to possibly drop the "multiple" regular expressions and build off of Sven's idea of using a delimiter. So my strategy would be defining a dictionary as follows (and I'm sorry, I don't know Python syntax and I'm a tad too lazy to look it up, but it should make sense; the /regex/ is shorthand for a regex):
check_dict = {
'delim' : /\-/,
'parts' : [ 'Show Name', 'Episode Name', 'Episode Number' ],
'patterns' : [/valid name/, /valid episode name/, /valid number/ ],
'required' : ['list', 'of', 'files'],
'ignored' : ['.*', 'hidden.txt'],
'start_dir': '/path/to/dir/to/test/'
}
*
*Split the filename based on the delimiter.
*Check each of the parts.
Because it's an ordered list, you can determine what parts are missing, and if a section doesn't match any pattern it is malformed. Here the parts and patterns have a 1 to 1 ratio. Two arrays instead of a dictionary enforces the order.
Ignored and required files can be listed. The . and .. files should probably be ignored automatically. The user should be allowed to input "globs" which can be shell expanded. I'm thinking here of svn:ignore properties, but globbing is natural for listing files.
Here start_dir would default to the current directory, but if you wanted a single file to run automated testing of a bunch of directories this would be useful.
The real loose end here is the path template and along the same lines what path is required for "valid files". I really couldn't come up with a solid idea without writing one large regular expression and taking groups from it... to build a template. It felt a lot like writing a TextMate language grammar. But that starts to stray on the ease of use. The real problem was that the path template was not composed of parts, which makes sense but adds complexity.
Is this strategy in tune with what you were thinking of?
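A rough, hypothetical Python sketch of that strategy (the delimiter, part names and patterns are only illustrative, not a drop-in for the original script):
import re

check_dict = {
    'delim': ' - ',
    'parts': ['show name', 'episode number', 'episode name'],
    'patterns': [re.compile(r'^[\w ]+$'),
                 re.compile(r'^\[\d{2}x(\d{2}|Special\d{2}|Extra\d{2})\]$'),
                 re.compile(r'^[\w ,:\']+\.avi$')],
    'required': ['folder.jpg'],
}

def check_file(filename):
    """Return a list of problems; an empty list means the name is valid."""
    parts = filename.split(check_dict['delim'])
    if len(parts) != len(check_dict['parts']):
        return ['malformed name']
    errors = []
    for name, pattern, value in zip(check_dict['parts'],
                                    check_dict['patterns'], parts):
        if not pattern.match(value):
            errors.append('bad %s: %r' % (name, value))
    return errors

print(check_file("Show Name - [01x23] - Episode Name.avi"))  # -> []
print(check_file("Show Name - [01x23].avi"))                 # -> ['malformed name']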
A: Maybe you should take the approach of defaulting to "the filename is correct" and work from there to disprove that statement:
Given that you only allow filenames with 'show name', 'season number x episode number' and 'episode name', you know for certain that these items should be separated by a "-" (dash), so you have to have 2 of those for a filename to be correct.
If that checks out, you can use your code to check that the show name matches the show name as seen in the parent's parent folder (case insensitive, I assume), and that the season number matches the parent folder's numeric value (with or without an extra 0 prepended).
If, however, you don't see the correct number of dashes you instantly know that there is something wrong and can stop before the rest of the tests, etc.
Separately, you can check if the file folder.jpg exists and take the necessary actions, or do that first and filter that file from the rest of the files in that folder.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19030",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: JavaScript Load Order I am working with both amq.js (ActiveMQ) and Google Maps. I load my scripts in this order
<head>
<meta http-equiv="content-type" content="text/html;charset=UTF-8" />
<title>AMQ & Maps Demo</title>
<!-- Stylesheet -->
<link rel="stylesheet" type="text/css" href="style.css"></link>
<!-- Google APIs -->
<script type="text/javascript" src="http://www.google.com/jsapi?key=abcdefg"></script>
<!-- Active MQ -->
<script type="text/javascript" src="amq/amq.js"></script>
<script type="text/javascript">amq.uri='amq';</script>
<!-- Application -->
<script type="text/javascript" src="application.js"></script>
</head>
However in my application.js it loads Maps fine but I get an error when trying to subscribe to a Topic with AMQ. AMQ depends on prototype which the error console in Firefox says object is not defined. I think I have a problem with using the amq object before the script is finished loading. Is there a way to make sure both scripts load before I use them in my application.js?
Google has this nice function call google.setOnLoadCallback(initialize); which works great. I'm not sure amq.js has something like this.
A: in jquery you can use:
$(document).ready(function(){/*do stuff here*/});
which makes sure the javascript is loaded and the dom is ready before doing your stuff.
in prototype it looks like this might work
document.observe("dom:loaded", function() {/*do stuff here*/});
If I understand your problem correctly.. I think that may help..
If you don't want to rely on a lib to do this... I think this might work:
<script>
function doIt() {/*do stuff here*/}
</script>
<body onLoad="doIt();"></body>
A: I had a similar problem to this, only with a single script. The solution I came up with was to use addEventListener("load",fn,false) on a script object created using document.createElement('script'). Here is the final function, which loads any standard JS file and lets you add a "post load" script.
function addJavaScript( js, onload ) {
var head, script;
head = document.getElementsByTagName('head')[0];
if (!head) { return; }
script = document.createElement('script');
script.type = 'text/javascript';
script.src = js;
script.addEventListener( "load", onload, false );
head.appendChild(script);
}
I hope this may help someone in the future.
A: Cross-domain scripts are loaded after the site's own scripts, which is why you get errors. Interestingly, nobody here seems to know this.
A:
Is there a way to make sure both scripts load before I use them?
Yes.
Put the code you want loaded last (your application.js stuff) into prototype's document.observe. This should ensure that the code will load only after prototype + other stuff is finished and ready. (If you are familiar with jQuery, this function is similar to jQuery's $(document).ready )
A:
Is there a way to make sure both scripts load before I use them in my application.js?
JavaScript files should load sequentially and block so unless the scripts you are depending on are doing something unusual all you should need to do is load application.js after the other files.
Non-blocking JavaScript Downloads has some information about how scripts load (and discusses some techniques to subvert the blocking).
A:
AMQ depends on prototype which the error console in FireFox says object is not defined.
Do you mean that AMQ depends on the Prototype library? I can't see an import for that library in the code you've provided.
A:
Do you mean that AMQ depends on the
Prototype library? I can't see an
import for that library in the code
you've provided.
Yes, ActiveMQ's JavaScript (amq.js) does depend on Prototype. amq.js itself loads 3 scripts: _amq.js, behaviour.js and prototype.js.
Thank you for your help on the JavaScript load order, wrumsby. This tells me that my bug is in another castle :(
I guess I have a different problem. I also checked the js files from ActiveMQ 5.0 to 5.1 and noticed they were the same as well. Something has changed in 5.0 to 5.1 that requires a refresh for the topics to subscribe. I'll keep looking, but thanks for eliminating this possible cause.
A: You can also use the built in SharePoint javascript method to control the execution of your scripts;
_spBodyOnLoadFunctionNames.push("yourFunction");
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19035",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30"
} |
Q: TortoiseSVN side-by-side configuration is incorrect After upgrading to the latest version of TortoiseSVN (1.5.2.13595), its context menu is no longer available.
When attempting to run it manually, I get this error:
The application has failed to start because its side-by-side configuration is incorrect.
Please see the application event log for more detail
The application log shows this
Activation context generation failed for "C:\Program Files\TortoiseSVN\bin\TortoiseSVN.dll".
Dependent Assembly Microsoft.VC90.CRT,processorArchitecture="x86",publicKeyToken="1fc8b3b9a1e18e3b",type="win32",version="9.0.30411.0" could not be found.
Please use sxstrace.exe for detailed diagnosis.
A: I remembered I'd seen this thing before just after posting to SO
It seems that later versions of TortoiseSVN are built with Visual Studio 2008 SP1 (hence the 9.0.30411.0 build number)
Installing the VC2008 SP1 Redistributable fixes it
A: Confirmed working on windows 7 x64.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19047",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: What is the difference between oracle's 'yy' and 'rr' date mask? Example:
select ename from emp where hiredate = to_date('01/05/81','dd/mm/yy')
and
select ename from emp where hiredate = to_date('01/05/81','dd/mm/rr')
return different results
A: y2k compatibility. rr assumes 01 to be 2001, yy assumes 01 to be 1901
see: http://www.oradev.com/oracle_date_format.jsp
edit: damn! michael "quickfingers" stum beat me to it!
/mp
A: @Michael Stum
My last Oracle experience is a bit long ago
uhm, was it, before 2000? :p
...
Will yy always assume 19xx?
according to your source, we get the following scenarios:
USING    ENTERED      STORED         SELECT of date column
YY       22-FEB-01    22-FEB-1901    22-FEB-01
YYYY     22-FEB-01    22-FEB-0001    22-FEB-0001
RR       22-FEB-01    22-FEB-2001    22-FEB-01
RRRR     22-FEB-01    22-FEB-2001    22-FEB-2001
/mp
A: http://oracle.ittoolbox.com/groups/technical-functional/oracle-dev-l/difference-between-yyyy-and-rrrr-format-519525
YY allows you to retrieve just two digits of a year, for example, the 99 in 1999. The other digits (19) are automatically assigned to the current century. RR converts two-digit years into four-digit years by rounding: 50-99 are stored as 1950-1999, and dates ending in 00-49 are stored as 2000-2049. RRRR accepts a four-digit input (although not required), and converts two-digit dates as RR does. YYYY accepts 4-digit inputs but doesn't do any date converting.
Essentially, your first example will assume that 81 is 2081 whereas the RR one assumes 1981. So the first example should not return any rows as you most likely did not hire any guys after May 1 2081 yet :-)
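A quick way to see the difference for yourself (the exact years shown assume the query is being run some time in the 2000s):
SELECT TO_CHAR(TO_DATE('01/05/81', 'dd/mm/yy'), 'DD-MON-YYYY') AS with_yy,
       TO_CHAR(TO_DATE('01/05/81', 'dd/mm/rr'), 'DD-MON-YYYY') AS with_rr
  FROM dual;
-- with_yy: 01-MAY-2081    with_rr: 01-MAY-1981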
A: RR converts a two-digit year to four digits: values 00-49 are treated as 20xx (so 15 becomes 2015), and values 50-99 are treated as 19xx (so 99 becomes 1999).
A: About RR or RRRR
When we insert dates with 2-digit years (e.g. 09/oct/15), Oracle may change the century automatically, hence the solution is to use 4-digit years. But since full 4-digit handling only arrived in more recent versions, the workaround for this problem in earlier versions was RR or RRRR. Note that it only works with the TO_DATE() function, not with the TO_CHAR() function.
Whenever inserts/updates are performed on dates, we should always be aware of the current date on the server clock at the time of the date translation, since Oracle performs every date translation by contacting the server.
In order to keep consistency across the centuries, it is always better to perform the date translation with 4-digit years.
About YY or YYYY
It accepts the dates but doesn't have the functionality to automatically change the century.
This image shows the behaviour when inserting a date with a two-digit year (e.g. 09/oct/15).
A: With RR, 90 is interpreted as 1990, whereas yy assumes 90 to be 2090, since yy always uses the current century (and we are currently in the 2000s).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19058",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
} |
Q: How do I do an Upsert Into Table? I have a view that has a list of jobs in it, with data like who they're assigned to and the stage they are in. I need to write a stored procedure that returns how many jobs each person has at each stage.
So far I have this (simplified):
DECLARE @ResultTable table
(
StaffName nvarchar(100),
Stage1Count int,
Stage2Count int
)
INSERT INTO @ResultTable (StaffName, Stage1Count)
SELECT StaffName, COUNT(*) FROM ViewJob
WHERE InStage1 = 1
GROUP BY StaffName
INSERT INTO @ResultTable (StaffName, Stage2Count)
SELECT StaffName, COUNT(*) FROM ViewJob
WHERE InStage2 = 1
GROUP BY StaffName
The problem with that is that the rows don't combine. So if a staff member has jobs in stage1 and stage2 there's two rows in @ResultTable. What I would really like to do is to update the row if one exists for the staff member and insert a new row if one doesn't exist.
Does anyone know how to do this, or can suggest a different approach?
I would really like to avoid using cursors to iterate on the list of users (but that's my fall back option).
I'm using SQL Server 2005.
Edit: @Lee: Unfortunately the InStage1 = 1 was a simplification. It's really more like WHERE DateStarted IS NOT NULL and DateFinished IS NULL.
Edit: @BCS: I like the idea of doing an insert of all the staff first so I just have to do an update every time. But I'm struggling to get those UPDATE statements correct.
A: Actually, I think you're making it much harder than it is. Won't this code work for what you're trying to do?
SELECT StaffName, SUM(InStage1) AS 'JobsAtStage1', SUM(InStage2) AS 'JobsAtStage2'
FROM ViewJob
GROUP BY StaffName
A: You could just check for existence and use the appropriate command. I believe this really does use a cursor behind the scenes, but it's the best you'll likely get:
IF (EXISTS (SELECT * FROM MyTable WHERE StaffName = @StaffName))
begin
UPDATE MyTable SET ... WHERE StaffName = @StaffName
end
else
begin
INSERT MyTable ...
end
SQL2008 has a new MERGE capability which is cool, but it's not in 2005.
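For reference, when you do get onto SQL Server 2008 or later, the whole upsert can be expressed in one statement along these lines (just a sketch, reusing MyTable, @StaffName and the Stage1Count column from the snippets above):
MERGE MyTable AS target
USING (SELECT @StaffName AS StaffName) AS source
    ON target.StaffName = source.StaffName
WHEN MATCHED THEN
    UPDATE SET Stage1Count = target.Stage1Count + 1
WHEN NOT MATCHED THEN
    INSERT (StaffName, Stage1Count) VALUES (source.StaffName, 1);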
A: To get a real "upsert" type of query you need to use an if exists... type of thing, and this unfortunately means using a cursor.
However, you could run two queries, one to do your updates where there is an existing row, then afterwards insert the new one. I'd think this set-based approach would be preferable unless you're dealing exclusively with small numbers of rows.
A: IIRC there is some sort of "On Duplicate" (name might be wrong) syntax that lets you update if a row exists (MySQL)
Alternately some form of:
INSERT INTO @ResultTable (StaffName, Stage1Count, Stage2Count)
SELECT StaffName,0,0 FROM ViewJob
GROUP BY StaffName
UPDATE @ResultTable Stage1Count= (
SELECT COUNT(*) AS count FROM ViewJob
WHERE InStage1 = 1
@ResultTable.StaffName = StaffName)
UPDATE @ResultTable Stage2Count= (
SELECT COUNT(*) AS count FROM ViewJob
WHERE InStage2 = 1
@ResultTable.StaffName = StaffName)
A: The following query on your result table should combine the rows again. This is assuming that InStage1 and InStage2 are never both '1'.
select distinct(rt1.StaffName), rt2.Stage1Count, rt3.Stage2Count
from @ResultTable rt1
left join @ResultTable rt2 on rt1.StaffName=rt2.StaffName and rt2.Stage1Count is not null
left join @ResultTable rt3 on rt1.StaffName=rt3.StaffName and rt3.Stage2Count is not null
A: I managed to get it working with a variation of BCS's answer. It wouldn't let me use a table variable though, so I had to make a temp table.
CREATE TABLE #ResultTable
(
StaffName nvarchar(100),
Stage1Count int,
Stage2Count int
)
INSERT INTO #ResultTable (StaffName)
SELECT StaffName FROM ViewJob
GROUP BY StaffName
UPDATE #ResultTable SET
Stage1Count= (
SELECT COUNT(*) FROM ViewJob V
WHERE InStage1 = 1 AND
V.StaffName = #ResultTable.StaffName COLLATE Latin1_General_CI_AS
GROUP BY V.StaffName),
Stage2Count= (
SELECT COUNT(*) FROM ViewJob V
WHERE InStage2 = 1 AND
V.StaffName = #ResultTable.StaffName COLLATE Latin1_General_CI_AS
GROUP BY V.StaffName)
SELECT StaffName, Stage1Count, Stage2Count FROM #ResultTable
DROP TABLE #ResultTable
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19089",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: What .NET Mime Parsing libraries are available? I have a project that utilizes the javax.mail.internet.MimeMessage and other related classes that does mime parsing for emails that we receive. This needs to be ported to .NET.
What .Net 3rd party or built in library can I use to replace the Java classes that I'm using?
EDIT: Anything change in the last 9 months since I asked this question?
A: I've not used javax.mail.internet.MimeMessage, so I can't say how any of this compares, but .NET 2.0 and beyond does have a System.Net.Mime namespace which might have something useful for you.
Otherwise, I used Chilkat MIME .NET a long time ago and was happy with it.
A: SharpMimeTools, which is free and open source.
http://anmar.eu.org/projects/sharpmimetools/
It's what I use in my application, BugTracker.NET and it has been very dependable.
A: I have used both, and concur with Ryan that the System.Net.Mime and sibling namespaces provide very similar functionality. If anything, I think you'll find that the .Net APIs are cleaner and easier to work with.
A: I am in need of such a library, too. Looking for a mime processing library. I need to convert messages and attachments to PDF.
Here are some of the libraries I have found so far.
Open Source Libraries:
*
*SharpMime.NET
Commercial Libraries:
*
*Mime4Net
*Rebex
*Chilkat
*Aspose - the most expensive option that I see.
(would have added more links, but my account level prevents me from doing so)
I am still sorting through these, and have not tried one yet. Probably gonna start with SharpMime since it's open source. Mime4Net has some examples on their site. From what I see, none of these offer the conversion to PDF that I need, but there are other libraries I am looking at to fulfill that task.
A: I've recently released MimeKit which is far more robust than any of the other open source .NET MIME parser libraries out there and it's orders of magnitude faster as well due to the fact that it is an actual stream parser and not a recursive descent string parser (which also has the added benefit of it using a LOT less memory).
It has full support for S/MIME v3.2 (including compression, which none of the other libraries that claim "full" support actually support) and OpenPGP.
For SMTP, POP3, and IMAP you can use my MailKit library which supports a bunch of SASL authentication mechanisms including XOAUTH2 (used by Google). The SMTP client supports PIPELINING which can improve performance of sending mail and the IMAP client supports a growing number of extensions that allow clients to optimize their bandwidth as well.
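Basic parsing with MimeKit looks roughly like this (a small sketch; message.eml is just an example path):
using System;
using MimeKit;

class MimeKitDemo
{
    static void Main()
    {
        // Parse a MIME message from disk and inspect it
        var message = MimeMessage.Load("message.eml");
        Console.WriteLine(message.Subject);
        foreach (var attachment in message.Attachments)
            Console.WriteLine(attachment.ContentDisposition != null
                ? attachment.ContentDisposition.FileName
                : attachment.ContentType.Name);
    }
}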
A: You may try a S/MIME library included in our Rebex Secure Mail component.
Features include:
*
*high level API (MailMessage - as seen in email client)
*low level API (access to a MIME tree)
*autocorrecting code for mangled messages and for messages produced by misbehaving email clients
*ability to read TNEF (aka winmail.dat created by Outlook)
*S/MIME: sign/encrypt/decrypt messages
*supports both .NET and .NET CF
Check features, MailMessage tutorial and S/MIME tutorial. You can download it at www.rebex.net/secure-mail.net
A: Try using Mail.dll IMAP component, it's on the market for quite a while, and is well tested.
using(Imap imap = new Imap())
{
imap.Connect("imapServer");
imap.UseBestLogin("user", "password");
imap.SelectInbox();
List<long> uids = imap.SearchFlag(Flag.Unseen);
foreach (long uid in uids)
{
byte[] eml = imap.GetMessageByUID(uid);
IMail message = new MailBuilder()
.CreateFromEml(eml);
Console.WriteLine(message.Subject);
}
imap.Close();
}
Please note that Mail.dll is a commercial product that I've created.
You can download it here: http://www.limilabs.com/mail.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19096",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Should menu items always be enabled? And how do you tell the user? One of the things that has been talked about a few times on the podcast is whether menu items should always be enabled to prevent "WHY ISN'T THIS AVAILABLE!" frustration for the end user.
This strikes me as a good idea, but then there's the issue of communicating the lack of availability (and the reason why) to the user. Is there anything better than just popping up a message box with a blurb of text?
As I'm about to start on a fairly sizeable cross-platform Windows / Mac app I thought I'd throw this out to hear the wisdom of the SO crowd.
A: One thing I've seen a printer manufacturer do with their printer properties dialog is to have a little help balloon icon beside disabled items that displays a tooltip when hovered over.
Another thing you can do with disabled items is to add in parentheses why each is disabled or what the user would have to do to enable it. E.g., "Save (already saved)" or "Copy (select something to copy)".
I don't like keeping it enabled because then it will instill hesitation in users to select any menu item in fear that they'll just get an error message making them feel stupid for not realizing that they couldn't possibly perform that operation at the time.
Menu items that open dialogs have an ellipsis (...) after them to let users know it's not just click and carry on. Required form fields have an asterisk or bold label to spare the user from being scolded with a validation error message.
A: You have to consider the alternatives.
*
*Hide the menu item. This is bad. Now you have menu items disappearing and reappearing all the time?
*Disable the menu item. Now the user can find what they're looking for, it just isn't obvious how to enable it. This is better, but still leaves the user slightly puzzled.
*Keep the menu item enabled, but make it display a dialog that explains what needs to be done when the program is in a state where the menu item can't be properly used.
I agree with Joel on this one, #3 seems like the best choice.
A: Joel has a post on that http://www.joelonsoftware.com/items/2008/07/01.html which might be a good place to start thinking about this.
A: @Bill the Lizard: I'd combine #2 and #3 - disable the item, but have a tooltip that indicates why it is disabled.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19113",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Bash Pipe Handling Does anyone know how bash handles sending data through pipes?
cat file.txt | tail -20
Does this command print all the contents of file.txt into a buffer, which is then read by tail? Or does this command, say, print the contents of file.txt line by line, and then pause at each line for tail to process, and then ask for more data?
The reason I ask is that I'm writing a program on an embedded device that basically performs a sequence of operations on some chunk of data, where the output of one operation is sent off as the input of the next operation. I would like to know how Linux (bash) handles this, so please give me a general answer, not specifically what happens when I run "cat file.txt | tail -20".
EDIT: Shog9 pointed out a relevant Wikipedia Article, this didn't lead me directly to the article but it helped me find this: http://en.wikipedia.org/wiki/Pipeline_%28Unix%29#Implementation which did have the information I was looking for.
I'm sorry for not making myself clear. Of course you're using a pipe and of course you're using stdin and stdout of the respective parts of the command. I had assumed that was too obvious to state.
What I'm asking is how this is handled/implemented. Since both programs cannot run at once, how is data sent from stdin to stdout? What happens if the first program generates data significantly faster than the second program? Does the system just run the first command until either it's terminated or its stdout buffer is full, and then move on to the next program, and so on in a loop until no more data is left to be processed, or is there a more complicated mechanism?
A: I decided to write a slightly more detailed explanation.
The "magic" here lies in the operating system. Both programs do start up at roughly the same time, and run at the same time (the operating system assigns them slices of time on the processor to run) as every other simultaneously running process on your computer (including the terminal application and the kernel). So, before any data gets passed, the processes are doing whatever initialization is necessary. In your example, tail is parsing the '-20' argument and cat is parsing the 'file.txt' argument and opening the file.
At some point tail will get to the point where it needs input and it will tell the operating system that it is waiting for input. At some other point (either before or after, it doesn't matter) cat will start passing data to the operating system using stdout. This goes into a buffer in the operating system. The next time tail gets a time slice on the processor after some data has been put into the buffer by cat, it will retrieve some amount of that data (or all of it), which leaves the buffer on the operating system. When the buffer is empty, at some point tail will have to wait for cat to output more data. If cat is outputting data much faster than tail is handling it, the buffer will expand.
cat will eventually be done outputting data, but tail will still be processing, so cat will close and tail will process all remaining data in the buffer. The operating system will signal tail that there is no more incoming data with an EOF. Tail will process the remaining data. In this case, tail is probably just receiving all the data into a circular buffer of 20 lines, and when it is signalled by the operating system that there is no more incoming data, it then dumps the last twenty lines to its own stdout, which just gets displayed in the terminal. Since tail is a much simpler program than cat, it will likely spend most of the time waiting for cat to put data into the buffer.
On a system with multiple processors, the two programs will not just be sharing alternating time slices on the same processor core, but likely running at the same time on separate cores.
To get into a little more detail, if you open some kind of process monitor (operating system specific) like 'top' in Linux you will see a whole list of running processes, most of which are effectively using 0% of the processor. Most applications, unless they are crunching data, spend most of their time doing nothing. This is good, because it allows other processes to have unfettered access to the processor according to their needs. This is accomplished in basically three ways. A process could get to a sleep(n) style instruction where it basically tells the kernel to wait n milliseconds before giving it another time slice to work with. Most commonly a program needs to wait for something from another program, like 'tail' waiting for more data to enter the buffer. In this case the operating system will wake up the process when more data is available. Lastly, the kernel can preempt a process in the middle of execution, giving some processor time slices to other processes.
'cat' and 'tail' are simple programs. In this example, tail spends most of its time waiting for more data on the buffer, and cat spends most of its time waiting for the operating system to retrieve data from the harddrive. The bottleneck is the speed (or slowness) of the physical medium that the file is stored on. That perceptible delay you might detect when you run this command for the first time is the time it takes for the read heads on the disk drive to seek to the position on the harddrive where 'file.txt' is. If you run the command a second time, the operating system will likely have the contents of file.txt cached in memory, and you will not likely see any perceptible delay (unless file.txt is very large, or the file is no longer cached.)
Most operations you do on your computer are IO bound, which is to say that you are usually waiting for data to come from your harddrive, or from a network device, etc.
A: Shog9 already referenced the Wikipedia article, but the implementation section has the details you want. The basic implementation is a bounded buffer.
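For a feel of what sits underneath a pipeline at the system-call level, here is a minimal C sketch of a producer and consumer joined by a pipe (a real shell additionally uses dup2 and exec to wire the children's stdin/stdout to the pipe ends; error handling is omitted):
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    pipe(fd);                         /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {                /* child: the producer */
        close(fd[0]);
        const char *msg = "hello through the pipe\n";
        write(fd[1], msg, strlen(msg));   /* would block if the pipe buffer were full */
        close(fd[1]);
        _exit(0);
    }

    /* parent: the consumer */
    close(fd[1]);
    char buf[64];
    ssize_t n;
    while ((n = read(fd[0], buf, sizeof buf)) > 0)   /* blocks until data arrives or EOF */
        fwrite(buf, 1, (size_t)n, stdout);
    close(fd[0]);
    wait(NULL);
    return 0;
}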
A: cat will just print the data to standard out, which happens to be redirected to the standard in of tail. This can be seen in the man page of bash.
In other words, there is no pausing going on, tail is just reading from standard in and cat is just writing to standard out.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19122",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "31"
} |
Q: Best way to fix CSS/JS drop-down in IE7 when page includes Google Map I have a page using <ul> lists for navigation (Javascript changes the styling to display or not on mouseover).
This is working fine for me except in IE6 and IE7 when I have a Google Map on the page.
In this case the drop-down simply does not work. However, the page continues to work in FireFox 2.
I have done a little bit of research and discovered that this may be an example of the IE Select Box Bug, but I am not sure as the Google Map appears to be using a <div>, not an <iframe>.
Has anyone else encountered a problem similar to this, and if so do they have any recommendations on the best way to overcome this problem?
A: I don't know if this will fix your problem but you may want to try this solution at ccsplay.co.uk which fixes the problem of menus appearing underneath drop-down lists. I don't know if it will work for sure, but it's worth a shot.
A: I fixed a similar issue with drop-downs not appearing over flash movies in IE6/IE7/IE8 using this jQuery:
$(function () {
$("#primary-nav").appendTo("#footer");
});
Where primary-nav is the ID of the drop-down container element and footer is the ID of the last element on the page. I then used absolute positioning to relocate the dropdowns back to the top where they belong.
The reason this works is because IE respects source ordering more than it does the z-index. It still wasn't able to display over top of a Windows Media Player plugin though.
A: I believe that might happen because of an Active-X thingy IE 6+ uses to parse CSS.
Over time I had to adapt my work to include some IE hacks on my CSS in order for it to be compatible with several browsers.
I would first try to make a menu without Javascript, using pure CSS and including the hacks I mentioned. It would likely fix your problem. You don't actually need Javascript to change styles on mouseover and stuff like that.
If you want to check out what CSS hacking is about: click here
If you want to check out some pure CSS menu examples: click here
Hope this helps!
A: According to this google maps thread, you are correct - an IFrame is inserted by the google code.
You'll need to use the solution which Dan mentioned,
you may want to try this solution at ccsplay.co.uk which fixes the problem of menus appearing underneath drop-down lists
Alternatively, see Internet Explorer HACK/Fix For Select Box Showing through DIV.
Basically the solution is, using JavaScript, to place your css menu in an IFrame in IE6.
An alternative solution is to use JavaScript to hide the Google Map when the CSS menu is pulled down, or to replace the Google Map with a static map (maybe even a Google static map) when the CSS menu is pulled down.
A: I don't have an immediate answer for you, but the tools mentioned in this answer (particularly the IE DOM Inspector) may help.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Expression Versus Statement I'm asking with regards to c#, but I assume its the same in most other languages.
Does anyone have a good definition of expressions and statements and what the differences are?
A: For an explanation of important differences in composability (chainability) of expressions vs statements, my favorite reference is John Backus's Turing award paper, Can programming be liberated from the von Neumann style?.
Imperative languages (Fortran, C, Java, ...) emphasize statements for structuring programs, and have expressions as a sort of after-thought. Functional languages emphasize expressions. Purely functional languages have such powerful expressions than statements can be eliminated altogether.
A: Expressions can be evaluated to get a value, whereas statements don't return a value (they're of type void).
Function call expressions can also be considered statements of course, but unless the execution environment has a special built-in variable to hold the returned value, there is no way to retrieve it.
Statement-oriented languages require all procedures to be a list of statements. Expression-oriented languages, which are probably all functional languages, are lists of expressions, or in the case of LISP, one long S-expression that represents a list of expressions.
Although both types can be composed, most expressions can be composed arbitrarily as long as the types match up. Each type of statement has its own way of composing other statements, if it can do that at all. Foreach and if statements require either a single statement or that all subordinate statements go in a statement block, one after another, unless the substatements allow for their own substatements.
Statements can also include expressions, whereas an expression doesn't really include any statements. One exception, though, would be a lambda expression, which represents a function, and so can include anything a function can include, unless the language only allows for limited lambdas, like Python's single-expression lambdas.
In an expression-based language, all you need is a single expression for a function since all control structures return a value (a lot of them return NIL). There's no need for a return statement since the last-evaluated expression in the function is the return value.
A: Expression: Something which evaluates to a value. Example: 1+2/x
Statement: A line of code which does something. Example: GOTO 100
In the earliest general-purpose programming languages, like FORTRAN, the distinction was crystal-clear. In FORTRAN, a statement was one unit of execution, a thing that you did. The only reason it wasn't called a "line" was because sometimes it spanned multiple lines. An expression on its own couldn't do anything... you had to assign it to a variable.
1 + 2 / X
is an error in FORTRAN, because it doesn't do anything. You had to do something with that expression:
X = 1 + 2 / X
FORTRAN didn't have a grammar as we know it today—that idea was invented, along with Backus-Naur Form (BNF), as part of the definition of Algol-60. At that point the semantic distinction ("have a value" versus "do something") was enshrined in syntax: one kind of phrase was an expression, and another was a statement, and the parser could tell them apart.
Designers of later languages blurred the distinction: they allowed syntactic expressions to do things, and they allowed syntactic statements that had values.
The earliest popular language example that still survives is C. The designers of C realized that no harm was done if you were allowed to evaluate an expression and throw away the result. In C, every syntactic expression can be made into a statement just by tacking a semicolon on the end:
1 + 2 / x;
is a totally legit statement even though absolutely nothing will happen. Similarly, in C, an expression can have side-effects—it can change something.
1 + 2 / callfunc(12);
because callfunc might just do something useful.
Once you allow any expression to be a statement, you might as well allow the assignment operator (=) inside expressions. That's why C lets you do things like
callfunc(x = 2);
This evaluates the expression x = 2 (assigning the value of 2 to x) and then passes that (the 2) to the function callfunc.
This blurring of expressions and statements occurs in all the C-derivatives (C, C++, C#, and Java), which still have some statements (like while) but which allow almost any expression to be used as a statement (in C# only assignment, call, increment, and decrement expressions may be used as statements; see Scott Wisniewski's answer).
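A quick C# illustration of that restriction (Foo is just a placeholder method):
class ExpressionStatementDemo
{
    static void Foo() { }

    static void Main()
    {
        int x = 0;
        x = 1;      // assignment expression used as a statement - allowed
        x++;        // increment expression used as a statement - allowed
        Foo();      // invocation expression used as a statement - allowed
        // x + 1;   // not allowed: C# rejects this kind of expression as a statement
        // x == 1;  // not allowed either, even though it is a perfectly valid expression
    }
}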
Having two "syntactic categories" (which is the technical name for the sort of thing statements and expressions are) can lead to duplication of effort. For example, C has two forms of conditional, the statement form
if (E) S1; else S2;
and the expression form
E ? E1 : E2
And sometimes people want duplication that isn't there: in standard C, for example, only a statement can declare a new local variable—but this ability is useful enough that the
GNU C compiler provides a GNU extension that enables an expression to declare a local variable as well.
Designers of other languages didn't like this kind of duplication, and they saw early on that if expressions can have side effects as well as values, then the syntactic distinction between statements and expressions is not all that useful—so they got rid of it. Haskell, Icon, Lisp, and ML are all languages that don't have syntactic statements—they only have expressions. Even the classic structured looping and conditional forms are considered expressions, and they have values—but not very interesting ones.
A: Simply: an expression evaluates to a value, a statement doesn't.
A: Some things about expression based languages:
Most important: Everything returns a value
There is no difference between curly brackets and braces for delimiting code blocks and expressions, since everything is an expression. This doesn't prevent lexical scoping though: A local variable could be defined for the expression in which its definition is contained and all statements contained within that, for example.
In an expression based language, everything returns a value. This can be a bit strange at first -- What does (FOR i = 1 TO 10 DO (print i)) return?
Some simple examples:
*
*(1) returns 1
*(1 + 1) returns 2
*(1 == 1) returns TRUE
*(1 == 2) returns FALSE
*(IF 1 == 1 THEN 10 ELSE 5) returns 10
*(IF 1 == 2 THEN 10 ELSE 5) returns 5
A couple more complex examples:
*
*Some things, such as some function calls, don't really have a meaningful value to return (Things that only produce side effects?). Calling OpenADoor(), FlushTheToilet() or TwiddleYourThumbs() will return some sort of mundane value, such as OK, Done, or Success.
*When multiple unlinked expressions are evaluated within one larger expression, the value of the last thing evaluated in the large expression becomes the value of the large expression. To take the example of (FOR i = 1 TO 10 DO (print i)), the value of the for loop is "10", it causes the (print i) expression to be evaluated 10 times, each time returning i as a string. The final time through returns 10, our final answer
It often requires a slight change of mindset to get the most out of an expression based language, since the fact that everything is an expression makes it possible to 'inline' a lot of things
As a quick example:
FOR i = 1 to (IF MyString == "Hello, World!" THEN 10 ELSE 5) DO
(
LotsOfCode
)
is a perfectly valid replacement for the non expression-based
IF MyString == "Hello, World!" THEN TempVar = 10 ELSE TempVar = 5
FOR i = 1 TO TempVar DO
(
LotsOfCode
)
In some cases, the layout that expression-based code permits feels much more natural to me
Of course, this can lead to madness. As part of a hobby project in an expression-based scripting language called MaxScript, I managed to come up with this monster line
IF FindSectionStart "rigidifiers" != 0 THEN FOR i = 1 TO (local rigidifier_array = (FOR i = (local NodeStart = FindsectionStart "rigidifiers" + 1) TO (FindSectionEnd(NodeStart) - 1) collect full_array[i])).count DO
(
LotsOfCode
)
A: I am not really satisfied with any of the answers here. I looked at the grammar for C++ (ISO 2008). However maybe for the sake of didactics and programming the answers might suffice to distinguish the two elements (reality looks more complicated though).
A statement consists of zero or more expressions, but can also be other language concepts. This is the Extended Backus Naur form for the grammar (excerpt for statement):
statement:
labeled-statement
expression-statement <-- can be zero or more expressions
compound-statement
selection-statement
iteration-statement
jump-statement
declaration-statement
try-block
We can see the other concepts that are considered statements in C++.
*
*expression-statements is self-explaining (a statement can consist of zero or more expressions, read the grammar carefully, it's tricky)
*case for example is a labeled-statement
*selection-statements are if if/else, case
*iteration-statements are while, do...while, for (...)
*jump-statements are break, continue, return (can return expression), goto
*declaration-statement is the set of declarations
*try-block is statement representing try/catch blocks
*and there might be some more down the grammar
This is an excerpt showing the expressions part:
expression:
assignment-expression
expression "," assignment-expression
assignment-expression:
conditional-expression
logical-or-expression assignment-operator initializer-clause
throw-expression
*
*expressions are or contain often assignments
*conditional-expression (sounds misleading) refers to usage of the operators (+, -, *, /, &, |, &&, ||, ...)
*throw-expression - uh? the throw clause is an expression too
A: The de-facto basis of these concepts is:
Expressions: A syntactic category whose instance can be evaluated to a value.
Statement: A syntactic category whose instance may be involved with evaluations of an expression and the resulted value of the evaluation (if any) is not guaranteed available.
Besides to the very initial context for FORTRAN in the early decades, both definitions of expressions and statements in the accepted answer are obviously wrong:
*
*Expressions can be unevaluated operands. Values are never produced from them.
*
*Subexpressions in non-strict evaluations can be definitely unevaluated.
*
*Most C-like languages have the so-called short-circuit evaluation rules to conditionally skip some subexpression evaluations without changing the final result, in spite of the side effects.
*C and some C-like languages have the notion of unevaluated operand which may be even normatively defined in the language specification. Such constructs are used to avoid the evaluations definitely, so the remained context information (e.g. types or alignment requirements) can be statically distinguished without changing the behavior after the program translation.
*
*For example, an expression used as the operand of the sizeof operator is never evaluated.
*Statements have nothing to do with line constructs. They can do something more than expressions, depending on the language specifications.
*
*Modern Fortran, as the direct descendant of the old FORTRAN, has concepts of executable statements and nonexecutable statements.
*Similarly, C++ defines declarations as the top-level subcategory of a translation unit. A declaration in C++ is a statement. (This is not true in C.) There are also expression-statements like Fortran's executable statements.
*To the interest of the comparison with expressions, only the "executable" statements matter. But you can't ignore the fact that statements are already generalized to be constructs forming the translation units in such imperative languages. So, as you can see, the definitions of the category vary a lot. The (probably) only remained common property preserved among these languages is that statements are expected to be interpreted in the lexical order (for most users, left-to-right and top-to-bottom).
(BTW, I want to add [citation needed] to that answer concerning materials about C because I can't recall whether DMR has such opinions. It seems not, otherwise there should be no reasons to preserve the functionality duplication in the design of C: notably, the comma operator vs. the statements.)
(The following rationale is not the direct response to the original question, but I feel it necessary to clarify something already answered here.)
Nevertheless, it is doubtful that we need a specific category of "statements" in general-purpose programming languages:
*
*Statements are not guaranteed to have more semantic capabilities over expressions in usual designs.
*
*Many languages have already successfully abandoned the notion of statements to get clean, neat and consistent overall designs.
*
*In such languages, expressions can do everything old-style statements can do: just drop the unused results when the expressions are evaluated, either by leaving the results explicitly unspecified (e.g. in RnRS Scheme), or having a special value (as a value of a unit type) not producible from normal expression evaluations.
*The lexical order rules of evaluation of expressions can be replaced by explicit sequence control operator (e.g. begin in Scheme) or syntactic sugar of monadic structures.
*The lexical order rules of other kinds of "statements" can be derived as syntactic extensions (using hygienic macros, for example) to get the similar syntactic functionality. (And it can actually do more.)
*On the contrary, statements cannot have such conventional rules, because they don't compose on evaluation: there is just no such common notion of "substatement evaluation". (Even if any, I doubt there can be something much more than copy and paste from existed rules of evaluation of expressions.)
*
*Typically, languages preserving statements will also have expressions to express computations, and there is a top-level subcategory of the statements preserved to expression evaluations for that subcategory. For example, C++ has the so-called expression-statement as the subcategory, and uses the discarded-value expression evaluation rules to specify the general cases of full-expression evaluations in such context. Some languages like C# chooses to refine the contexts to simplify the use cases, but it bloats the specification more.
*For users of programming languages, the significance of statements may confuse them further.
*
*The separation of rules of expressions and statements in the languages requires more effort to learn a language.
*The naive lexical order interpretation hides the more important notion: expression evaluation. (This is probably most problematic over all.)
*
*Even the evaluations of full expressions in statements are constraint with the lexical order, subexpressions are not (necessarily). Users should ultimately learn this besides any rules coupled to the statements. (Consider how to make a newbie get the point that ++i + ++i is meaningless in C.)
*Some languages like Java and C# further constrain the order of evaluation of subexpressions to be permissive of ignorance of evaluation rules. This can be even more problematic.
*
*This seems overspecified to users who have already learned the idea of expression evaluation. It also encourages the user community to follow the blurred mental model of the language design.
*It bloats the language specification even more.
*It is harmful to optimization by missing the expressiveness of nondeterminism on evaluations, before more complicated primitives are introduced.
*A few languages like C++ (particularly, C++17) specify more subtle contexts of evaluation rules, as a compromise of the problems above.
*
*It bloats the language specification a lot.
*This goes totally against simplicity for average users...
So why statements? Anyway, the history is already a mess. It seems most language designers do not take their choice carefully.
Worse, it even gives some type system enthusiasts (who are not familiar enough with the PL history) some misconceptions that type systems must have important things to do with the more essential designs of rules on the operational semantics.
Seriously, reasoning depending on types is not that bad in many cases, but it is particularly not constructive in this special one. Even experts can screw things up.
For example, someone emphasizes the well-typing nature as the central argument against the traditional treatment of undelimited continuations. Although the conclusion is somewhat reasonable and the insights about composed functions are OK (but still far too naive about the essence), this argument is not sound because it totally ignores the "side channel" approach in practice like _Noreturn any_of_returnable_types (in C11) to encode Falsum. And strictly speaking, an abstract machine with unpredictable state is not identical to "a crashed computer".
A: *
*an expression is anything that yields a value: 2 + 2
*a statement is one of the basic "blocks" of program execution.
Note that in C, "=" is actually an operator, which does two things:
*
*returns the value of the right hand subexpression.
*copies the value of the right hand subexpression into the variable on the left hand side.
Here's an extract from the ANSI C grammar. You can see that C doesn't have many different kinds of statements... the majority of statements in a program are expression statements, i.e. an expression with a semicolon at the end.
statement
: labeled_statement
| compound_statement
| expression_statement
| selection_statement
| iteration_statement
| jump_statement
;
expression_statement
: ';'
| expression ';'
;
http://www.lysator.liu.se/c/ANSI-C-grammar-y.html
A: A statement is a special case of an expression, one with void type. The tendency of languages to treat statements differently often causes problems, and it would be better if they were properly generalized.
For example, in C# we have the very useful Func<T1, T2, T3, TResult> overloaded set of generic delegates. But we also have to have a corresponding Action<T1, T2, T3> set as well, and general purpose higher-order programming constantly has to be duplicated to deal with this unfortunate bifurcation.
Trivial example - a function that checks whether a reference is null before calling onto another function:
TResult IfNotNull<TValue, TResult>(TValue value, Func<TValue, TResult> func)
where TValue : class
{
return (value == null) ? default(TResult) : func(value);
}
Could the compiler deal with the possibility of TResult being void? Yes. All it has to do is require that return is followed by an expression that is of type void. The result of default(void) would be of type void, and the func being passed in would need to be of the form Func<TValue, void> (which would be equivalent to Action<TValue>).
A number of other answers imply that you can't chain statements like you can with expressions, but I'm not sure where this idea comes from. We can think of the ; that appears after statements as a binary infix operator, taking two expressions of type void and combining them into a single expression of type void.
A: Statements -> Instructions to follow sequentially
Expressions -> Evaluation that returns a value
Statements are basically like steps, or instructions in an algorithm, the result of the execution of a statement is the actualization of the instruction pointer (so-called in assembler)
Expressions do not imply an execution order at first sight; their purpose is to evaluate and return a value. In imperative programming languages the evaluation of an expression has an order, but that is just because of the imperative model; it is not their essence.
Examples of Statements:
for
goto
return
if
(all of them imply the advance of the line (statement) of execution to another line)
Example of expressions:
2+2
(it doesn't imply the idea of execution, but of the evaluation)
A: An expression is something that returns a value, whereas a statement does not.
For examples:
1 + 2 * 4 * foo.bar() //Expression
foo.voidFunc(1); //Statement
The Big Deal between the two is that you can chain expressions together, whereas statements cannot be chained.
A: You can find this on wikipedia, but expressions are evaluated to some value, while statements have no evaluated value.
Thus, expressions can be used in statements, but not the other way around.
Note that some languages (such as Lisp, and I believe Ruby, and many others) do not differentiate statement vs expression... in such languages, everything is an expression and can be chained with other expressions.
A: Statement,
A statement is a procedural building-block from which all C# programs are constructed. A statement can declare a local variable or constant, call a method, create an object, or assign a value to a variable, property, or field.
A series of statements surrounded by curly braces form a block of code. A method body is one example of a code block.
bool IsPositive(int number)
{
if (number > 0)
{
return true;
}
else
{
return false;
}
}
Statements in C# often contain expressions. An expression in C# is a fragment of code containing a literal value, a simple name, or an operator and its operands.
Expression,
An expression is a fragment of code that can be evaluated to a single value, object, method, or namespace. The two simplest types of expressions are literals and simple names. A literal is a constant value that has no name.
int i = 5;
string s = "Hello World";
Both i and s are simple names identifying local variables. When those variables are used in an expression, the value of the variable is retrieved and used for the expression.
A: I prefer the meaning of statement in the formal logic sense of the word. It is one that changes the state of one or more of the variables in the computation, enabling a true or false statement to be made about their value(s).
I guess there will always be confusion in the computing world and science in general when new terminology or words are introduced, existing words are 'repurposed' or users are ignorant of the existing, established or 'proper' terminology for what they are describing.
A: Here is a summary of one of the simplest answers I found.
Originally answered by Anders Kaseorg
A statement is a complete line of code that performs some action, while an expression is any section of the code that evaluates to a value.
Expressions can be combined “horizontally” into larger expressions using operators, while statements can only be combined “vertically” by writing one after another, or with block constructs.
Every expression can be used as a statement (whose effect is to evaluate the expression and ignore the resulting value), but most statements cannot be used as expressions.
http://www.quora.com/Python-programming-language-1/Whats-the-difference-between-a-statement-and-an-expression-in-Python
A: Statements are grammatically complete sentences. Expressions are not. For example
x = 5
reads as "x gets 5." This is a complete sentence. The code
(x + 5)/9.0
reads, "x plus 5 all divided by 9.0." This is not a complete sentence. The statement
while k < 10:
    print k
    k += 1
is a complete sentence. Notice that the loop header is not; "while k < 10," is a subordinating clause.
A: In a statement-oriented programming language, a code block is defined as a list of statements. In other words, a statement is a piece of syntax that you can put inside a code block without causing a syntax error.
Wikipedia defines the word statement similarly
In computer programming, a statement is a syntactic unit of an imperative programming language that expresses some action to be carried out. A program written in such a language is formed by a sequence of one or more statements
Notice the latter statement. (Although "a program" in this case is technically wrong, because both C and Java reject a program that consists of nothing but statements.)
Wikipedia defines the word expression as
An expression in a programming language is a syntactic entity that may be evaluated to determine its value
This is, however, false, because in Kotlin, throw Exception("") is an expression, but when evaluated it simply throws an exception, never returning any value.
In a statically typed programming language, every expression has a type. This definition, however, doesn't work in a dynamically typed programming language.
Personally, I define an expression as a piece of syntax that can be composed with an operator or function calls to yield a bigger expression. This is actually similar to the explanation of expression by Wikipedia:
It is a combination of one or more constants, variables, functions, and operators that the programming language interprets (according to its particular rules of precedence and of association) and computes to produce ("to return", in a stateful environment) another value
But, the problem is in C programming language, given a function executeSomething like this:
void executeSomething(void){
return;
}
Is executeSomething() an expression or is it a statement? According to my definition, it is a statement because as defined in Microsoft's C reference grammar,
You cannot use the (nonexistent) value of an expression that has type void in any way, nor can you convert a void expression (by implicit or explicit conversion) to any type except void
But the same page clearly indicates that such syntax is an expression.
A: A statement is a block of code that doesn't return anything and which is just a standalone unit of execution. For example-
if(a>=0)
printf("Hello Human, I'm a statement");
An expression, on the other hand, returns or evaluates a new value. For example -
if(a>=0)
return a+10; //This is an expression because it evaluates a new value
or
a=10+y;//This is also an expression because it returns a new value.
A: Expression
A piece of syntax which can be evaluated to some value. In other words, an expression is an accumulation of expression elements like literals, names, attribute access, operators or function calls which all return a value. In contrast to many other languages, not all language constructs are expressions. There are also statements which cannot be used as expressions, such as while. Assignments are also statements, not expressions.
Statement
A statement is part of a suite (a “block” of code). A statement is either an expression or one of several constructs with a keyword, such as if, while or for.
A: To improve on and validate my prior answer, definitions of programming language terms should be explained from computer science type theory when applicable.
An expression has a type other than the Bottom type, i.e. it has a value. A statement has the Unit or Bottom type.
From this it follows that a statement can only have any effect in a program when it creates a side-effect, because it either can not return a value or it only returns the value of the Unit type which is either nonassignable (in some languages such a C's void) or (such as in Scala) can be stored for a delayed evaluation of the statement.
Obviously a @pragma or a /*comment*/ have no type and thus are differentiated from statements. Thus the only type of statement that would have no side-effects would be a non-operation. Non-operation is only useful as a placeholder for future side-effects. Any other action due to a statement would be a side-effect. Again a compiler hint, e.g. @pragma, is not a statement because it has no type.
A: Most precisely, a statement must have a "side-effect" (i.e. be imperative) and an expression must have a value type (i.e. not the bottom type).
The type of a statement is the unit type, but due to the halting theorem unit is a fiction, so let's say the bottom type.
Void is not precisely the bottom type (it isn't the subtype of all possible types). It exists in languages that don't have a completely sound type system. That may sound like a snobbish statement, but completeness such as variance annotations are critical to writing extensible software.
Let's see what Wikipedia has to say on this matter.
https://en.wikipedia.org/wiki/Statement_(computer_science)
In computer programming a statement is the smallest standalone element of an imperative programming language that expresses some action to be carried out.
Many languages (e.g. C) make a distinction between statements and definitions, with a statement only containing executable code and a definition declaring an identifier, while an expression evaluates to a value only.
Q: What is the correct way to create a single-instance WPF application? Using C# and WPF under .NET (rather than Windows Forms or console), what is the correct way to create an application that can only be run as a single instance?
I know it has something to do with some mythical thing called a mutex, but rarely can I find someone who bothers to stop and explain what one of these is.
The code needs to also inform the already-running instance that the user tried to start a second one, and maybe also pass any command-line arguments if any existed.
A: From here.
A common use for a cross-process Mutex is to ensure that only one instance of a program can run at a time. Here's how it's done:
class OneAtATimePlease {
// Use a name unique to the application (eg include your company URL)
static Mutex mutex = new Mutex (false, "oreilly.com OneAtATimeDemo");
static void Main()
{
// Wait 5 seconds if contended – in case another instance
// of the program is in the process of shutting down.
if (!mutex.WaitOne(TimeSpan.FromSeconds (5), false))
{
Console.WriteLine("Another instance of the app is running. Bye!");
return;
}
try
{
Console.WriteLine("Running - press Enter to exit");
Console.ReadLine();
}
finally
{
mutex.ReleaseMutex();
}
}
}
A good feature of Mutex is that if the application terminates without ReleaseMutex first being called, the CLR will release the Mutex automatically.
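Note that a wait on such an abandoned mutex surfaces as an AbandonedMutexException on the next WaitOne; a small helper sketch (my assumption of a sensible policy, not something from the excerpt above) is to treat that case as a successful acquisition:
// requires: using System; using System.Threading;
static bool TryAcquire(Mutex mutex, TimeSpan timeout)
{
    try
    {
        return mutex.WaitOne(timeout, false);
    }
    catch (AbandonedMutexException)
    {
        // The previous owner exited without releasing the mutex;
        // the wait still granted this thread ownership, so report success.
        return true;
    }
}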
A: The following code is my WCF named pipes solution to register a single-instance application. It's nice because it also raises an event when another instance attempts to start, and receives the command line of the other instance.
It's geared toward WPF because it uses the System.Windows.StartupEventHandler class, but this could be easily modified.
This code requires a reference to PresentationFramework, and System.ServiceModel.
Usage:
class Program
{
static void Main()
{
var applicationId = new Guid("b54f7b0d-87f9-4df9-9686-4d8fd76066dc");
if (SingleInstanceManager.VerifySingleInstance(applicationId))
{
SingleInstanceManager.OtherInstanceStarted += OnOtherInstanceStarted;
// Start the application
}
}
static void OnOtherInstanceStarted(object sender, StartupEventArgs e)
{
// Do something in response to another instance starting up.
}
}
Source Code:
/// <summary>
/// A class to use for single-instance applications.
/// </summary>
public static class SingleInstanceManager
{
/// <summary>
/// Raised when another instance attempts to start up.
/// </summary>
public static event StartupEventHandler OtherInstanceStarted;
/// <summary>
/// Checks to see if this instance is the first instance running on this machine. If it is not, this method will
/// send the main instance this instance's startup information.
/// </summary>
/// <param name="guid">The application's unique identifier.</param>
/// <returns>True if this instance is the main instance.</returns>
public static bool VerifySingleInstance(Guid guid)
{
if (!AttemptPublishService(guid))
{
NotifyMainInstance(guid);
return false;
}
return true;
}
/// <summary>
/// Attempts to publish the service.
/// </summary>
/// <param name="guid">The application's unique identifier.</param>
/// <returns>True if the service was published successfully.</returns>
private static bool AttemptPublishService(Guid guid)
{
try
{
ServiceHost serviceHost = new ServiceHost(typeof(SingleInstance));
NetNamedPipeBinding binding = new NetNamedPipeBinding(NetNamedPipeSecurityMode.None);
serviceHost.AddServiceEndpoint(typeof(ISingleInstance), binding, CreateAddress(guid));
serviceHost.Open();
return true;
}
catch
{
return false;
}
}
/// <summary>
/// Notifies the main instance that this instance is attempting to start up.
/// </summary>
/// <param name="guid">The application's unique identifier.</param>
private static void NotifyMainInstance(Guid guid)
{
NetNamedPipeBinding binding = new NetNamedPipeBinding(NetNamedPipeSecurityMode.None);
EndpointAddress remoteAddress = new EndpointAddress(CreateAddress(guid));
using (ChannelFactory<ISingleInstance> factory = new ChannelFactory<ISingleInstance>(binding, remoteAddress))
{
ISingleInstance singleInstance = factory.CreateChannel();
singleInstance.NotifyMainInstance(Environment.GetCommandLineArgs());
}
}
/// <summary>
/// Creates an address to publish/contact the service at based on a globally unique identifier.
/// </summary>
/// <param name="guid">The identifier for the application.</param>
/// <returns>The address to publish/contact the service.</returns>
private static string CreateAddress(Guid guid)
{
return string.Format(CultureInfo.CurrentCulture, "net.pipe://localhost/{0}", guid);
}
/// <summary>
/// The interface that describes the single instance service.
/// </summary>
[ServiceContract]
private interface ISingleInstance
{
/// <summary>
/// Notifies the main instance that another instance of the application attempted to start.
/// </summary>
/// <param name="args">The other instance's command-line arguments.</param>
[OperationContract]
void NotifyMainInstance(string[] args);
}
/// <summary>
/// The implementation of the single instance service interface.
/// </summary>
private class SingleInstance : ISingleInstance
{
/// <summary>
/// Notifies the main instance that another instance of the application attempted to start.
/// </summary>
/// <param name="args">The other instance's command-line arguments.</param>
public void NotifyMainInstance(string[] args)
{
if (OtherInstanceStarted != null)
{
Type type = typeof(StartupEventArgs);
ConstructorInfo constructor = type.GetConstructor(BindingFlags.Instance | BindingFlags.NonPublic, null, Type.EmptyTypes, null);
StartupEventArgs e = (StartupEventArgs)constructor.Invoke(null);
FieldInfo argsField = type.GetField("_args", BindingFlags.Instance | BindingFlags.NonPublic);
Debug.Assert(argsField != null);
argsField.SetValue(e, args);
OtherInstanceStarted(null, e);
}
}
}
}
A: MSDN actually has a sample application for both C# and VB to do exactly this: http://msdn.microsoft.com/en-us/library/ms771662(v=VS.90).aspx
The most common and reliable technique
for developing single-instance
detection is to use the Microsoft .NET
Framework remoting infrastructure
(System.Remoting). The Microsoft .NET
Framework (version 2.0) includes a
type, WindowsFormsApplicationBase,
which encapsulates the required
remoting functionality. To incorporate
this type into a WPF application, a
type needs to derive from it, and be
used as a shim between the application
static entry point method, Main, and
the WPF application's Application
type. The shim detects when an
application is first launched, and
when subsequent launches are
attempted, and yields control to the WPF
Application type to determine how to
process the launches.
*For C# people: just take a deep breath and forget about the whole 'I don't wanna include the VisualBasic DLL' thing - because of this, because of what Scott Hanselman says, and because this pretty much is the cleanest solution to the problem and is designed by people who know a lot more about the framework than you do.
*From a usability standpoint, the fact is that if your user is loading an application that is already open and you give them an error message like 'Another instance of the app is running. Bye', then they're not going to be a very happy user. You simply MUST (in a GUI application) switch to that application and pass in the arguments provided - or, if command-line parameters have no meaning, then you must pop up the application, which may have been minimized.
The framework already has support for this - it's just that some idiot named the DLL Microsoft.VisualBasic and it didn't get put into Microsoft.ApplicationUtils or something like that. Get over it - or open up Reflector.
Tip: If you use this approach exactly as is, and you already have an App.xaml with resources etc. you'll want to take a look at this too.
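For a rough idea of what that shim looks like without opening the sample, here is a minimal sketch; MyWpfApp and HandleNewLaunch are illustrative stand-ins (not the sample's actual names), and it assumes references to Microsoft.VisualBasic plus the usual WPF assemblies:
using System;
using System.Collections.ObjectModel;
using Microsoft.VisualBasic.ApplicationServices;

// Illustrative stand-in for your WPF Application class (normally generated from App.xaml).
public class MyWpfApp : System.Windows.Application
{
    public void HandleNewLaunch(ReadOnlyCollection<string> args)
    {
        // Reactivate the existing main window; handle args as needed.
        if (MainWindow != null)
            MainWindow.Activate();
    }
}

// Minimal sketch of the shim described in the quote above.
public class SingleInstanceShim : WindowsFormsApplicationBase
{
    private MyWpfApp _app;

    public SingleInstanceShim()
    {
        IsSingleInstance = true;
    }

    protected override bool OnStartup(StartupEventArgs e)
    {
        // First launch: create and run the WPF application on this thread.
        _app = new MyWpfApp();
        _app.Run();
        return false;
    }

    protected override void OnStartupNextInstance(StartupNextInstanceEventArgs e)
    {
        // Subsequent launches: forward the new command line to the running app.
        base.OnStartupNextInstance(e);
        _app.HandleNewLaunch(e.CommandLine);
    }
}

public static class EntryPoint
{
    [STAThread]
    public static void Main(string[] args)
    {
        new SingleInstanceShim().Run(args);
    }
}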
A: Here is a very good article regarding the Mutex solution. The approach described by the article is advantageous for two reasons.
First, it does not require a dependency on the Microsoft.VisualBasic assembly. If my project already had a dependency on that assembly, I would probably advocate using the approach shown in another answer. But as it is, I do not use the Microsoft.VisualBasic assembly, and I'd rather not add an unnecessary dependency to my project.
Second, the article shows how to bring the existing instance of the application to the foreground when the user tries to start another instance. That's a very nice touch that the other Mutex solutions described here do not address.
UPDATE
As of 8/1/2014, the article I linked to above is still active, but the blog hasn't been updated in a while. That makes me worry that eventually it might disappear, and with it, the advocated solution. I'm reproducing the content of the article here for posterity. The words belong solely to the blog owner at Sanity Free Coding.
Today I wanted to refactor some code that prohibited my application
from running multiple instances of itself.
Previously I had used System.Diagnostics.Process to search for an
instance of my myapp.exe in the process list. While this works, it
brings on a lot of overhead, and I wanted something cleaner.
Knowing that I could use a mutex for this (but never having done it
before) I set out to cut down my code and simplify my life.
In my application's main class I created a static named Mutex:
static class Program
{
static Mutex mutex = new Mutex(true, "{8F6F0AC4-B9A1-45fd-A8CF-72F04E6BDE8F}");
[STAThread]
...
}
Having a named mutex allows us to stack synchronization across
multiple threads and processes which is just the magic I'm looking
for.
Mutex.WaitOne has an overload that specifies an amount of time for us
to wait. Since we don't actually want to synchronize our code
(more just check whether it is currently in use), we use the overload with
two parameters: Mutex.WaitOne(Timespan timeout, bool exitContext).
WaitOne returns true if it was able to enter, and false if it wasn't.
In this case, we don't want to wait at all; if our mutex is being
used, skip it, and move on, so we pass in TimeSpan.Zero (wait 0
milliseconds), and set the exitContext to true so we can exit the
synchronization context before we try to acquire a lock on it. Using
this, we wrap our Application.Run code inside something like this:
static class Program
{
static Mutex mutex = new Mutex(true, "{8F6F0AC4-B9A1-45fd-A8CF-72F04E6BDE8F}");
[STAThread]
static void Main() {
if(mutex.WaitOne(TimeSpan.Zero, true)) {
Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault(false);
Application.Run(new Form1());
mutex.ReleaseMutex();
} else {
MessageBox.Show("only one instance at a time");
}
}
}
So, if our app is running, WaitOne will return false, and we'll get a
message box.
Instead of showing a message box, I opted to utilize a little Win32 to
notify my running instance that someone forgot that it was already
running (by bringing itself to the top of all the other windows). To
achieve this I used PostMessage to broadcast a custom message to every
window (the custom message was registered with RegisterWindowMessage
by my running application, which means only my application knows what
it is) then my second instance exits. The running application instance
would receive that notification and process it. In order to do that, I
overrode WndProc in my main form and listened for my custom
notification. When I received that notification I set the form's
TopMost property to true to bring it up on top.
Here is what I ended up with:
*Program.cs
static class Program
{
static Mutex mutex = new Mutex(true, "{8F6F0AC4-B9A1-45fd-A8CF-72F04E6BDE8F}");
[STAThread]
static void Main() {
if(mutex.WaitOne(TimeSpan.Zero, true)) {
Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault(false);
Application.Run(new Form1());
mutex.ReleaseMutex();
} else {
// send our Win32 message to make the currently running instance
// jump on top of all the other windows
NativeMethods.PostMessage(
(IntPtr)NativeMethods.HWND_BROADCAST,
NativeMethods.WM_SHOWME,
IntPtr.Zero,
IntPtr.Zero);
}
}
}
*NativeMethods.cs
// this class just wraps some Win32 stuff that we're going to use
internal class NativeMethods
{
public const int HWND_BROADCAST = 0xffff;
public static readonly int WM_SHOWME = RegisterWindowMessage("WM_SHOWME");
[DllImport("user32")]
public static extern bool PostMessage(IntPtr hwnd, int msg, IntPtr wparam, IntPtr lparam);
[DllImport("user32")]
public static extern int RegisterWindowMessage(string message);
}
*Form1.cs (front side partial)
public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
}
protected override void WndProc(ref Message m)
{
if(m.Msg == NativeMethods.WM_SHOWME) {
ShowMe();
}
base.WndProc(ref m);
}
private void ShowMe()
{
if(WindowState == FormWindowState.Minimized) {
WindowState = FormWindowState.Normal;
}
// get our current "TopMost" value (ours will always be false though)
bool top = TopMost;
// make our form jump to the top of everything
TopMost = true;
// set it back to whatever it was
TopMost = top;
}
}
A: So many answers to such a seemingly simple question. Just to shake things up a little bit, here is my solution to this problem.
Creating a Mutex can be troublesome because the JIT-er only sees you using it for a small portion of your code and wants to mark it as ready for garbage collection. It pretty much wants to outsmart you, thinking you are not going to be using that Mutex for that long. In reality you want to hang onto this Mutex for as long as your application is running. The best way to tell the garbage collector to leave your Mutex alone is to tell it to keep it alive throughout the different generations of garbage collection. Example:
var m = new Mutex(...);
...
GC.KeepAlive(m);
I lifted the idea from this page: http://www.ai.uga.edu/~mc/SingleInstance.html
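A minimal sketch of where such a call might sit, assuming a plain WinForms entry point (the mutex name is an arbitrary placeholder, not anything from the linked page):
using System;
using System.Threading;
using System.Windows.Forms;

static class Program
{
    [STAThread]
    static void Main()
    {
        bool createdNew;
        // Replace the name with a GUID or string unique to your application.
        var mutex = new Mutex(true, "Global\\MySingleInstanceAppExample", out createdNew);
        if (!createdNew)
            return; // another instance already owns the named mutex

        Application.EnableVisualStyles();
        Application.SetCompatibleTextRenderingDefault(false);
        Application.Run(new Form());

        mutex.ReleaseMutex();

        // Keep the local reference reachable until the very end of Main so the
        // garbage collector never treats the mutex as unused while the app runs.
        GC.KeepAlive(mutex);
    }
}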
A: It looks like there is a really good way to handle this:
WPF Single Instance Application
This provides a class you can add that manages all the mutex and messaging cruft, simplifying your implementation to the point where it's trivial.
A: Look at the following code. It is a great and simple solution to prevent multiple instances of a WPF application.
private void Application_Startup(object sender, StartupEventArgs e)
{
Process thisProc = Process.GetCurrentProcess();
if (Process.GetProcessesByName(thisProc.ProcessName).Length > 1)
{
MessageBox.Show("Application running");
Application.Current.Shutdown();
return;
}
var wLogin = new LoginWindow();
if (wLogin.ShowDialog() == true)
{
var wMain = new Main();
wMain.WindowState = WindowState.Maximized;
wMain.Show();
}
else
{
Application.Current.Shutdown();
}
}
A: Not using a Mutex, though; a simple answer:
using System.Diagnostics;
...
string thisprocessname = Process.GetCurrentProcess().ProcessName;
if (Process.GetProcesses().Count(p => p.ProcessName == thisprocessname) > 1)
return;
Put it inside the Program.Main().
Example:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using System.Windows.Forms;
using System.Diagnostics;
namespace Sample
{
static class Program
{
/// <summary>
/// The main entry point for the application.
/// </summary>
[STAThread]
static void Main()
{
//simply add the Diagnostics namespace, and these 3 lines below
string thisprocessname = Process.GetCurrentProcess().ProcessName;
if (Process.GetProcesses().Count(p => p.ProcessName == thisprocessname) > 1)
return;
Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault(false);
Application.Run(new Sample());
}
}
}
You can add MessageBox.Show to the if-statement and put "Application already running".
This might be helpful to someone.
A: You should never use a named mutex to implement a single-instance application (or at least not for production code). Malicious code can easily DoS (Denial of Service) your ass...
A: Here is what I use. It combines process enumeration to perform switching and a mutex to safeguard against "active clickers":
public partial class App
{
[DllImport("user32")]
private static extern int OpenIcon(IntPtr hWnd);
[DllImport("user32.dll")]
private static extern bool SetForegroundWindow(IntPtr hWnd);
protected override void OnStartup(StartupEventArgs e)
{
base.OnStartup(e);
var p = Process
.GetProcessesByName(Process.GetCurrentProcess().ProcessName);
foreach (var t in p.Where(t => t.MainWindowHandle != IntPtr.Zero))
{
OpenIcon(t.MainWindowHandle);
SetForegroundWindow(t.MainWindowHandle);
Current.Shutdown();
return;
}
// there is a chance the user tries to click on the icon repeatedly
// and the process cannot be discovered yet
bool createdNew;
var mutex = new Mutex(true, "MyAwesomeApp",
out createdNew); // must be a variable, though it is unused -
// we just need a bit of time until the process shows up
if (!createdNew)
{
Current.Shutdown();
return;
}
new Bootstrapper().Run();
}
}
A: I found a simpler solution, similar to Dale Ragan's, but slightly modified. It does practically everything you need and is based on the standard Microsoft WindowsFormsApplicationBase class.
Firstly, you create a SingleInstanceController class, which you can use in all other single-instance applications that use Windows Forms:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using Microsoft.VisualBasic.ApplicationServices;
namespace SingleInstanceController_NET
{
public class SingleInstanceController
: WindowsFormsApplicationBase
{
public delegate Form CreateMainForm();
public delegate void StartNextInstanceDelegate(Form mainWindow);
CreateMainForm formCreation;
StartNextInstanceDelegate onStartNextInstance;
public SingleInstanceController(CreateMainForm formCreation, StartNextInstanceDelegate onStartNextInstance)
{
// Set whether the application is single instance
this.formCreation = formCreation;
this.onStartNextInstance = onStartNextInstance;
this.IsSingleInstance = true;
this.StartupNextInstance += new StartupNextInstanceEventHandler(this_StartupNextInstance);
}
void this_StartupNextInstance(object sender, StartupNextInstanceEventArgs e)
{
if (onStartNextInstance != null)
{
onStartNextInstance(this.MainForm); // This code will be executed when the user tries to start the running program again,
// for example, by clicking on the exe file.
} // This code can determine how to re-activate the existing main window of the running application.
}
protected override void OnCreateMainForm()
{
// Instantiate your main application form
this.MainForm = formCreation();
}
public void Run()
{
string[] commandLine = new string[0];
base.Run(commandLine);
}
}
}
Then you can use it in your program as follows:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Windows.Forms;
using SingleInstanceController_NET;
namespace SingleInstance
{
static class Program
{
/// <summary>
/// The main entry point for the application.
/// </summary>
static Form CreateForm()
{
return new Form1(); // Form1 is used for the main window.
}
static void OnStartNextInstance(Form mainWindow) // When the user tries to restart the application again,
// the main window is activated again.
{
mainWindow.WindowState = FormWindowState.Maximized;
}
[STAThread]
static void Main()
{
Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault(false);
SingleInstanceController controller = new SingleInstanceController(CreateForm, OnStartNextInstance);
controller.Run();
}
}
}
Both the program and the SingleInstanceController_NET solution should reference Microsoft.VisualBasic. If you just want to reactivate the running application as a normal window when the user tries to restart the running program, the second parameter in the SingleInstanceController can be null. In the given example, the window is maximized.
A: Update 2017-01-25. After trying a few things, I decided to go with VisualBasic.dll; it is easier and works better (at least for me). I leave my previous answer just as a reference...
Just as a reference, this is how I did it without passing arguments (I can't find any reason to do so... I mean a single app with arguments that have to be passed from one instance to another one).
If file association is required, then an app should (per users' standard expectation) be instantiated for each document. If you have to pass args to an existing app, I think I would use the VB dll.
Not passing args (just a single-instance app), I prefer not to register a new window message and not to override the message loop as defined in Matt Davis' solution. Although it's not a big deal to add the VisualBasic dll, I prefer not to add a new reference just to make a single-instance app. Also, I prefer to instantiate a new class with Main instead of calling Shutdown from the App.Startup override, to ensure the app exits as soon as possible.
In the hope that somebody will like it... or that it will inspire a little bit :-)
Project startup class should be set as 'SingleInstanceApp'.
public class SingleInstanceApp
{
[STAThread]
public static void Main(string[] args)
{
Mutex _mutexSingleInstance = new Mutex(true, "MonitorMeSingleInstance");
if (_mutexSingleInstance.WaitOne(TimeSpan.Zero, true))
{
try
{
var app = new App();
app.InitializeComponent();
app.Run();
}
finally
{
_mutexSingleInstance.ReleaseMutex();
_mutexSingleInstance.Close();
}
}
else
{
MessageBox.Show("One instance is already running.");
var processes = Process.GetProcessesByName(Assembly.GetEntryAssembly().GetName().Name);
if (processes.Length > 1)
{
    foreach (var process in processes)
    {
        if (process.Id != Process.GetCurrentProcess().Id)
        {
            WindowHelper.SetForegroundWindow(process.MainWindowHandle);
        }
    }
}
}
}
}
WindowHelper:
using System;
using System.Runtime.InteropServices;
using System.Windows;
using System.Windows.Interop;
using System.Windows.Threading;
namespace HQ.Util.Unmanaged
{
public class WindowHelper
{
[DllImport("user32.dll")]
[return: MarshalAs(UnmanagedType.Bool)]
public static extern bool SetForegroundWindow(IntPtr hWnd);
    }
}
A: Named-mutex-based approaches are not cross-platform because named mutexes are not global in Mono. Process-enumeration-based approaches don't have any synchronization and may result in incorrect behavior (e.g. multiple processes started at the same time may all self-terminate depending on timing). Windowing-system-based approaches are not desirable in a console application. This solution, built on top of Divin's answer, addresses all these issues:
using System;
using System.IO;
namespace TestCs
{
public class Program
{
// The app id must be unique. Generate a new guid for your application.
public static string AppId = "01234567-89ab-cdef-0123-456789abcdef";
// The stream is stored globally to ensure that it won't be disposed before the application terminates.
public static FileStream UniqueInstanceStream;
public static int Main(string[] args)
{
EnsureUniqueInstance();
// Your code here.
return 0;
}
private static void EnsureUniqueInstance()
{
// Note: If you want the check to be per-user, use Environment.SpecialFolder.ApplicationData instead.
string lockDir = Path.Combine(
Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData),
"UniqueInstanceApps");
string lockPath = Path.Combine(lockDir, $"{AppId}.unique");
Directory.CreateDirectory(lockDir);
try
{
// Create the file with exclusive write access. If this fails, then another process is executing.
UniqueInstanceStream = File.Open(lockPath, FileMode.Create, FileAccess.Write, FileShare.None);
// Although only the line above should be sufficient, when debugging with a vshost on Visual Studio
// (that acts as a proxy), the IO exception isn't passed to the application before a Write is executed.
UniqueInstanceStream.Write(new byte[] { 0 }, 0, 1);
UniqueInstanceStream.Flush();
}
catch
{
throw new Exception("Another instance of the application is already running.");
}
}
}
}
A: [I have provided sample code for console and wpf applications below.]
You only have to check the value of the createdNew variable (example below!), after you create the named Mutex instance.
The boolean createdNew will be false:
if the Mutex instance named "YourApplicationNameHere" was already
created on the system somewhere
The boolean createdNew will be true:
if this is the first Mutex named "YourApplicationNameHere" on the
system.
Console application - Example:
static Mutex m = null;
static void Main(string[] args)
{
const string mutexName = "YourApplicationNameHere";
bool createdNew = false;
try
{
// Initializes a new instance of the Mutex class with a Boolean value that indicates
// whether the calling thread should have initial ownership of the mutex, a string that is the name of the mutex,
// and a Boolean value that, when the method returns, indicates whether the calling thread was granted initial ownership of the mutex.
using (m = new Mutex(true, mutexName, out createdNew))
{
if (!createdNew)
{
Console.WriteLine("instance is alreday running... shutting down !!!");
Console.Read();
return; // Exit the application
}
// Run your windows forms app here
Console.WriteLine("Single instance app is running!");
Console.ReadLine();
}
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
Console.ReadLine();
}
}
WPF-Example:
public partial class App : Application
{
static Mutex m = null;
protected override void OnStartup(StartupEventArgs e)
{
const string mutexName = "YourApplicationNameHere";
bool createdNew = false;
try
{
// Initializes a new instance of the Mutex class with a Boolean value that indicates
// whether the calling thread should have initial ownership of the mutex, a string that is the name of the mutex,
// and a Boolean value that, when the method returns, indicates whether the calling thread was granted initial ownership of the mutex.
m = new Mutex(true, mutexName, out createdNew);
if (!createdNew)
{
Current.Shutdown(); // Exit the application
}
}
catch (Exception)
{
throw;
}
base.OnStartup(e);
}
protected override void OnExit(ExitEventArgs e)
{
if (m != null)
{
m.Dispose();
}
base.OnExit(e);
}
}
A: I can't find a short solution here so I hope someone will like this:
UPDATED 2018-09-20
Put this code in your Program.cs:
using System.Diagnostics;
static void Main()
{
Process thisProcess = Process.GetCurrentProcess();
Process[] allProcesses = Process.GetProcessesByName(thisProcess.ProcessName);
if (allProcesses.Length > 1)
{
// Don't put a MessageBox in here because the user could spam this MessageBox.
return;
}
// Optional code. If you don't want someone to run your ".exe" under a different name:
string exeName = AppDomain.CurrentDomain.FriendlyName;
// in debug mode, don't forget that you don't use your normal .exe name.
// Debug uses the .vshost.exe.
if (exeName != "the name of your executable.exe")
{
// You can add a MessageBox here if you want.
// To point out to users that the name got changed and maybe what the name should be or something like that^^
MessageBox.Show("The executable name should be \"the name of your executable.exe\"",
"Wrong executable name", MessageBoxButtons.OK, MessageBoxIcon.Error);
return;
}
// Following code is default code:
Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault(false);
Application.Run(new MainForm());
}
A: I use Mutex in my solution for preventing multiple instances.
static Mutex mutex = null;
//A string that is the name of the mutex
string mutexName = @"Global\test";
//Prevent Multiple Instances of Application
bool onlyInstance = false;
mutex = new Mutex(true, mutexName, out onlyInstance);
if (!onlyInstance)
{
MessageBox.Show("You are already running this application in your system.", "Already Running..", MessageBoxButton.OK);
Application.Current.Shutdown();
}
A: This code should go in the Main method. Look here for more information about the Main method in WPF.
[DllImport("user32.dll")]
private static extern Boolean ShowWindow(IntPtr hWnd, Int32 nCmdShow);
private const int SW_SHOWMAXIMIZED = 3;
static void Main()
{
Process currentProcess = Process.GetCurrentProcess();
var runningProcess = (from process in Process.GetProcesses()
where
process.Id != currentProcess.Id &&
process.ProcessName.Equals(
currentProcess.ProcessName,
StringComparison.Ordinal)
select process).FirstOrDefault();
if (runningProcess != null)
{
ShowWindow(runningProcess.MainWindowHandle, SW_SHOWMAXIMIZED);
return;
}
}
Method 2
static void Main()
{
string procName = Process.GetCurrentProcess().ProcessName;
// get the list of all processes by that name
Process[] processes=Process.GetProcessesByName(procName);
if (processes.Length > 1)
{
MessageBox.Show(procName + " already running");
return;
}
else
{
// Application.Run(...);
}
}
Note: The above methods assume your process/application has a unique name, because they use the process name to find any existing processes. So, if your application has a very common name (e.g. Notepad), the above approach won't work.
A: Well, I have a disposable Class for this that works easily for most use cases:
Use it like this:
static void Main()
{
using (SingleInstanceMutex sim = new SingleInstanceMutex())
{
if (sim.IsOtherInstanceRunning)
{
Application.Exit();
}
// Initialize program here.
}
}
Here it is:
/// <summary>
/// Represents a <see cref="SingleInstanceMutex"/> class.
/// </summary>
public partial class SingleInstanceMutex : IDisposable
{
#region Fields
/// <summary>
/// Indicator whether another instance of this application is running or not.
/// </summary>
private bool isNoOtherInstanceRunning;
/// <summary>
/// The <see cref="Mutex"/> used to ask for other instances of this application.
/// </summary>
private Mutex singleInstanceMutex = null;
/// <summary>
/// An indicator whether this object is being actively disposed or not.
/// </summary>
private bool disposed;
#endregion
#region Constructor
/// <summary>
/// Initializes a new instance of the <see cref="SingleInstanceMutex"/> class.
/// </summary>
public SingleInstanceMutex()
{
this.singleInstanceMutex = new Mutex(true, Assembly.GetCallingAssembly().FullName, out this.isNoOtherInstanceRunning);
}
#endregion
#region Properties
/// <summary>
/// Gets an indicator whether another instance of the application is running or not.
/// </summary>
public bool IsOtherInstanceRunning
{
get
{
return !this.isNoOtherInstanceRunning;
}
}
#endregion
#region Methods
/// <summary>
/// Closes the <see cref="SingleInstanceMutex"/>.
/// </summary>
public void Close()
{
this.ThrowIfDisposed();
this.singleInstanceMutex.Close();
}
public void Dispose()
{
this.Dispose(true);
GC.SuppressFinalize(this);
}
private void Dispose(bool disposing)
{
if (!this.disposed)
{
/* Release unmanaged resources */
if (disposing)
{
/* Release managed resources */
this.Close();
}
this.disposed = true;
}
}
/// <summary>
/// Throws an exception if something is tried to be done with an already disposed object.
/// </summary>
/// <remarks>
/// All public methods of the class must first call this.
/// </remarks>
public void ThrowIfDisposed()
{
if (this.disposed)
{
throw new ObjectDisposedException(this.GetType().Name);
}
}
#endregion
}
A: Use mutex solution:
using System;
using System.Windows.Forms;
using System.Threading;
namespace OneAndOnlyOne
{
static class Program
{
static String _mutexID = " // generate guid"
/// <summary>
/// The main entry point for the application.
/// </summary>
[STAThread]
static void Main()
{
Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault(false);
Boolean _isNotRunning;
using (Mutex _mutex = new Mutex(true, _mutexID, out _isNotRunning))
{
if (_isNotRunning)
{
Application.Run(new Form1());
}
else
{
MessageBox.Show("An instance is already running.");
return;
}
}
}
}
}
A: I added a SendMessage method to the NativeMethods class.
Apparently the PostMessage method doesn't work if the application is not shown in the taskbar; however, using the SendMessage method solves this.
class NativeMethods
{
public const int HWND_BROADCAST = 0xffff;
public static readonly int WM_SHOWME = RegisterWindowMessage("WM_SHOWME");
[DllImport("user32")]
public static extern bool PostMessage(IntPtr hwnd, int msg, IntPtr wparam, IntPtr lparam);
[DllImport("user32.dll", CharSet = CharSet.Auto)]
public static extern IntPtr SendMessage(IntPtr hWnd, int Msg, IntPtr wParam, IntPtr lParam);
[DllImport("user32")]
public static extern int RegisterWindowMessage(string message);
}
A: A new one that uses Mutex and IPC stuff, and also passes any command line arguments to the running instance, is WPF Single Instance Application.
A: The code C# .NET Single Instance Application that is the reference for the marked answer is a great start.
However, I found it doesn't handle very well the cases when the instance that already exists has a modal dialog open, whether that dialog is a managed one (like another Form such as an about box) or an unmanaged one (like the OpenFileDialog, even when using the standard .NET class). With the original code, the main form is activated, but the modal one stays inactive, which looks strange; plus, the user must click on it to keep using the app.
So, I have created a SingleInstance utility class to handle all this quite automatically for Winforms and WPF applications.
Winforms:
1) modify the Program class like this:
static class Program
{
public static readonly SingleInstance Singleton = new SingleInstance(typeof(Program).FullName);
[STAThread]
static void Main(string[] args)
{
// NOTE: if this always return false, close & restart Visual Studio
// this is probably due to the vshost.exe thing
Singleton.RunFirstInstance(() =>
{
SingleInstanceMain(args);
});
}
public static void SingleInstanceMain(string[] args)
{
// standard code that was in Main now goes here
Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault(false);
Application.Run(new Form1());
}
}
2) modify the main window class like this:
public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
}
protected override void WndProc(ref Message m)
{
// if needed, the singleton will restore this window
Program.Singleton.OnWndProc(this, m, true);
// TODO: handle specific messages here if needed
base.WndProc(ref m);
}
}
WPF:
1) modify the App page like this (and make sure you set its build action to page to be able to redefine the Main method):
public partial class App : Application
{
public static readonly SingleInstance Singleton = new SingleInstance(typeof(App).FullName);
[STAThread]
public static void Main(string[] args)
{
// NOTE: if this always return false, close & restart Visual Studio
// this is probably due to the vshost.exe thing
Singleton.RunFirstInstance(() =>
{
SingleInstanceMain(args);
});
}
public static void SingleInstanceMain(string[] args)
{
// standard code that was in Main now goes here
App app = new App();
app.InitializeComponent();
app.Run();
}
}
2) modify the main window class like this:
public partial class MainWindow : Window
{
private HwndSource _source;
public MainWindow()
{
InitializeComponent();
}
protected override void OnSourceInitialized(EventArgs e)
{
base.OnSourceInitialized(e);
_source = (HwndSource)PresentationSource.FromVisual(this);
_source.AddHook(HwndSourceHook);
}
protected virtual IntPtr HwndSourceHook(IntPtr hwnd, int msg, IntPtr wParam, IntPtr lParam, ref bool handled)
{
// if needed, the singleton will restore this window
App.Singleton.OnWndProc(hwnd, msg, wParam, lParam, true, true);
// TODO: handle other specific message
return IntPtr.Zero;
}
And here is the utility class:
using System;
using System.ComponentModel;
using System.Runtime.InteropServices;
using System.Threading;
namespace SingleInstanceUtilities
{
public sealed class SingleInstance
{
private const int HWND_BROADCAST = 0xFFFF;
[DllImport("user32.dll")]
private static extern bool PostMessage(IntPtr hwnd, int msg, IntPtr wparam, IntPtr lparam);
[DllImport("user32.dll", CharSet = CharSet.Unicode)]
private static extern int RegisterWindowMessage(string message);
[DllImport("user32.dll")]
private static extern bool SetForegroundWindow(IntPtr hWnd);
public SingleInstance(string uniqueName)
{
if (uniqueName == null)
throw new ArgumentNullException("uniqueName");
Mutex = new Mutex(true, uniqueName);
Message = RegisterWindowMessage("WM_" + uniqueName);
}
public Mutex Mutex { get; private set; }
public int Message { get; private set; }
public void RunFirstInstance(Action action)
{
RunFirstInstance(action, IntPtr.Zero, IntPtr.Zero);
}
// NOTE: if this always return false, close & restart Visual Studio
// this is probably due to the vshost.exe thing
public void RunFirstInstance(Action action, IntPtr wParam, IntPtr lParam)
{
if (action == null)
throw new ArgumentNullException("action");
if (WaitForMutext(wParam, lParam))
{
try
{
action();
}
finally
{
ReleaseMutex();
}
}
}
public static void ActivateWindow(IntPtr hwnd)
{
if (hwnd == IntPtr.Zero)
return;
FormUtilities.ActivateWindow(FormUtilities.GetModalWindow(hwnd));
}
public void OnWndProc(IntPtr hwnd, int m, IntPtr wParam, IntPtr lParam, bool restorePlacement, bool activate)
{
if (m == Message)
{
if (restorePlacement)
{
WindowPlacement placement = WindowPlacement.GetPlacement(hwnd, false);
if (placement.IsValid && placement.IsMinimized)
{
const int SW_SHOWNORMAL = 1;
placement.ShowCmd = SW_SHOWNORMAL;
placement.SetPlacement(hwnd);
}
}
if (activate)
{
SetForegroundWindow(hwnd);
FormUtilities.ActivateWindow(FormUtilities.GetModalWindow(hwnd));
}
}
}
#if WINFORMS // define this for Winforms apps
public void OnWndProc(System.Windows.Forms.Form form, int m, IntPtr wParam, IntPtr lParam, bool activate)
{
if (form == null)
throw new ArgumentNullException("form");
if (m == Message)
{
if (activate)
{
if (form.WindowState == System.Windows.Forms.FormWindowState.Minimized)
{
form.WindowState = System.Windows.Forms.FormWindowState.Normal;
}
form.Activate();
FormUtilities.ActivateWindow(FormUtilities.GetModalWindow(form.Handle));
}
}
}
public void OnWndProc(System.Windows.Forms.Form form, System.Windows.Forms.Message m, bool activate)
{
if (form == null)
throw new ArgumentNullException("form");
OnWndProc(form, m.Msg, m.WParam, m.LParam, activate);
}
#endif
public void ReleaseMutex()
{
Mutex.ReleaseMutex();
}
public bool WaitForMutext(bool force, IntPtr wParam, IntPtr lParam)
{
bool b = PrivateWaitForMutext(force);
if (!b)
{
PostMessage((IntPtr)HWND_BROADCAST, Message, wParam, lParam);
}
return b;
}
public bool WaitForMutext(IntPtr wParam, IntPtr lParam)
{
return WaitForMutext(false, wParam, lParam);
}
private bool PrivateWaitForMutext(bool force)
{
if (force)
return true;
try
{
return Mutex.WaitOne(TimeSpan.Zero, true);
}
catch (AbandonedMutexException)
{
return true;
}
}
}
// NOTE: don't add any field or public get/set property, as this must exactly map to Windows' WINDOWPLACEMENT structure
[StructLayout(LayoutKind.Sequential)]
public struct WindowPlacement
{
public int Length { get; set; }
public int Flags { get; set; }
public int ShowCmd { get; set; }
public int MinPositionX { get; set; }
public int MinPositionY { get; set; }
public int MaxPositionX { get; set; }
public int MaxPositionY { get; set; }
public int NormalPositionLeft { get; set; }
public int NormalPositionTop { get; set; }
public int NormalPositionRight { get; set; }
public int NormalPositionBottom { get; set; }
[DllImport("user32.dll", SetLastError = true)]
private static extern bool SetWindowPlacement(IntPtr hWnd, ref WindowPlacement lpwndpl);
[DllImport("user32.dll", SetLastError = true)]
private static extern bool GetWindowPlacement(IntPtr hWnd, ref WindowPlacement lpwndpl);
private const int SW_SHOWMINIMIZED = 2;
public bool IsMinimized
{
get
{
return ShowCmd == SW_SHOWMINIMIZED;
}
}
public bool IsValid
{
get
{
return Length == Marshal.SizeOf(typeof(WindowPlacement));
}
}
public void SetPlacement(IntPtr windowHandle)
{
SetWindowPlacement(windowHandle, ref this);
}
public static WindowPlacement GetPlacement(IntPtr windowHandle, bool throwOnError)
{
WindowPlacement placement = new WindowPlacement();
if (windowHandle == IntPtr.Zero)
return placement;
placement.Length = Marshal.SizeOf(typeof(WindowPlacement));
if (!GetWindowPlacement(windowHandle, ref placement))
{
if (throwOnError)
throw new Win32Exception(Marshal.GetLastWin32Error());
return new WindowPlacement();
}
return placement;
}
}
public static class FormUtilities
{
[DllImport("user32.dll")]
private static extern IntPtr GetWindow(IntPtr hWnd, int uCmd);
[DllImport("user32.dll", SetLastError = true)]
private static extern IntPtr SetActiveWindow(IntPtr hWnd);
[DllImport("user32.dll")]
private static extern bool IsWindowVisible(IntPtr hWnd);
[DllImport("kernel32.dll")]
public static extern int GetCurrentThreadId();
private delegate bool EnumChildrenCallback(IntPtr hwnd, IntPtr lParam);
[DllImport("user32.dll")]
private static extern bool EnumThreadWindows(int dwThreadId, EnumChildrenCallback lpEnumFunc, IntPtr lParam);
private class ModalWindowUtil
{
private const int GW_OWNER = 4;
private int _maxOwnershipLevel;
private IntPtr _maxOwnershipHandle;
private bool EnumChildren(IntPtr hwnd, IntPtr lParam)
{
int level = 1;
if (IsWindowVisible(hwnd) && IsOwned(lParam, hwnd, ref level))
{
if (level > _maxOwnershipLevel)
{
_maxOwnershipHandle = hwnd;
_maxOwnershipLevel = level;
}
}
return true;
}
private static bool IsOwned(IntPtr owner, IntPtr hwnd, ref int level)
{
IntPtr o = GetWindow(hwnd, GW_OWNER);
if (o == IntPtr.Zero)
return false;
if (o == owner)
return true;
level++;
return IsOwned(owner, o, ref level);
}
public static void ActivateWindow(IntPtr hwnd)
{
if (hwnd != IntPtr.Zero)
{
SetActiveWindow(hwnd);
}
}
public static IntPtr GetModalWindow(IntPtr owner)
{
ModalWindowUtil util = new ModalWindowUtil();
EnumThreadWindows(GetCurrentThreadId(), util.EnumChildren, owner);
return util._maxOwnershipHandle; // may be IntPtr.Zero
}
}
public static void ActivateWindow(IntPtr hwnd)
{
ModalWindowUtil.ActivateWindow(hwnd);
}
public static IntPtr GetModalWindow(IntPtr owner)
{
return ModalWindowUtil.GetModalWindow(owner);
}
}
}
A: You could use the Mutex class, but you will soon find out that you will need to implement the code to pass the arguments and such yourself. Well, I learned a trick when programming in WinForms when I read Chris Sell's book. This trick uses logic that is already available to us in the framework. I don't know about you, but when I learn about stuff I can reuse in the framework, that is usually the route I take instead of reinventing the wheel. Unless of course it doesn't do everything I want.
When I got into WPF, I came up with a way to use that same code, but in a WPF application. This solution should meet your needs based off your question.
First, we need to create our application class. In this class, we are going to override the OnStartup event and create a method called Activate, which will be used later.
public class SingleInstanceApplication : System.Windows.Application
{
protected override void OnStartup(System.Windows.StartupEventArgs e)
{
// Call the OnStartup event on our base class
base.OnStartup(e);
// Create our MainWindow and show it
MainWindow window = new MainWindow();
window.Show();
}
public void Activate()
{
// Reactivate the main window
MainWindow.Activate();
}
}
Second, we will need to create a class that can manage our instances. Before we go through that, we are actually going to reuse some code that is in the Microsoft.VisualBasic assembly. Since I am using C# in this example, I had to make a reference to the assembly. If you are using VB.NET, you don't have to do anything. The class we are going to use is WindowsFormsApplicationBase; we inherit our instance manager from it and then leverage its properties and events to handle the single instancing.
public class SingleInstanceManager : Microsoft.VisualBasic.ApplicationServices.WindowsFormsApplicationBase
{
private SingleInstanceApplication _application;
private System.Collections.ObjectModel.ReadOnlyCollection<string> _commandLine;
public SingleInstanceManager()
{
IsSingleInstance = true;
}
protected override bool OnStartup(Microsoft.VisualBasic.ApplicationServices.StartupEventArgs eventArgs)
{
// First time _application is launched
_commandLine = eventArgs.CommandLine;
_application = new SingleInstanceApplication();
_application.Run();
return false;
}
protected override void OnStartupNextInstance(StartupNextInstanceEventArgs eventArgs)
{
// Subsequent launches
base.OnStartupNextInstance(eventArgs);
_commandLine = eventArgs.CommandLine;
_application.Activate();
}
}
Basically, we are using the VB bits to detect single instances and process accordingly. OnStartup will be fired when the first instance loads. OnStartupNextInstance is fired when the application is run again. As you can see, I can get to what was passed on the command line through the event arguments. I set the value to an instance field. You could parse the command line here, or you could pass it to your application through the constructor and the call to the Activate method.
Third, it's time to create our EntryPoint. Instead of newing up the application like you would normally do, we are going to take advantage of our SingleInstanceManager.
public class EntryPoint
{
[STAThread]
public static void Main(string[] args)
{
SingleInstanceManager manager = new SingleInstanceManager();
manager.Run(args);
}
}
Well, I hope you are able to follow everything and are able to use this implementation and make it your own.
A: Just some thoughts:
There are cases when requiring that only one instance of an application run is not "lame", as some would have you believe. Database apps, etc. are an order of magnitude more difficult if one allows multiple instances of the app for a single user to access a database (you know, all that updating of records that are open in multiple instances of the app on the user's machine, etc.).
First, for the "name collision" thing, don't use a human-readable name - use a GUID instead or, even better, a GUID + the human-readable name. The chances of a name collision just dropped off the radar and the Mutex doesn't care. As someone pointed out, a DoS attack would suck, but if the malicious person has gone to the trouble of getting the mutex name and incorporating it into their app, you are pretty much a target anyway and will have to do MUCH more to protect yourself than just fiddle a mutex name.
Also, if one uses the variant of:
new Mutex(true, "some GUID plus Name", out AIsFirstInstance), you already have your indicator as to whether or not the Mutex is the first instance.
A: Here is an example that allows you to have a single instance of an application. When any new instances load, they pass their arguments to the main instance that is running.
public partial class App : Application
{
private static Mutex SingleMutex;
public static uint MessageId;
private void Application_Startup(object sender, StartupEventArgs e)
{
IntPtr Result;
IntPtr SendOk;
Win32.COPYDATASTRUCT CopyData;
string[] Args;
IntPtr CopyDataMem;
bool AllowMultipleInstances = false;
Args = Environment.GetCommandLineArgs();
// TODO: Replace {00000000-0000-0000-0000-000000000000} with your application's GUID
MessageId = Win32.RegisterWindowMessage("{00000000-0000-0000-0000-000000000000}");
SingleMutex = new Mutex(false, "AppName");
if ((AllowMultipleInstances) || (!AllowMultipleInstances && SingleMutex.WaitOne(1, true)))
{
new Main();
}
else if (Args.Length > 1)
{
foreach (Process Proc in Process.GetProcesses())
{
SendOk = Win32.SendMessageTimeout(Proc.MainWindowHandle, MessageId, IntPtr.Zero, IntPtr.Zero,
Win32.SendMessageTimeoutFlags.SMTO_BLOCK | Win32.SendMessageTimeoutFlags.SMTO_ABORTIFHUNG,
2000, out Result);
if (SendOk == IntPtr.Zero)
continue;
if ((uint)Result != MessageId)
continue;
CopyDataMem = Marshal.AllocHGlobal(Marshal.SizeOf(typeof(Win32.COPYDATASTRUCT)));
CopyData.dwData = IntPtr.Zero;
CopyData.cbData = Args[1].Length*2;
CopyData.lpData = Marshal.StringToHGlobalUni(Args[1]);
Marshal.StructureToPtr(CopyData, CopyDataMem, false);
Win32.SendMessageTimeout(Proc.MainWindowHandle, Win32.WM_COPYDATA, IntPtr.Zero, CopyDataMem,
Win32.SendMessageTimeoutFlags.SMTO_BLOCK | Win32.SendMessageTimeoutFlags.SMTO_ABORTIFHUNG,
5000, out Result);
Marshal.FreeHGlobal(CopyData.lpData);
Marshal.FreeHGlobal(CopyDataMem);
}
Shutdown(0);
}
}
}
public partial class Main : Window
{
private void Window_Loaded(object sender, RoutedEventArgs e)
{
HwndSource Source;
Source = HwndSource.FromHwnd(new WindowInteropHelper(this).Handle);
Source.AddHook(new HwndSourceHook(Window_Proc));
}
private IntPtr Window_Proc(IntPtr hWnd, int Msg, IntPtr wParam, IntPtr lParam, ref bool Handled)
{
Win32.COPYDATASTRUCT CopyData;
string Path;
if (Msg == Win32.WM_COPYDATA)
{
CopyData = (Win32.COPYDATASTRUCT)Marshal.PtrToStructure(lParam, typeof(Win32.COPYDATASTRUCT));
Path = Marshal.PtrToStringUni(CopyData.lpData, CopyData.cbData / 2);
if (WindowState == WindowState.Minimized)
{
// Restore window from tray
}
// Do whatever we want with information
Activate();
Focus();
}
if (Msg == App.MessageId)
{
Handled = true;
return new IntPtr(App.MessageId);
}
return IntPtr.Zero;
}
}
public class Win32
{
public const uint WM_COPYDATA = 0x004A;
public struct COPYDATASTRUCT
{
public IntPtr dwData;
public int cbData;
public IntPtr lpData;
}
[Flags]
public enum SendMessageTimeoutFlags : uint
{
SMTO_NORMAL = 0x0000,
SMTO_BLOCK = 0x0001,
SMTO_ABORTIFHUNG = 0x0002,
SMTO_NOTIMEOUTIFNOTHUNG = 0x0008
}
[DllImport("user32.dll", SetLastError=true, CharSet=CharSet.Auto)]
public static extern uint RegisterWindowMessage(string lpString);
[DllImport("user32.dll")]
public static extern IntPtr SendMessageTimeout(
IntPtr hWnd, uint Msg, IntPtr wParam, IntPtr lParam,
SendMessageTimeoutFlags fuFlags, uint uTimeout, out IntPtr lpdwResult);
}
A: Here's a lightweight solution I use which allows the application to bring an already existing window to the foreground without resorting to custom windows messages or blindly searching process names.
[DllImport("user32.dll")]
static extern bool SetForegroundWindow(IntPtr hWnd);
static readonly string guid = "<Application Guid>";
static void Main()
{
Mutex mutex = null;
if (!CreateMutex(out mutex))
return;
// Application startup code.
Environment.SetEnvironmentVariable(guid, null, EnvironmentVariableTarget.User);
}
static bool CreateMutex(out Mutex mutex)
{
bool createdNew = false;
mutex = new Mutex(false, guid, out createdNew);
if (createdNew)
{
Process process = Process.GetCurrentProcess();
string value = process.Id.ToString();
Environment.SetEnvironmentVariable(guid, value, EnvironmentVariableTarget.User);
}
else
{
string value = Environment.GetEnvironmentVariable(guid, EnvironmentVariableTarget.User);
Process process = null;
int processId = -1;
if (int.TryParse(value, out processId))
process = Process.GetProcessById(processId);
if (process == null || !SetForegroundWindow(process.MainWindowHandle))
MessageBox.Show("Unable to start application. An instance of this application is already running.");
}
return createdNew;
}
Edit: You can also store and initialize mutex and createdNew statically, but you'll need to explicitly dispose/release the mutex once you're done with it. Personally, I prefer keeping the mutex local as it will be automatically disposed of even if the application closes without ever reaching the end of Main.
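For completeness, a small sketch of that static variant (the guid placeholder mirrors the snippet above; everything else is an illustrative assumption):
using System;
using System.Threading;

static class StaticMutexVariant
{
    // Arbitrary placeholder; use a GUID unique to your application.
    static readonly string guid = "<Application Guid>";
    static Mutex mutex;

    static void Main()
    {
        bool createdNew;
        mutex = new Mutex(false, guid, out createdNew);
        try
        {
            if (!createdNew)
                return; // another instance already created the named mutex

            // Application startup code.
        }
        finally
        {
            // The static field keeps the mutex reachable for the whole process,
            // so close it explicitly once the application is done with it.
            mutex.Close();
        }
    }
}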
A: Here's the same thing implemented via Event.
public enum ApplicationSingleInstanceMode
{
CurrentUserSession,
AllSessionsOfCurrentUser,
Pc
}
public class ApplicationSingleInstancePerUser: IDisposable
{
private readonly EventWaitHandle _event;
/// <summary>
/// Shows whether the current instance of the application is the first
/// </summary>
public bool FirstInstance { get; private set; }
/// <summary>
/// Initializes
/// </summary>
/// <param name="applicationName">The application name</param>
/// <param name="mode">The single mode</param>
public ApplicationSingleInstancePerUser(string applicationName, ApplicationSingleInstanceMode mode = ApplicationSingleInstanceMode.CurrentUserSession)
{
string name;
if (mode == ApplicationSingleInstanceMode.CurrentUserSession)
name = $"Local\\{applicationName}";
else if (mode == ApplicationSingleInstanceMode.AllSessionsOfCurrentUser)
name = $"Global\\{applicationName}{Environment.UserDomainName}";
else
name = $"Global\\{applicationName}";
try
{
bool created;
_event = new EventWaitHandle(false, EventResetMode.ManualReset, name, out created);
FirstInstance = created;
}
catch
{
}
}
public void Dispose()
{
_event.Dispose();
}
}
A: This is how I ended up taking care of this issue. Note that debug code is still in there for testing. This code is within the OnStartup in the App.xaml.cs file. (WPF)
// Process already running ?
if (Process.GetProcessesByName(Process.GetCurrentProcess().ProcessName).Length > 1)
{
// Show your error message
MessageBox.Show("xxx is already running. \r\n\r\nIf the original process is hung up you may need to restart your computer, or kill the current xxx process using the task manager.", "xxx is already running!", MessageBoxButton.OK, MessageBoxImage.Exclamation);
// This process
Process currentProcess = Process.GetCurrentProcess();
// Get all processes running on the local computer.
Process[] localAll = Process.GetProcessesByName(Process.GetCurrentProcess().ProcessName);
// ID of this process...
int temp = currentProcess.Id;
MessageBox.Show("This Process ID: " + temp.ToString());
for (int i = 0; i < localAll.Length; i++)
{
// Find the other process
if (localAll[i].Id != currentProcess.Id)
{
MessageBox.Show("Original Process ID (Switching to): " + localAll[i].Id.ToString());
// Switch to it...
SetForegroundWindow(localAll[i].MainWindowHandle);
}
}
Application.Current.Shutdown();
}
This may have issues that I have not caught yet. If I run into any I'll update my answer.
A: A time saving solution for C# Winforms...
Program.cs:
using System;
using System.Windows.Forms;
// needs reference to Microsoft.VisualBasic
using Microsoft.VisualBasic.ApplicationServices;
namespace YourNamespace
{
public class SingleInstanceController : WindowsFormsApplicationBase
{
public SingleInstanceController()
{
this.IsSingleInstance = true;
}
protected override void OnStartupNextInstance(StartupNextInstanceEventArgs e)
{
e.BringToForeground = true;
base.OnStartupNextInstance(e);
}
protected override void OnCreateMainForm()
{
this.MainForm = new Form1();
}
}
static class Program
{
[STAThread]
static void Main()
{
Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault(false);
string[] args = Environment.GetCommandLineArgs();
SingleInstanceController controller = new SingleInstanceController();
controller.Run(args);
}
}
}
A: Please check the proposed solution from here. It uses a semaphore to determine whether an existing instance is already running, works for a WPF application, and can pass arguments from the second instance to the first, already-running instance by using a TcpListener and a TcpClient:
It works also for .NET Core, not only for .NET Framework.
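The linked solution has the full code; below is only a rough sketch of the idea, where the semaphore name, port number, and helper method are arbitrary assumptions for illustration rather than the linked solution's actual code:
using System;
using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Threading;

static class SingleInstanceSketch
{
    const string SemaphoreName = "MyApp_SingleInstance"; // placeholder: any name unique to your app
    const int Port = 52917;                              // placeholder: any free local port

    static Semaphore semaphore; // kept in a static field for the lifetime of the process

    // Returns true if this is the first instance; otherwise forwards args to it and returns false.
    public static bool TryBecomeFirstInstance(string[] args, Action<string> onArgsReceived)
    {
        bool createdNew;
        semaphore = new Semaphore(0, 1, SemaphoreName, out createdNew);
        if (createdNew)
        {
            // First instance: listen for arguments forwarded by later instances.
            var listener = new TcpListener(IPAddress.Loopback, Port);
            listener.Start();
            var thread = new Thread(() =>
            {
                while (true)
                {
                    using (var client = listener.AcceptTcpClient())
                    using (var reader = new StreamReader(client.GetStream()))
                        onArgsReceived(reader.ReadToEnd());
                }
            }) { IsBackground = true };
            thread.Start();
            return true;
        }

        // Second instance: forward the command line to the first instance and exit.
        using (var client = new TcpClient("127.0.0.1", Port))
        using (var writer = new StreamWriter(client.GetStream()))
        {
            writer.Write(string.Join(" ", args));
        }
        return false;
    }
}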
A: Normally, this is the code I use for single-instance Windows Forms applications:
[STAThread]
public static void Main()
{
String assemblyName = Assembly.GetExecutingAssembly().GetName().Name;
using (Mutex mutex = new Mutex(false, assemblyName))
{
if (!mutex.WaitOne(0, false))
{
Boolean shownProcess = false;
Process currentProcess = Process.GetCurrentProcess();
foreach (Process process in Process.GetProcessesByName(currentProcess.ProcessName))
{
if (!process.Id.Equals(currentProcess.Id) && process.MainModule.FileName.Equals(currentProcess.MainModule.FileName) && !process.MainWindowHandle.Equals(IntPtr.Zero))
{
IntPtr windowHandle = process.MainWindowHandle;
if (NativeMethods.IsIconic(windowHandle))
NativeMethods.ShowWindow(windowHandle, ShowWindowCommand.Restore);
NativeMethods.SetForegroundWindow(windowHandle);
shownProcess = true;
}
}
if (!shownProcess)
MessageBox.Show(String.Format(CultureInfo.CurrentCulture, "An instance of {0} is already running!", assemblyName), assemblyName, MessageBoxButtons.OK, MessageBoxIcon.Asterisk, MessageBoxDefaultButton.Button1, (MessageBoxOptions)0);
}
else
{
Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault(false);
Application.Run(new Form());
}
}
}
Where native components are:
[DllImport("User32.dll", CharSet = CharSet.Unicode, ExactSpelling = true, SetLastError = true)]
[return: MarshalAs(UnmanagedType.Bool)]
internal static extern Boolean IsIconic([In] IntPtr windowHandle);
[DllImport("User32.dll", CharSet = CharSet.Unicode, ExactSpelling = true, SetLastError = true)]
[return: MarshalAs(UnmanagedType.Bool)]
internal static extern Boolean SetForegroundWindow([In] IntPtr windowHandle);
[DllImport("User32.dll", CharSet = CharSet.Unicode, ExactSpelling = true, SetLastError = true)]
[return: MarshalAs(UnmanagedType.Bool)]
internal static extern Boolean ShowWindow([In] IntPtr windowHandle, [In] ShowWindowCommand command);
public enum ShowWindowCommand : int
{
Hide = 0x0,
ShowNormal = 0x1,
ShowMinimized = 0x2,
ShowMaximized = 0x3,
ShowNormalNotActive = 0x4,
Minimize = 0x6,
ShowMinimizedNotActive = 0x7,
ShowCurrentNotActive = 0x8,
Restore = 0x9,
ShowDefault = 0xA,
ForceMinimize = 0xB
}
A: You can also use the CodeFluent Runtime, which is a free set of tools. It provides a SingleInstance class to implement a single-instance application.
A: Here is a solution:
Protected Overrides Sub OnStartup(e As StartupEventArgs)
Const appName As String = "TestApp"
Dim createdNew As Boolean
_mutex = New Mutex(True, appName, createdNew)
If Not createdNew Then
'app is already running! Exiting the application
MessageBox.Show("Application is already running.")
Application.Current.Shutdown()
End If
MyBase.OnStartup(e)
End Sub
A: Here are my two cents:
static class Program
{
[STAThread]
static void Main()
{
bool createdNew;
using (new Mutex(true, "MyApp", out createdNew))
{
if (createdNew) {
Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault(false);
var mainClass = new SynGesturesLogic();
Application.ApplicationExit += mainClass.tray_exit;
Application.Run();
}
else
{
var current = Process.GetCurrentProcess();
foreach (var process in Process.GetProcessesByName(current.ProcessName).Where(process => process.Id != current.Id))
{
NativeMethods.SetForegroundWindow(process.MainWindowHandle);
break;
}
}
}
}
}
A: I like a solution that allows multiple instances if the exe is called from a different path. I modified Method 1 of CharithJ's solution:
static class Program {
[DllImport("user32.dll")]
private static extern bool ShowWindow(IntPtr hWnd, Int32 nCmdShow);
[DllImport("User32.dll")]
public static extern Int32 SetForegroundWindow(IntPtr hWnd);
[STAThread]
static void Main() {
Process currentProcess = Process.GetCurrentProcess();
foreach (var process in Process.GetProcesses()) {
try {
if ((process.Id != currentProcess.Id) &&
(process.ProcessName == currentProcess.ProcessName) &&
(process.MainModule.FileName == currentProcess.MainModule.FileName)) {
ShowWindow(process.MainWindowHandle, 5); // const int SW_SHOW = 5; //Activates the window and displays it in its current size and position.
SetForegroundWindow(process.MainWindowHandle);
return;
}
} catch (Exception ex) {
//ignore Exception "Access denied "
}
}
Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault(false);
Application.Run(new Form1());
}
}
A: Simply using a StreamWriter, how about this?
System.IO.StreamWriter OpenFlag = null; //globally
and
try
{
OpenFlag = new StreamWriter(Path.GetTempPath() + "OpenedIfRunning");
}
catch (System.IO.IOException) //file in use
{
Environment.Exit(0);
}
A: My favourite solution is from MVP Daniel Vaughan:
Enforcing Single Instance Wpf Applications
It uses a MemoryMappedFile to send command line arguments to the first instance:
/// <summary>
/// This class allows restricting the number of executables in execution, to one.
/// </summary>
public sealed class SingletonApplicationEnforcer
{
readonly Action<IEnumerable<string>> processArgsFunc;
readonly string applicationId;
Thread thread;
string argDelimiter = "_;;_";
/// <summary>
/// Gets or sets the string that is used to join
/// the string array of arguments in memory.
/// </summary>
/// <value>The arg delimeter.</value>
public string ArgDelimeter
{
get
{
return argDelimiter;
}
set
{
argDelimiter = value;
}
}
/// <summary>
/// Initializes a new instance of the <see cref="SingletonApplicationEnforcer"/> class.
/// </summary>
/// <param name="processArgsFunc">A handler for processing command line args
/// when they are received from another application instance.</param>
/// <param name="applicationId">The application id used
/// for naming the <seealso cref="EventWaitHandle"/>.</param>
public SingletonApplicationEnforcer(Action<IEnumerable<string>> processArgsFunc,
string applicationId = "DisciplesRock")
{
if (processArgsFunc == null)
{
throw new ArgumentNullException("processArgsFunc");
}
this.processArgsFunc = processArgsFunc;
this.applicationId = applicationId;
}
/// <summary>
/// Determines if this application instance is not the singleton instance.
/// If this application is not the singleton, then it should exit.
/// </summary>
/// <returns><c>true</c> if the application should shutdown,
/// otherwise <c>false</c>.</returns>
public bool ShouldApplicationExit()
{
bool createdNew;
string argsWaitHandleName = "ArgsWaitHandle_" + applicationId;
string memoryFileName = "ArgFile_" + applicationId;
EventWaitHandle argsWaitHandle = new EventWaitHandle(
false, EventResetMode.AutoReset, argsWaitHandleName, out createdNew);
GC.KeepAlive(argsWaitHandle);
if (createdNew)
{
/* This is the main, or singleton application.
* A thread is created to service the MemoryMappedFile.
* We repeatedly examine this file each time the argsWaitHandle
* is Set by a non-singleton application instance. */
thread = new Thread(() =>
{
try
{
using (MemoryMappedFile file = MemoryMappedFile.CreateOrOpen(memoryFileName, 10000))
{
while (true)
{
argsWaitHandle.WaitOne();
using (MemoryMappedViewStream stream = file.CreateViewStream())
{
var reader = new BinaryReader(stream);
string args;
try
{
args = reader.ReadString();
}
catch (Exception ex)
{
Debug.WriteLine("Unable to retrieve string. " + ex);
continue;
}
string[] argsSplit = args.Split(new string[] { argDelimiter },
StringSplitOptions.RemoveEmptyEntries);
processArgsFunc(argsSplit);
}
}
}
}
catch (Exception ex)
{
Debug.WriteLine("Unable to monitor memory file. " + ex);
}
});
thread.IsBackground = true;
thread.Start();
}
else
{
/* Non singleton application instance.
* Should exit, after passing command line args to singleton process,
* via the MemoryMappedFile. */
using (MemoryMappedFile mmf = MemoryMappedFile.OpenExisting(memoryFileName))
{
using (MemoryMappedViewStream stream = mmf.CreateViewStream())
{
var writer = new BinaryWriter(stream);
string[] args = Environment.GetCommandLineArgs();
string joined = string.Join(argDelimiter, args);
writer.Write(joined);
}
}
argsWaitHandle.Set();
}
return !createdNew;
}
}
A: Here is my entire App.xaml.cs, this code also brings the launched program instance to the foreground:
public partial class App : Application
{
private static Mutex _mutex = null;
[DllImport("user32.dll")]
static extern bool SetForegroundWindow(IntPtr hWnd);
[DllImport("user32.dll")]
static extern bool ShowWindow(IntPtr hWnd, int nCmdShow);
protected override void OnStartup(StartupEventArgs e)
{
_mutex = new Mutex(true, "YourMutexName", out var createdNew);
if (!createdNew)
{
using (var currentProcess = Process.GetCurrentProcess())
{
foreach (var process in Process.GetProcessesByName(currentProcess.ProcessName))
{
if (process.Id != currentProcess.Id)
{
ShowWindow(process.MainWindowHandle, 9);
SetForegroundWindow(process.MainWindowHandle);
}
process.Dispose();
}
}
// app is already running! Exiting the application
Shutdown();
}
base.OnStartup(e);
}
protected override void OnExit(ExitEventArgs e)
{
_mutex.Dispose();
base.OnExit(e);
}
}
A: Based on Matt Davis' answer, wrapped into a class for convenience.
public static class SingleAppInstanceChecker
{
/// <summary>
/// Arbitrary unique string
/// </summary>
private static Mutex _mutex = new Mutex(true, "0d12ad74-026f-40c3-bdae-e178ddee8602");
public static bool IsNotRunning()
{
return _mutex.WaitOne(TimeSpan.Zero, true);
}
}
Example usage:
private void Application_Startup(object sender, StartupEventArgs e)
{
if (!SingleAppInstanceChecker.IsNotRunning())
{
MessageBox.Show("Application is already running.");
// Exit application using:
// Environment.Exit(1);
// Application.Current.Shutdown();
// Etc...
return;
}
// Allow startup and continue with normal processing
// ...
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19147",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "732"
} |
Q: How to build a basic iterator? How would one create an iterative function (or iterator object) in python?
A: If you are looking for something short and simple, maybe this will be enough for you:
class A(object):
    def __init__(self, l):
        self.data = l
    def __iter__(self):
        return iter(self.data)
example of usage:
In [3]: a = A([2,3,4])
In [4]: [i for i in a]
Out[4]: [2, 3, 4]
A: Iterator objects in Python conform to the iterator protocol, which basically means they provide two methods: __iter__() and __next__().
*
*The __iter__() method returns the iterator object and is implicitly called
at the start of loops.
*The __next__() method returns the next value and is implicitly called at each loop increment. This method raises a StopIteration exception when there are no more values to return, which is implicitly captured by looping constructs to stop iterating.
Here's a simple example of a counter:
class Counter:
    def __init__(self, low, high):
        self.current = low - 1
        self.high = high
    def __iter__(self):
        return self
    def __next__(self): # Python 2: def next(self)
        self.current += 1
        if self.current < self.high:
            return self.current
        raise StopIteration
for c in Counter(3, 9):
    print(c)
This will print:
3
4
5
6
7
8
This is easier to write using a generator, as covered in a previous answer:
def counter(low, high):
    current = low
    while current < high:
        yield current
        current += 1
for c in counter(3, 9):
    print(c)
The printed output will be the same. Under the hood, the generator object supports the iterator protocol and does something roughly similar to the class Counter.
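To make the protocol concrete, here is a small snippet of my own (not from the original answer) that drives the same generator by hand with iter() and next(), which is essentially what a for loop does under the hood:
it = iter(counter(3, 5))  # calling iter() on a generator returns the generator itself
print(next(it))  # 3
print(next(it))  # 4
# a further next(it) raises StopIteration, which a for loop catches to stop iterating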
David Mertz's article, Iterators and Simple Generators, is a pretty good introduction.
A: There are four ways to build an iterative function:
*
*create a generator (uses the yield keyword)
*use a generator expression (genexp)
*create an iterator (defines __iter__ and __next__ (or next in Python 2.x))
*create a class that Python can iterate over on its own (defines __getitem__)
Examples:
# generator
def uc_gen(text):
    for char in text.upper():
        yield char
# generator expression
def uc_genexp(text):
    return (char for char in text.upper())
# iterator protocol
class uc_iter():
    def __init__(self, text):
        self.text = text.upper()
        self.index = 0
    def __iter__(self):
        return self
    def __next__(self):
        try:
            result = self.text[self.index]
        except IndexError:
            raise StopIteration
        self.index += 1
        return result
# getitem method
class uc_getitem():
    def __init__(self, text):
        self.text = text.upper()
    def __getitem__(self, index):
        return self.text[index]
To see all four methods in action:
for iterator in uc_gen, uc_genexp, uc_iter, uc_getitem:
    for ch in iterator('abcde'):
        print(ch, end=' ')
    print()
Which results in:
A B C D E
A B C D E
A B C D E
A B C D E
Note:
The two generator types (uc_gen and uc_genexp) cannot be reversed(); the plain iterator (uc_iter) would need the __reversed__ magic method (which, according to the docs, must return a new iterator, but returning self works (at least in CPython)); and the getitem iterable (uc_getitem) must have the __len__ magic method:
# for uc_iter we add __reversed__ and update __next__
def __reversed__(self):
    self.index = -1
    return self
def __next__(self):
    try:
        result = self.text[self.index]
    except IndexError:
        raise StopIteration
    self.index += -1 if self.index < 0 else +1
    return result
# for uc_getitem
def __len__(self):
    return len(self.text)
To answer Colonel Panic's secondary question about an infinite lazily evaluated iterator, here are those examples, using each of the four methods above:
# generator
def even_gen():
    result = 0
    while True:
        yield result
        result += 2
# generator expression
def even_genexp():
    return (num for num in even_gen()) # or even_iter or even_getitem
    # not much value under these circumstances
# iterator protocol
class even_iter():
    def __init__(self):
        self.value = 0
    def __iter__(self):
        return self
    def __next__(self):
        next_value = self.value
        self.value += 2
        return next_value
# getitem method
class even_getitem():
    def __getitem__(self, index):
        return index * 2
import random
for iterator in even_gen, even_genexp, even_iter, even_getitem:
    limit = random.randint(15, 30)
    count = 0
    for even in iterator():
        print(even, end=' ')
        count += 1
        if count >= limit:
            break
    print()
Which results in (at least for my sample run):
0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32 34 36 38 40 42 44 46 48 50 52 54
0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32 34 36 38
0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30
0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32
How to choose which one to use? This is mostly a matter of taste. The two methods I see most often are generators and the iterator protocol, as well as a hybrid (__iter__ returning a generator).
Generator expressions are useful for replacing list comprehensions (they are lazy and so can save on resources).
If one needs compatibility with earlier Python 2.x versions use __getitem__.
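As a rough sketch of that hybrid style (my own illustration; the class name uc_hybrid is made up), __iter__ is written as a generator, so no __next__ method or StopIteration handling is needed:
class uc_hybrid:
    def __init__(self, text):
        self.text = text.upper()
    def __iter__(self):
        # a generator method: each call to iter() returns a fresh iterator
        for char in self.text:
            yield char
for ch in uc_hybrid('abcde'):
    print(ch, end=' ')  # A B C D E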
A: All answers on this page are really great for a complex object. But for those containing builtin iterable types as attributes, like str, list, set or dict, or any implementation of collections.Iterable, you can omit certain things in your class.
class Test(object):
    def __init__(self, string):
        self.string = string
    def __iter__(self):
        # since your string is already iterable
        return (ch for ch in self.string)
        # or simply
        return self.string.__iter__()
        # also
        return iter(self.string)
It can be used like:
for x in Test("abcde"):
    print(x)
# prints
# a
# b
# c
# d
# e
A: Include the following code in your class code.
def __iter__(self):
    for x in self.iterable:
        yield x
Make sure that you replace self.iterable with the iterable which you iterate through.
Here's some example code:
class someClass:
    def __init__(self, list):
        self.list = list
    def __iter__(self):
        for x in self.list:
            yield x
var = someClass([1, 2, 3, 4, 5])
for num in var:
    print(num)
Output
1
2
3
4
5
Note: Since strings are also iterable, they can also be used as an argument for the class
foo = someClass("Python")
for x in foo:
    print(x)
Output
P
y
t
h
o
n
A: This is an iterable function without yield. It makes use of the iter function and a closure which keeps its state in a mutable (a list) in the enclosing scope, for Python 2.
def count(low, high):
    counter = [0]
    def tmp():
        val = low + counter[0]
        if val < high:
            counter[0] += 1
            return val
        return None
    return iter(tmp, None)
For Python 3, the closure state is kept in an immutable variable in the enclosing scope, and nonlocal is used in the local scope to update the state variable.
def count(low, high):
    counter = 0
    def tmp():
        nonlocal counter
        val = low + counter
        if val < high:
            counter += 1
            return val
        return None
    return iter(tmp, None)
Test:
for i in count(1, 10):
    print(i)
1
2
3
4
5
6
7
8
9
A: This question is about iterable objects, not about iterators. In Python, sequences are iterable too so one way to make an iterable class is to make it behave like a sequence, i.e. give it __getitem__ and __len__ methods. I have tested this on Python 2 and 3.
class CustomRange:
    def __init__(self, low, high):
        self.low = low
        self.high = high
    def __getitem__(self, item):
        if item >= len(self):
            raise IndexError("CustomRange index out of range")
        return self.low + item
    def __len__(self):
        return self.high - self.low
cr = CustomRange(0, 10)
for i in cr:
    print(i)
A: I see some of you doing return self in __iter__. I just wanted to note that __iter__ itself can be a generator (thus removing the need for __next__ and raising StopIteration exceptions):
class range:
    def __init__(self, a, b):
        self.a = a
        self.b = b
    def __iter__(self):
        i = self.a
        while i < self.b:
            yield i
            i += 1
Of course here one might as well directly make a generator, but for more complex classes it can be useful.
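For example, a quick usage check of the class above (my own illustration; note that this class shadows the built-in range):
r = range(2, 6)  # the custom class defined above, not the built-in
print(list(r))  # [2, 3, 4, 5]
print([x * x for x in range(1, 4)])  # [1, 4, 9]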
A: First of all the itertools module is incredibly useful for all sorts of cases in which an iterator would be useful, but here is all you need to create an iterator in python:
yield
Isn't that cool? Yield can be used to replace a normal return in a function. It returns the object just the same, but instead of destroying state and exiting, it saves state for when you want to execute the next iteration. Here is an example of it in action pulled directly from the itertools function list:
def count(n=0):
    while True:
        yield n
        n += 1
As stated in the function's description (it's the count() function from the itertools module...), it produces an iterator that returns consecutive integers starting with n.
Generator expressions are a whole other can of worms (awesome worms!). They may be used in place of a List Comprehension to save memory (list comprehensions create a list in memory that is destroyed after use if not assigned to a variable, but generator expressions can create a Generator Object... which is a fancy way of saying Iterator). Here is an example of a generator expression definition:
gen = (n for n in xrange(0,11))
This is very similar to our iterator definition above except the full range is predetermined to be between 0 and 10.
I just found xrange() (surprised I hadn't seen it before...) and added it to the above example. xrange() is an iterable version of range() which has the advantage of not prebuilding the list. It would be very useful if you had a giant corpus of data to iterate over and only had so much memory to do it in.
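As a tiny illustration of that memory difference (Python 2 semantics; in Python 3, range() already behaves like the old xrange()):
total_lazy = sum(xrange(10 ** 6))  # iterates lazily, never builds the full list
total_eager = sum(range(10 ** 6))  # Python 2: builds a million-element list first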
A:
class uc_iter():
    def __init__(self):
        self.value = 0
    def __iter__(self):
        return self
    def __next__(self):
        next_value = self.value
        self.value += 2
        return next_value
Improving on the previous answer, one of the advantages of using a class is that you can add __call__ to return self.value or even next_value.
class uc_iter():
    def __init__(self):
        self.value = 0
    def __iter__(self):
        return self
    def __next__(self):
        next_value = self.value
        self.value += 2
        return next_value
    def __call__(self):
        next_value = self.value
        self.value += 2
        return next_value
c = uc_iter()
print([c() for _ in range(10)])
print([next(c) for _ in range(5)])
# [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
# [20, 22, 24, 26, 28]
Another example of a class, based on Python's Random, that can be both called and iterated can be seen in my implementation here
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19151",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "672"
} |
Q: PHP Script to populate MySQL tables Is anyone aware of a script/class (preferably in PHP) that would parse a given MySQL table's structure and then fill it with x number of rows of random test data based on the field types?
I have never seen or heard of something like this and thought I would check before writing one myself.
A: What you are after would be a data generator.
There is one available here which I had bookmarked, but I haven't got around to trying it yet.
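If you end up writing one yourself, the core idea is small enough to sketch. Here is a rough illustration in Python rather than PHP (the table and column definitions are made up; a real version would read the column types from DESCRIBE or INFORMATION_SCHEMA and execute the statements through a MySQL driver):
import random, string
# hypothetical structure you would normally pull from DESCRIBE <table>
columns = {"id": "int", "name": "varchar(32)", "price": "decimal", "created": "datetime"}
def random_value(col_type):
    if col_type.startswith("int"):
        return str(random.randint(1, 10000))
    if col_type.startswith("varchar"):
        size = int(col_type[col_type.index("(") + 1:-1])
        return "'" + "".join(random.choices(string.ascii_letters, k=min(size, 10))) + "'"
    if col_type.startswith("decimal"):
        return "%.2f" % random.uniform(0, 1000)
    if col_type.startswith("datetime"):
        return "'2008-0%d-1%d 12:00:00'" % (random.randint(1, 9), random.randint(0, 9))
    return "NULL"
def insert_statement(table, cols):
    names = ", ".join(cols)
    values = ", ".join(random_value(t) for t in cols.values())
    return "INSERT INTO %s (%s) VALUES (%s);" % (table, names, values)
for _ in range(5):
    print(insert_statement("products", columns))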
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19162",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: Are there reasons not to use JSONP for AJA~X requests? If you're building an AJA~Xy app, are there any downsides to using JSONP requests/responses even if you're not planning on any cross-domain requests?
The only thing I can think of is that there are a couple extra bytes for the callback wrapper...
Edit:
I found this which also suggests security and error handling as potential problems...
There's no error handling. The script injection either works, or it doesn't.
If there's an error from the injection, it'll hit the page, and short of a window wide error handler (bad, bad, very bad), you need to be sure the return value is valid on the server side.
I don't think error handling is much of a problem... most of us would use a library to generate the JSON... the well-formedness of my response isn't a concern for this question.
and security:
There are documents out on the web that can help, but as a cursory check, I would check the referrer in the server side script.
it seems like this is a potential problem with any type of response... certainly, there's nothing unique to JSONP in the security arena...?
A: Retrieving errors when a jsonp call fails is possible.
http://code.google.com/p/jquery-jsonp/
Hope it helps.
A: I would say the biggest limitation might be the extra overhead of having the browser render a script tag to call the server. Plus, is JSONP really considered AJAX, since it doesn't actually use the XMLHttpRequest object?
A: Downside? It's fairly limited - you trigger a "GET" request and get back some script that's executed. You don't get error handling if your server throws an error, so you need to wrap all errors in JSON as well. You can't really cancel or retry the request. You're at the mercy of the various browser author opinions of "correct" behavior for dynamically-generated <script> tags. Debugging is somewhat more difficult.
That said, I've used it on occasion, and haven't suffered. YMMV.
A: Here is another bit you may want to consider with JSONP... possible memory leaks:
http://neil.fraser.name/news/2009/07/27/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19173",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Is there a way to check to see if the user is currently idle? There is some documentation on the internet that shows that Windows changes the behavior of the NotifyIcon.BalloonTipShown command if the user is currently idle and this is detected by checking for keyboard and mouse events. I am currently working on an application that spends most of its time in the system tray, but pop-ups up multiple balloon tips from time to time and I would like to prevent the user from missing any of them if they are currently away from the system. Since any currently displayed balloon tips are destroyed if a new one is displayed, I want to hold off on displaying them if the user is away.
As such, is there any way to check to see if the user is currently idle if the application is minimized to the system tray?
A: How about the Win32 GetLastInputInfo function?
using System.Runtime.InteropServices;
struct LASTINPUTINFO
{
    public uint cbSize;
    public uint dwTime;
}
[DllImport("User32.dll")]
static extern bool GetLastInputInfo(ref LASTINPUTINFO plii);
// Milliseconds since the last keyboard or mouse input
static uint GetIdleTime()
{
    LASTINPUTINFO lii = new LASTINPUTINFO();
    lii.cbSize = (uint)Marshal.SizeOf(typeof(LASTINPUTINFO));
    GetLastInputInfo(ref lii);
    return (uint)Environment.TickCount - lii.dwTime;
}
A: Managed code
Check the position of the mouse every second. If there are new messages for the user, hold on to them until you detect any movement of the mouse.
Unmanaged code
See Detecting Idle Time with Mouse and Keyboard Hooks
A: Thanks for the responses, I ended up going with the GetLastInputInfo function as it is pretty straightforward to implement in the application I'm working on.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19185",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How to make a tree in C++? How do I make a tree data structure in C++ that uses iterators instead of pointers? I couldn't find anything in the STL that can do this. What I would like to do is to be able to create and manipulate trees like this:
#include <iostream>
#include <tree>
using namespace std;
int main()
{
tree<int> myTree;
tree<int>::iterator i = myTree.root();
*i = 42;
tree<int>::iterator j = i.add_child();
*j = 777;
j = j.parent();
if (i == myTree.root() && i == j) cout << "i and j are both pointing to the root\n";
return 0;
}
Thank you, tree.hh seems to be just what I was looking for.
If this is for gaining the benefit of a data-structure holding arbitrary index types, optimized for searching and good at insertion then consider using a map.
A map is an associative container that has performance guarantees identical to those of a tree: logarithmic searching, logarithmic insertion, logarithmic deletion, linear space. Internally they are often implemented as red-black trees, although that is not a guarantee. Still, as an STL user all you should care about is the performance guarantees of the STL algorithms and data-structures. Whether they're implemented as trees or little green men shouldn't matter to you.
I'm not sure if a map is what I need, but thanks for the info. I will remember to use maps whenever possible instead of implementing trees.
A: Here is tree.hh, which is quite close to what you want to do, though a bit different.
Here is a piece of code extracted from its website.
int main(int, char **)
{
tree<string> tr;
tree<string>::iterator top, one, two, loc, banana;
top=tr.begin();
one=tr.insert(top, "one");
two=tr.append_child(one, "two");
tr.append_child(two, "apple");
banana=tr.append_child(two, "banana");
tr.append_child(banana,"cherry");
tr.append_child(two, "peach");
tr.append_child(one,"three");
loc=find(tr.begin(), tr.end(), "two");
if(loc!=tr.end()) {
tree<string>::sibling_iterator sib=tr.begin(loc);
while(sib!=tr.end(loc)) {
cout << (*sib) << endl;
++sib;
}
cout << endl;
tree<string>::iterator sib2=tr.begin(loc);
tree<string>::iterator end2=tr.end(loc);
while(sib2!=end2) {
for(int i=0; i<tr.depth(sib2)-2; ++i)
cout << " ";
cout << (*sib2) << endl;
++sib2;
}
}
}
Now what's different? Your implementation is simpler when it comes to appending a node to the tree.
Though your version is indisputably simpler, the developer of this library probably wanted to have some info accessible without browsing the tree, such as the size of the tree, for instance.
I also assume he didn't want to store the root on all nodes for performance reasons.
So if you want to implement it your way, I suggest you keep most of the logic and add the link to the parent tree in the iterator and rewrite append a bit.
A: Why would you want to do that? If this is for learning purposes then you can write your own tree data structure. If this is for gaining the benefit of a data-structure holding arbitrary index types, optimized for searching and good at insertion then consider using a map.
A map is an associative container that has performance guarantees identical to those of a tree: logarithmic searching, logarithmic insertion, logarithmic deletion, linear space. Internally they are often implemented as red-black trees, although that is not a guarantee. Still, as an STL user all you should care about is the performance guarantees of the STL algorithms and data-structures. Whether they're implemented as trees or little green men shouldn't matter to you.
As a side note, there's no such thing as a root() function. All STL containers have the begin() function implementing the conceptual beginning of a container. The kind of iterator returned by that function depends on the characteristics of the container.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19193",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Accurev SCM Does anyone use Accurev for Source Control Management? We are switching (eventually) from StarTeam to Accurev.
My initial impression is that the GUI tool is severely lacking, however the underlying engine, and the branches as streams concept is incredible.
The biggest difficulty we are facing is assessing our own DIY tools that interfaced with starteam, and either replacing them with DIY new tools, or finding and purchasing appropriate replacements.
Additionally, is anyone using the AccuWork component for Issue management? Starteam had a very nice change request system, and AccuWork does not come close to matching it. We are evaluating either using Accuwork, or buying a 3rd party package such as JIRA.
Opinions?
A: Put me in the anti-Accurev camp. We moved to it recently, and it's been horrible. We have a number of quite large projects, and Accurev seems to be almost unusable for the quantity of files we have. Over a VPN, forget it. It takes forever to update, the cross-stream management doesn't work in any intuitive way, the UI is complex and slow.
Additionally, support for it in a number of tools we use is either non-existent or poorly implemented.
Add the various bugs that keep popping up, and I'd say we wasted a great deal of money for something that is done much better by open-source software, such as Subversion. We still use CVS for some projects, and even it is so much better for normal operations and workflow that I'd pick it over Accurev.
A: Another big thumbs down for Accurev. Every simple operation seems to become horribly complex - cryptic error messages send you scrambling to the manual, only to find theoretical explanations about concepts that shouldn't have existed in the first place.
The UI is so slow and unresponsive it makes you want to gouge your eyes out.
Stay away.
A: Accurev has some great concepts; but suffers from:
1) many, many inconsistencies in the command-line interface.
2) many bugs and nuisances in the application/interface. E.g. their time-safe property is not actually time-safe at all because of several bugs that affect snapshots and pass-through streams.
3) major bugs in critically important features... As above: time-safe bugs; bugs in merging by issues.
4) they are a year behind where they should be, because they wasted a whole year on trying to move their backend to a database - this will be version 5, which may never see the light of day.
5) The marketing is excellent; but the product does not live up to the marketing hype.
6) every release has had major critical bugs that have required them to release immediate hotfixes. This has been a major disruption for us. And these aren't minor bugs.
7) doesn't scale well... takes up a huge amount of disk space and gets slower over time
Having said all that; it's still a good product; but if I were to do it all again I'd consider git instead.
A: Sweet mother of god is Accurev awful. 300k lines of code? Try it with millions, with hundreds of developers working on scores of projects.
Continuous integration? Sure, that's something that developers can approximate by doing regular merges in, say, Perforce, Git, Mercurial, or any of the countless other tools that actually get the work done, but it becomes the choice of the developer as to how to proceed. For architects, leads, build engineers, or anyone who actually uses source control to slice and dice, Accurev is horrific.
I went to an "Advanced Accurev topics" talk, and the first tidbit was a large shell command for clearing out Accurev's client-side caching/sync mechanism to correct for when Accurev updates silently fail to pull down files that should be updated.
A Timestamp Optimization checkbox? Deep overlaps? Modal dialogs with only one background process? (That would be okay if those processes were anything other than glacial.) Cascading graphs of selectively configured streams put in place just to be able to pull off components and cross-merging? Updates aren't actually atomic without time-locking?! (honest answer: update again)
Every time I try and do anything serious in Accurev, I feel like I'm playing Russian Roulette at a table with HAL9000, Skynet, and a Speak & Spell. On the line? Four more hours of my life.
Why am I here, griping about Accurev? Because my other machine has taken a full four hours to try to update 10MB of files over VPN. Why? Because some other change has come up stranded and requires some sort of catastrophic resync scan for elements. The worst part? All of these files were on a workspace on the same computer. We're talking about several hours just to get a recently updated workspace to the point where I can put the right notches in the stream history.
One word Accurev review: Avoid
A: After 4 months, my very negative opinion hasn't changed at all. While Accurev has some very nice concepts, the slowness and complexity far outweigh the advantages, at least for us. Aside from the usual complaints about the GUI and the obscurity of a number of features, one of the absolutely most annoying faults is how many hoops you have to jump through just to update a workspace, made much worse by the inability to update only one directory (or directory tree).
A typical update consists of waiting a loooong time to be told you have overlaps. Of course, you aren't told what the overlaps are. So, you have to do an overlap search, wait another loooong time, resolve the overlaps, do another update, wait a looooong time, and hope it worked this time.
Some of our remote developers update as infrequently as possible because the update time over VPN is absurd. Granted, we have an enormous number of source files across a number of products, and if we reorganized everything we could probably improve performance.
However, we hired Accurev (at a significant cost) to come in and tell us how to set everything up. Still sucks. Aside from that, we really shouldn't have to reorganize the way we work with our sources to suit a source-code-control system. It's a tool, not a business model.
Lastly, we've been trying out an Accurev plugin for IntelliJ, written by Accurev. It works just as poorly as the rest, and, while Accurev has been very responsive about fixing the plugin, we aren't their QA group, nor did we sign up to be an alpha test site (yes, it's that buggy). We finally gave up and wrote our own plugin that actually works.
A: Honestly, I feel like I need to double check to see if I'm using the same tool as these folks that seem to like Accurev. I used Subversion in my previous job and liked it a lot. We never had any issues with it to speak of, and of course the price is right. My biggest problem with Accurev is that it seems they felt the need to be different for difference's sake. It uses a completely different vocabulary to express versioning concepts that, even after using it for almost 6 months, still feels very foreign to me. It has no fewer than 8 or 9 states any given file can be in, compared to around half as many for Subversion. The GUI is crappy and slow, and the IDE integration plugins are sub-par. I had assumed that at some point I would "get" Accurev and see why it's so much better, but that has yet to happen. My advice is to stay away.
A: At a previous employer we reviewed Accurev and Plastic SCM. At the end of the day, I was not impressed with Accurev's interface, or the so-called "streams". We went with Plastic, and nobody complained.
@Jonathan
The streams are interesting, but I don't see how any version control system can magically avoid collisions when two people touch the same code in the same file. Accurev's model was intriguing, but at the end of the day, nice clean branching and merging with a drop-dead easy interface made Plastic the choice for us. Plastic's timeline view (I forget the actual name), showing the branch/merge/check-in history, made it very simple to review the history of the project from a bird's-eye view.
A: @Steveth
The interface is lousy... However, the streams model is very innovative.
Being able to create a stream for a new project off the trunk stream, and having 5 developers working on it, and not having any form of merge collisions when we merge that stream back into the main trunk is unheard of, yet it works well in Accurev.
A: Accurev is simply the worst tool I have ever used.
Subversion is very good, especially if you are migrating from CVS.
A: My company has been using Accurev since early 2010, coming from StarTeam before that and CVS in the very distant past. I haven't used CVS (having been on a different team at the time) so I have no comparisons there, and I never bothered to learn StarTeam too intimately.
Since then I've also played with both the CLI and Tortoise versions of SVN, Git, and Mercurial (Hg) in my free time. I plan on giving Git a more thorough go at some point, but I found Hg to be much more intuitive and easy (at least under Windows). Anyway, like I said management saddled us with Accurev and after spending time to get fairly well acquainted with it (GUI and CLI both) as a developer... I absolutely hate it.
Someone earlier in the thread summed it up as software written by devs that had read about SCM in a book but never used it... I agree whole-heartedly but you also get the feeling that they had the same level of experience with GUIs, efficient processing, etc. (In fact, I see that Accurev has a new product called "Kando" based on Git...sounds like they've finally realized how bad their model is. But to quote a coworker "I wouldn't trust anything written by the same team at this point"... I have to wonder if it is a coincidence that there is a baby-wipe product named "Kandoo"...)
Ok, obviously I don't care for the product. If you've spent the time to read this thread, then obviously there are quite a few folks with similar views on it. But I wanted to share some of my own gripes that I've had with it over the last few years as well -- btw if it helps anyone, I think we were using v4.7 previously and have been on v5.3 (?) now for quite some time.
My biggest beef with Accurev is how horribly slow and inefficient it is. Notice I didn't use the word GUI -- I've tried both GUI and CLI-- the slow parts are on the server, so you're screwed either way. It seems like I see one of those damn modal dialog/status bars at every turn... I switch tabs -- bam!: processing, please wait. I reparent a stream -- oh wait just another minute. For "Updates" I expect it to be a little slow (although sometimes it gets annoying when it screams "Overlap" [aka a conflict] at me when I happen to have a file with IDENTICAL content to what it's pushing down). I change directories browsing to a path... processing, processing, "oh you want to go down one more sub-folder"... let me process that some more. You get the idea.
Is that my only beef? Hell no.
1) For the merge tool, I've had the "ignore white space" option checked for years, but I can only ever recall it working ONE time (for example, say we're talking about comparing 2 versions of a JSP where I converted spaces to tabs or trimmed some trailing white space or something). Why is this an issue? Because it becomes pure torture for every other developer that looks in the history and wants to see what REALLY changed. If they can't implement this correctly, don't put the F***ING option there. (Note: using WinMerge as an external compare tool, with appropriate settings, works fine)
2) I've had instances where checking a file into one stream and then needing to put an IDENTICAL copy of that same file into another stream (using the same issues #) causes it to throw a temper tantrum. If I use the wrong issue #, it goes in with no problem. This is probably an isolated case (and maybe due to other poor process decisions my company saddles us with) but I thought I'd mention it for completeness.
3) The history? All stored on the server. Translation: If you enjoyed waiting for it to switch tabs, create/reparent a workspace, and update then you're in for more of the same when you want to view history.
4) The way its exclusion rules are done is not only terrible but also pathetic. Under Windows, you actually have to create an environment variable where you can create some exclusions for files that you don't want to show up. IT DOES NOT SUPPORT REGEX. I've seen several other SCMs that offer much better approaches (I'm fond of the ignore files used in Hg. I think there is something similar in Git too) -- not only are both regex and glob patterns supported, but defining this in a FILE is more system-friendly and much easier to edit than putting it into an environment variable. Not only that, but it seems that the ignore filters are iffy at best. The way our projects are defined has the build folder under the project folder (which is source controlled), and trying to exclude all folders under the build folder doesn't seem to work -- most of them still show up in my "External" filter even after setting up rules.
5) Its check-in process (a "Promote") also seems to run with the theme of slow and inefficient. We use an external ticket system (not AccuWork... our ticketing system has its flaws, but after using AccuRev, I can't imagine that product being much better). Anyway, when we say "Promote [this file]", first it pops up with another modal dialog (after the required waiting, while it does more stat processing), then it presents a list of ALL tickets it has pulled (there are a lot...too many to reliably find anything). Next, we must enter our ticket number from the other system, and wait some more while it takes forever to find a match (I thought it already pulled the list...geez). Finally, it will display the matches, then we pick one and tell it to promote using that ticket number. After yet some more waiting, we're finally done.
I could go on but I'll stop there.... this post is getting too long. Instead, let me sum up Accurev in my own way: After having to wait for all these slow annoying "Stat processing", etc dialogs during an issue where we were trying to quickly get a fix out, I came up with a new slogan for them: "AccuRev: when seconds count, your fix is only minutes away".
Since management won't get rid of Accurev (I know they won't go for anything without Enterprise support but I've begged for them to consider anything else: SmartGit...Kiln...Perforce...), I have been using TortoiseHg to locally version control my files (in addition to Accurev). It is a little more work. But for those saddled with Accurev, it makes life so much easier. You get: better diff management -- MUCH MUCH easier to see and review code changes after an "accurev update", the ability to view some history without waiting 10 years for the server, ability to share directly between you and another dev (assuming they also install it), ability to revert/restore your changes if you accidentally wipe something out while trying to get clear of Accurev's merge hell ("Overlapped" files), and even more if you can get the rest of your team using it.
EDIT: Forgot to mention, during a conversation with our build engineers I was told that while Accurev has a Java API that you can develop for, it apparently requires purchasing some sort of additional licensing. I can't confirm this since a) I can't find pricing anywhere on Accurev's website* and b) I doubt like hell they'd tell me at work...
*Kinda weird considering I can find some sort of rough pricing for Perforce, Kiln, StarTeam and SmartGit quite easily. I usually get a sketchy feeling when some product won't list any sort of price up front, guess it shouldn't surprise me too much that Accurev falls into that category...
A: Well, all I can say is that I completely agree. The back-end is great but the UI sucks. The stream functionality is great because it makes merging a no-brainer, as all changes from parent streams are automatically propagated to all children. I wrote a post about the Accurev UI that explains most of the shortcomings I've come across over the last 2 years.
A: The sort answer:
Use the latest SVN server and SmartSVN (the community edition is free) as a client.
You will not pay anything and you can get everything you need.
The gory details:
BTW, the feature of imposing change management rules during check-in is trivial to write as an SVN hook (a rough sketch follows the list below). We did it in a couple of hours, in a hundred lines (or thereabouts) of code - it works wonderfully and never broke. It integrates SVN with Bugzilla and imposes rules such as:
*
*In order to commit you have to enter a message
*In order to commit you must have entered a Bugzilla ID that is in a "Valid" commit state.
... and so on, you can build your own rules to your heart's content
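A minimal sketch of that kind of pre-commit hook, assuming a Unix-style SVN server with svnlook on the PATH; the Bugzilla lookup is stubbed out and the "BUG-1234" message convention is made up for illustration:
#!/usr/bin/env python
# pre-commit hook sketch: reject commits without a log message and a valid bug ID
import re
import subprocess
import sys
def commit_message(repos, txn):
    # svnlook ships with a standard Subversion server install
    out = subprocess.run(["svnlook", "log", repos, "-t", txn],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()
def bug_is_valid(bug_id):
    # stub: a real hook would query Bugzilla's database or web service here
    return True
def main():
    repos, txn = sys.argv[1], sys.argv[2]
    msg = commit_message(repos, txn)
    if not msg:
        sys.stderr.write("Commit rejected: empty log message.\n")
        return 1
    match = re.search(r"BUG-(\d+)", msg)  # made-up message convention
    if not match or not bug_is_valid(match.group(1)):
        sys.stderr.write("Commit rejected: no valid Bugzilla ID in the message.\n")
        return 1
    return 0
if __name__ == "__main__":
    sys.exit(main())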
Accurev seems to be marketware to me ... lousy GUI client ... very slow (we had to upgrade the HW to make it actually work effectively), and of course ... you have to pay for it! Ah yes, if you do use it, I hope that you do not have to replicate your server between some place in the US and some place in India :)
Perforce is more robust but it is not very easy to administer. In any case, it is a superior product in comparison to Accurev.
VSS and stuff like that should not even be considered as "version control" systems when it comes to writing professional software (typically, enterprise software) in the 21st century. That's like writing your reports on a typewriter ;-)
If you know what you are doing (with your software) then SVN will be a robust and efficient solution for you. With (at least) two robust and efficient revision control systems in existence today (SVN/GIT) there is very little room to justify working with a proprietary solution; some reasons could be "inertia": you have it, you don't care paying for it, and you didn't have any major issues -- in other words, it works for you.
I use SVN everywhere, when it didn't exist I was using CVS, and before that ... no I am not going to tell you how old I am ;-)
Hope this helped ...
Ciao.
A: I've been the SVN and Accurev administrator for a time. Accurev took a long time to grow on me - about six months, but I like it now for a corporate enterprise environment. Here's a few things to consider.
Pros:
*
*Personal code history
*
*The code changes are kept on the server when the user performs a keep. The keep is personal to the user and isn't distributed to other users until a promote command is issued.
*The code from a keep is kept on the server and is available even if the user performs a revert operation.
*In most cases, promoting the code to higher streams for distribution is fairly simple.
*Administration is fairly simple
*Installation works well
*Performance is much improved on version 5.3 which changed the backend to a PostGres database
*The CLI is rich and extensive
Cons:
*
*A real clunky user interface
*Resolving overlaps can be complex, just like conflicts in SVN
However, like any complex tool, your appreciation will increase the more you understand and know about it.
A: I have used AccuRev for nine months and I anxiously await the day I use it no more. My one line review is:
It's like source control written by developers who have read about it in a book, but have never actually used it before.
*
*Basic concepts are missing or extremely complicated. For example, I've just lost 8 hours' work because there's no good way to "revert" a change once it's in a stream. You can "purge" that transaction - but that's it, it's gone; you can't then cherry-pick the changes you really wanted.
*The GUI is slow, bloated and inconsistent. Warnings are cryptic, e.g. "error merging element id 1234556". Every single dialog box is modal. As one poster said, there are 9 states a file can be in - but what's more, you must manually click through a list box of 9 options to see the setting for each file.
*The streams model sounds like a good idea, but the default behavior of "inheriting" changes from a parent stream is actually incredibly bad in practice. Just say the word "Deep Overlap" to anyone who has really used AccuRev and watch them shudder, turn pale, and/or faint. Making the streams is very easy, but actually merging them with any meaningful differences is arcane and non-deterministic.
*No one has mentioned this, but the whole system of "include/exclude" rules to manage file and directory filters is completely broken. This system lives outside the transaction system so there's no way to revert, track history or reproduce changes to a live source stream - for example when Johnny Intern decides the "core" library isn't useful to the entire development team.
The only reason I can account for Accurev's popularity is that it is optimized for the "Demo to Management" case. We're using AccuRev for serious software development - dozens of projects and many more developers. The streams and the GUI look great, but after a few weeks' use the varnish comes off, revealing an old, busted, mechanical-turk-like system.
Stay far away from Accurev - use Git or Mercurial if you want something modern and free, or Perforce if you want something rock solid, well-supported but expensive.
EDIT:
As a postscript here's one of the many examples of the lack of care and general shoddiness in the UI:
The default difference viewer has its numbering "off by one" - for example, if you have 2 diffs in a file, the viewer shows diff "0 of 1" and diff "1 of 1". I mean, really, would you feel comfortable trusting your code to a system that exhibits such a stupid and easily fixable bug?
A: I used AccuRev at a previous job and didn't have any problems with it, but I very much prefer Subversion (even without comparing the price difference). I remember the client GUI being pretty slow too. Also, I do recall that the GUI just called their command-line utilities to interface with the repository. So, it probably won't be that hard to use those interfaces for your DIY tools.
A: I have used Accurev for one year. I don't like it. Here are some problems I encountered:
1. Its GUI is terrible: it's so slow that each time I switch between tabs (streams and workspaces) or perform some actions I have to wait for several seconds. It sometimes gives you a confusing error message that doesn't help you find what's wrong.
2. It has so many concepts that you have to spend a lot of time learning Accurev itself.
3. I once encountered this problem: I had a version-controlled file modified by our build process. Later my teammate moved that file to another location in his workspace and promoted the change. When I ran "accurev update" it simply told me "some file has been moved" and everything looked normal. But in fact the command stopped at the moved file and no longer updated other files. It's very confusing - your update command did not update the workspace, but you have no idea about it. The only output message, "some file has been moved", looked just like other verbose output. It did not tell me that my update had failed or aborted or anything else.
Before that I used SVN and ClearCase. SVN is a great tool, simple and easy to use. And I did not have so many complaints about ClearCase. Accurev is really frustrating...
A: I worked on (and administrated) AccuRev for more than a year lately, and most of my impressions are very good.
We evaluated it alongside with "Plastic SCM", SVN and "ClearCase-UCM" (that we already owned and used), and decided to dump ClearCase and SVN (both were used in two different groups) and to purchase AccuRev.
*
*First, the stream architecture is a much more solid, easy and safe SCM method than the old branching architecture that all the other tools are tied to (yes, even ClearCase's "streams" are ultimately a wrapper around branches). There are a lot of articles about the differences on their site; you can search and read about them to make it understandable. (Try this link, and this one too)
*The timesafe architecture - you can't delete anything from the depots (= repositories) database. I have seen tools where this operation is possible with the proper admin permissions. In AccuRev you use internal commands in order to change or fix a mistake you made, which in turn are recorded as new transactions as well. Very smart, very safe.
*integrations! AccuRev integrates with so many tools (to give you ALM bundle) - bug tracking tools (like JIRA, ClearQuest), IDEs, testing tools (quality center), and if you can't find one you can write your own (they provide Java/Perl/XML/CLI SDKs)
*Change Package. I don't know about you, but I can't stand SCM tools that don't supply change management (did anyone say SVN?), like ClearCase "activities" and AccuRev's "issues". It's a must in my opinion, and one of my CM "best practices". And they can be integrated with your bug tracking tool too, so your users can work on real tasks like features and defects.
*The support is just amazing. As a former client of IBM (because Rational ClearCase is now part of IBM), the shift to AccuRev was simply awesome. During the evaluation they gave us numerous on-line support sessions to understand how we wanted the tool to act for us, so we tweaked it together even before we paid a cent. And they kept that degree of responsiveness after the evaluation period too; we had a problem during an upgrade from 4.5.4 to 4.6, and in just a couple of hours (while the upgrade was still in process) the support guy contacted me back, suggested a few tips, connected to my desktop and finally fixed the problem before any other company's support would even have started trying to figure out who you are. Of course, if you choose an open-source tool then you're on your own!
Also the tool comes with a help system, which can even be too verbose sometimes. And don't forget the forums (especially on cmcrossroads), which are very good at supplying rapid answers too.
*And there are so many more....
Of course there are also drawbacks (which software is perfect?) - I would like, for example, to see the file(s) <-> issue(s) association during check-in too, like in ClearCase, not just in "promotion" like it is today - but IMHO they are really minor.
So, as you can understand, if you read it all, I'm a big fan of AccuRev, and I'm highly recommending it. IMHO, it is today one of the best SCM tools you got the chance to work on; modern, wise, easy and strong.
A: My current client uses Accurev for SCM and after a few projects using a DVCS like Git or Mercurial, I can honestly say that using Accurev is about as enjoyable as closing your face in a car door.
The GUI for Mac and Linux is god awful slow. You can forget using the refactoring support in your IntelliJ or NetBeans IDE, if you use Accurev...that is unless you are going to write your own plugin.
Oh yea...let's not forget about this little chestnut ==> evil twin.
On a positive note, it could be worse...it could be Clearcase.
A: Accurev sucks! It's overcomplicated for the price of productivity of the team.
I've worked with several SCMs and the idea of Accurev is great, but not practical. It's merge hell, with a hierarchy that looks good in the UI but is a pain to deal with in real life.
Especially when you refactor your code (something that some people actually do every once in a while) and you get into a mess when a defunct file is not promoted all the way up. Or even worse, if somebody else overrides a defunct file and creates a new file with the same name... etc.
The UI is incredibly terrible. And honestly, it doesn't matter how good you think the backend is. You will still use the UI (I use the VS plugin, which is half decent except that it freezes the IDE sometimes, nice huh!).
If you live in the 80's and are planning to use the command line for your day-to-day use, then I guess you can avoid the UI. If you have an integration build server then of course you have no choice but to use the command line (no native tasks for MSBuild/ANT/NAnt that I know of). I just heard that they are doing some work with http://www.electric-cloud.com/. Don't know anything about it yet.
Accurev is new, therefore there are few resources available online, as opposed to SVN, for which you will find tons of integration work done by hundreds (with JIRA for example).
If you are a manager, Accurev will make you feel good looking at the streams, because it does look pretty as long as you don't have to deal with it.
If you are a developer, (a junior developer will not care much, he/she will do whatever you ask them to do)
If you are an architect who refactors a lot and re-addresses architectural decisions, etc., you will find Accurev to be your worst enemy; moving stuff around is a pain. Very anti-agile if you ask me. It's not fluid.
If you are a build engineer, you will find it a PAIN to get all the developers into a procedure, which you will have to do if you use Accurev (e.g. promote their code to the agreed-upon stream in preparation for a release).
SCM is supposed to make things easier... I don't see Accurev doing that at this point. It's still not mature enough. If you want to be a pioneer and struggle in the hope that things will get better, go for it.
Otherwise, don't re-invent the wheel: go with something more established, with many more case studies and applications. Because to be practical, what Accurev claims to offer that is different is not worth it when you deal with its pains on a daily basis...
A: We've been using AccuRev for a few years now. It's a serious improvement over our last tool (Razor) and while I'd recommend it for others- it does have a few drawbacks.
Benefits:
*
*The stream based interface is quite intuitive. I make snapshots every second week and have a number of ongoing development streams branching off the snapshot.
*Moving changes between stream is really easy, just select the change, send it to the "change palette" and select the destination stream. It guides you through all the files that need to be merged.
*The command-line utilities are great. We've managed to script most of our release generation around it.
*Integrations for Visual Studio, Bugzilla, etc...
Drawbacks:
*
*As monjardin pointed out, the client GUI can be slow. I use the windows version for all my history/stream searching since it's much faster than the X11 one. Of course, the GUI's written in Java so performance obviously wasn't their first concern.
*It's starting to get slow for really large databases (I'm talking over 300,000 LOC), although they've apparently addressed it in today's release of 4.7.
We opted to go with the cheaper license and not get the change packages feature (I can't see them working that well anyways, as the entire idea of promoting individual changes flies in the face of continuous integration). So far it hasn't hurt us.
Overall, for the price you pay it's a nice tool. We evaluated ClearCase, MKS, Spectrum and Subversion during our trial period. Subversion may have been a good choice, but it was still pretty green when we were evaluating. I've never heard of Plastic before, but I regret not evaluating Perforce.
Also, I understand that the engineers over at Trolltech (makers of Qt) have recently switched to git. I'd be interested in checking that out as well.
A: We have been using AccuRev for 4 years already. I hate it very much, mostly because of its horrible GUI. Several years ago AccuRev sent their clients a survey to fill in, and at the end of the survey there was a field for suggestions. I started collecting the things that annoy me the most, and below you'll find what I have now. Unfortunately, it's full of AccuRev terminology, but I think you'll get the idea anyway.
Accurev GUI possible improvements
Working with history
When examining history, a developer most often wants to see the diff to the previous transaction/version. This should be as accessible as a double click. For example, double-clicking on a file in the transaction log could open a diff to the previous version, double-clicking on a file in the Default Group filter could open a diff to the backed version, and double-clicking on a file in the Modified search could open a diff to the most recent version. That would save tons of time.
Common experience is that developers rarely open files for editing from within AccuRev. Rather, they very often diff files, then revert or promote changes. So a double click should not open files for editing; it should diff them instead. This could be an option in the preferences, so different people can decide whether they want a double click to diff files or open files.
It should be possible to select two transactions in a stream or workspace history and perform a file diff between them.
Overlaps merging
Merging overlaps in a stream requires performing a "Deep Overlap" search in a workspace, which takes much more time than searching for overlaps in a specific stream. Then you need to sort the overlaps by overlap stream and merge only those from the specific stream. There should be a more convenient way to merge overlaps in a stream, for example the ability to limit the deep overlap search to a specific stream and not show overlaps in parent streams. Limiting the Deep Overlap search by a timelocked stream is not very useful if you are several streams below that timelocked stream or there is no timelock on the parents at all.
Now there is a simplified way that involves creating change palettes, but it is still not convenient. The Merge menu item should be available at the stream level if there is a workspace under that stream that can be used for merging overlaps.
Annotate tool
The Annotate tool is very awkward:
*
*using the slider at the top to browse different versions resets the position in the file, which is VERY annoying for large files;
*you should be able to open history at a specific transaction directly from the Annotate tool. Now the developer needs to remember the transaction # and search for it in the stream history (and also needs to search for the stream where the transaction was made).
Stream Favorites
Context menu item "Add to stream filter" was removed when new stream favorites were introduced. It should be possible to right click on stream and add it to one of the stream favorites (2nd level context menu, or dialog may pop up). Now it is very annoying to edit stream favorites, particularly when you need to have 2 similar sets of streams.
Stream browser
It should be easy to copy a stream name to the clipboard. Now you need to open the "Change stream" dialog for that. Ctrl+C in the stream browser could copy the name of the selected stream to the clipboard.
There is no way to copy the stream name from the stream view. A right click on the tab could copy the stream name to the clipboard, or show a context menu with a "copy stream name to clipboard" item in it.
Diff and merge tool
It shows only the first differing character in the line, not the whole line difference, and does not highlight syntax. Luckily, the diff tool can easily be switched to an external tool, so this is minor.
Other suggestions
*
*Option in preferences to enable Multiple columns sort mode by default.
*It would be nice to save not only the latest keep/promote log, but at least 5-10 older ones.
*File extension column in stream or workspace view with ability to sort on it would be great.
*Reordering tabs would be nice.
*Very small font in keep/promote/lock message under Windows, it's unreadable. Increase the font size or allow user to change it.
*Implement more convenient way to locally ignore files, environment variable is not very useful (user may want to ignore different sets of files in different streams/depots).
For the last 3 years AccuRev added 3 things from this list (I removed them as they are already implemented):
*
*Hardcoded (can't be customized) keyboard shortcuts for most actions
*Made it possible to call diff for several files from a transaction at once (before that, one had to right click on every file and call "Diff to previous version" from the context menu).
*Added search for text to the Annotate tool. But because of the position reset when you try to switch to a different version (see above), the Annotate tool is still unusable.
Besides the GUI, there are fundamental flaws in AccuRev as a whole:
Difficult to update backwards
You can't easily update backwards. There is an accurev update -t <transaction-number> command, but if you updated to transaction 100, you can't update to transaction 95 using accurev update -t 95. In order to do that you need to set up a time lock on your backing stream (which will introduce a transaction in AccuRev) and then update your workspace.
Deep Overlaps
When you update, you may end up with an invalid state of the sources without any notice. This is because of the Overlaps feature. An overlap is basically a conflict (when a file is changed by both you and them). If you have an overlap in your workspace, you'll need to merge it before you are allowed to update. But if you have an overlap in the stream under which you have your workspace, you won't get any notice about it; the overlapped file just won't be updated in your workspace. Consider the following stream structure:
[Depot Root] <- [Team stream] <- [Your stream] <- [Your workspace]
Let's say you changed foo.cpp and promoted it to [Your stream]. After that, someone from your team changed both foo.h and foo.cpp (let's say they added a method to the class Foo) and promoted the files to [Team stream]. After you update your workspace, you'll get the new version of foo.h (because you didn't change it), but you won't get foo.cpp because it's overlapped in [Your stream]. So your update will go cleanly, but the linker will complain about an unresolved symbol Foo::NewMethod if you try to build after that.
A: I've been a long time Accurev user, and have recently moved to a job where I'm using Perforce. I gotta tell you, I wish I had Accurev back. I do agree - the UI is slow and has problems.
However there are some truly AWESOME visualization tools in there. I can't believe that anyone would look at the version history browser and not fall in love! The stream browser is a great simple tool to understand what is going on in your development organization.
Also, dirt simple to administer. Accurev is actually one of my favourite tools.
A: One of the best days I've ever had at my current job is the day we ditched Accurev and moved to Subversion. Accurev uses overly complicated concepts. Like one of the commenters above, after working with it for years, I still didn't understand the different states that artifacts could be in. It seems that Accurev's greatest assets are its whitepapers and stream visualization, both of which are very appealing to management but do nothing for developers. I use Subversion, Mercurial and Git for various projects and would recommend these tools over any other.
A: Accurev is an anti-agile tool:
*
*The main idea of Accurev is to use different streams for different teams, so changes made by team1 won't affect team2. Sounds good, but in the real world we all know that in the end we have to merge the code from both teams, and believe me, it's a nightmare in Accurev. The more changes both teams make in their streams, the more time everybody will spend on merging at the end. It's the same as if every team did their development in a separate branch in SVN and tried to merge everything after a month of development... Basically, Accurev creates a late-merge price, and you are going to pay this price forever if you choose Accurev for more than one team.
*In order to fix the problem created by point 1, people decide to give up cross-functional teams in favor of functional ones. They even provide arguments to support this idea, like the "knowledge expertise" principle. In other words, when you don't have cross-functional teams (and Agile as well), it's easier to have experts for a particular part of the system, so they will perform code reviews better and act as "information/design/implementation experts". We all know that the information expert is an anti-pattern, not only in Agile, since it's better to spread the expertise in order to avoid knowledge bottlenecks in development.
A: I've just come across this discussion and thought I would share our experiences with AccuRev.
We have been using the Dimensions SCM from Serena for around 8 years. Two years ago we had a major problem integrating our India based Development team with our UK Dev team. It was clear that we were not going to meet our needs with the current system and hence we set about evaluating a number of options. I discuss all of this in this article How We Integrated Our Offshore Dev Team.
Our experience of using AccuRev has so far been very positive.
*
*It is easy to setup and administer.
*Users are able to get going very very quickly (especially important for the India dev team)
*We've never had a problem with speed (in fact this is one of the main plus points for us)
*The replication works like a dream
*I do agree that the UI can be a bit clunky (especially the Unix client). I am hoping that it will be better in the latest version when we update to that next month.
All in all I would say that this was one of the best decisions and purchases we have made.
A: Note: I am an AccuRev user and I like it very much. I have already upvoted a few answers here, and would like to add:
I've just recently stumbled over this "review" of AccuRev in the book Continuous Delivery by Jez Humble and David Farley:
[Chapter 14, p 385]
Commercial Version Control Systems
(...) the only commercial VCSs that we are able to wholeheartedly recommend are:
*
*(...)
*AccuRev. Offers ClearCase-like ability to do stream-based development without the crippling administrative overhead and poor performance associated with ClearCase.
*(...)
To which I might add that I never have used ClearCase, but I am the AccuRev admin around here, and it is indeed very little work to administer. (WRT performance, this question might give more insight.)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19201",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30"
} |
Q: Forms Authentication across Applications I'm working on a internal web based tool for my company. Part of this tool is another application (The Cruise Control Dashboard) that runs in its own Virtual Directory under my root application.
I wanted to limit access to this internal application by setting up Forms Authentication on it, and having a login form in the root application.
I put the following into the root applications web.config:
<location path="ccnet">
<system.web>
<authentication mode="Forms">
<forms loginUrl="/default.aspx" timeout="5000"/>
</authentication>
<authorization>
<allow users="?"/>
<deny users="?"/>
</authorization>
</system.web>
</location>
However, the Forms Authentication does not appear to be working, it does not redirect back to the login page when I access that application directly.
I have a feeling I have the <allow> and <deny> tags set wrong. Can someone clarify?
A:
You might also need to put path="/" in the <forms> tag(s)
That was it!
So, in summary, in order to do this:
In root web.config add:
<machineKey validationKey="AutoGenerate" decryptionKey="AutoGenerate" validation="SHA1" />
This must be done because by default it is "AutoGenerate,IsolateApps".
Second, you must name the forms auth cookie the same in both. I did this all in my root, using the location tag:
<authentication mode="Forms">
<forms name="ccAuth" loginUrl="/default.aspx" path="/" timeout="5000"/>
</authentication>
<authorization>
<deny users="?"/>
</authorization>
Finally:
<location path="ccnet">
<system.web>
<authentication mode="Forms">
<forms name="ccAuth" loginUrl="/default.aspx" path="/" timeout="5000"/>
</authentication>
<authorization>
<deny users="?"/>
</authorization>
</system.web>
</location>
Thanks everyone for your help. This was a stumper.
A: FormsAuthentication encrypts the tokens that it gives to the user, and by default it encrypts keys different for each application. To get Forms Auth to work across applications, there are a couple of things you need to do:
Firstly, set the Forms Auth "name" the same on all Applications. This is done with:
<authentication mode="Forms">
<forms name="{name}" path="/" ...>
</authentication>
Set the "name" to be the same in both applications web.configs.
Secondly, you need to tell both applications to use the same key when encrypting. This is a bit confusing. When I was setting this up, all I had to do was add the following to both web.configs:
<machineKey validationKey="AutoGenerate" decryptionKey="AutoGenerate" validation="SHA1" />
According to the docs, that's the default value, but it didn't work for me unless I specified it.
A: You might also need to put path="/" in the <forms tag(s) I think. Sorry, its been a while since i've done this
A: You are allowing all unauthenticated users. You might be looking for something like this:
<deny users="?"/>
A:
That does not work; it still allows all users (authenticated or not) to access.
I would think you could even omit the allow tag, as it's redundant. Just:
<deny users="?"/>
A: Where does that code sit Jonathan? In my experience I have a login control and in the OnAuthenticate event I would set Authenticated to false...
If CustomAuthenticate(Login1.UserName, Login1.Password) Then
FormsAuthentication.RedirectFromLoginPage(Login1.UserName, False)
Else
e.Authenticated = False
End If
But that's using the Microsoft Way
A: What is the file extension for this cruise control application? If it is not a file type that ASP.NET is registered to handle (e.g. jsp, java, etc), then ASP.NET will not act as an authentication mechanism (on IIS 5 and 6). For example, for static html files, unless you have wildcard mapping implemented, IIS does all the authentication and authorization and serves up the file without involving the ASP.NET isapi extension. IIS7 can use the new integrated pipeline mode to intercept all requests. For IIS6, you'll want to look at Scott Gu's article on the matter.
A: None of the above suggestions worked for me. It turns out that in the root web.config you need to set:
<forms loginUrl="/pages/login.aspx" enableCrossAppRedirects="true"...
and make sure that both the root and child app have in system.web
<machineKey validationKey="AutoGenerate" decryptionKey="AutoGenerate" validation="SHA1"/>
which turns off the IsolateApps default.
Then everything just worked!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19236",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: IIS 6/COM+ hangs I have a web application that sometimes just hangs under heavy load. To make it come back I have to kill the "dllhost.exe" process. Does someone know what to do?
This is an Classic ASP (VBScript) app with lots of COM+ objects.
The server has the following configuration:
*
*Intel Core 2 Duo 2.2 GHz / 4 GB RAM
*Windows Server 2003 Web Edition SP2
*IIS 6.0
There are some errors in the event log related to the COM objects. But why would errors in the COM objects crash the whole server?
The COM objects are PowerBuilder objects deployed as COM objects.
Is IIS 7.0 (much) more stable than IIS 6.0?
A: You have a memory leak :)
This blog entry is my bible for IIS troubleshooting:
http://blogs.msdn.com/david.wang/archive/2005/12/31/HOWTO_Basics_of_IIS6_Troubleshooting.aspx
If you can't audit your code and find where the reference leaks are, an alternative is to recycle the application by restarting IIS every 24 hours or so. You can just set up a command-line script as a scheduled server job to do this.
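For instance, a minimal sketch of such a job, assuming the standard iisreset and schtasks tools are available (the task name and time are placeholders):
rem Register a daily 3 AM IIS restart as a scheduled task (run once from an elevated prompt)
schtasks /create /tn "NightlyIISRecycle" /tr "iisreset /restart" /sc daily /st 03:00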
A: Sounds like dodgy COM objects are causing the problem. Do you load them into the "Application"? If so, are they thread-safe, or are they used and discarded on each request?
Yes, recycling every few hours would help 'hide' the problem, but the objects ought to be debugged and fixed properly. Have you tried divide and conquer to discover which COM object is the problem? I can imagine this is tricky in a production environment, so you need to set up some heavy automated tests to reproduce the problem locally; then you can do something about it.
A: There are probably some errors in your event log under the Application and System categories. Try to find the origin of these errors, or post them here and we'll see what we can do :)
Edit :
@Daniel Silveira
A memory leak is probable. What COM+ objects do you use? I had some issues with Excel in an application I support.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19245",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Code to ask yes/no question in javascript I could only find the function confirm() that gives OK/Cancel buttons. Is there any way to give Yes/No buttons?
A: JavaScript offers 3 modal boxes: prompt, confirm and alert. None of those satisfies your request.
There are a plethora of js modal popup solutions. Here's an example.
*
*ModalBox
A: No.
Instead you could use an in-browser modal popup.
A: Like everyone else above says, you're stuck with OK/Cancel using confirm().
I would like to recommend this jQuery plugin though: jqModal. I've used it on 3 recent projects and it has worked great for each one. Specifically check out this example:
6). FUN! Overrides -- a. view (alert), b. view (confirm) It is now time to
show a real-world use for jqModal --
overriding the standard alert() and
confirm dialogs! Note; due to the
single threaded nature of javascript,
the confirm() function must be passed
a callback -- it does NOT return
true/false.
A: No, but there are JavaScript libraries that can accomplish this for you. Just as an example, Ext JS can be used to create a message box dialog.
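A rough sketch of what that looks like, assuming a reasonably recent Ext JS build (the handler bodies are just placeholders):
// Ext.Msg.confirm shows a Yes/No dialog; the callback receives the id of the pressed button.
Ext.Msg.confirm('Confirm', 'Do you want to continue?', function (btn) {
    if (btn === 'yes') {
        // ... proceed (placeholder for your own code)
    } else {
        // ... cancel (placeholder for your own code)
    }
});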
A: I'm a fan of jQuery UI Dialog for this sort of thing. Here's a sample...
<script>
$(function() {
$( "#dialog-confirm" ).dialog({
resizable: false,
height:140,
modal: true,
buttons: {
"Yes": function() {
$( this ).dialog( "close" );
alert("You chose Yes!");
},
"No": function() {
$( this ).dialog( "close" );
alert("You chose No!");
}
}
});
});
</script>
<div id="dialog-confirm" title="Are you sure you want to continue?">
<p><span class="ui-icon ui-icon-alert" style="float:left; margin:0 7px 20px 0;"></span>These items will be permanently deleted and cannot be recovered. Are you sure?</p>
</div>
A: I would use sweetalert (https://sweetalert.js.org/guides/) to achieve something like this:
swal("Are you sure you want to do this?", {
buttons: ["yes", "no"],
});
<script src="https://unpkg.com/sweetalert/dist/sweetalert.min.js"></script>
A: Use a dialog box to display yes or no:
<div id="dialog_box" class="mnk-modal-bg" style="display:none">
<div id="dbg" class="mnk-modal-box">
<i class="uk-icon-exclamation-triangle" style="color:#757575; padding-right:5px;">
</i>Confirm?
<div class="uk-text-center" style="margin-top:10px;">
<button class="md-btn md-btn-small md-btn-primary" id="ok_btn">
<i class="uk-icon-save" style="padding-right:3px;"></i>OK
</button>
<button class="md-btn md-btn-small md-btn-danger" id="close_btn">
<i class="uk-icon-remove" style="padding-right:3px;"></i>Cancel
</button>
</div>
</div>
<script>
$("#ok_btn").click(function(){
alert("OK");
$("#dialog_box").hide();
});
$("#close_btn").click(function(){
alert("CANCEL");
$("#dialog_box").hide();
});
</script>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19280",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: Calling base Methods When Overriding Page Level Events In my code behind I wire up my events like so:
protected override void OnInit(EventArgs e)
{
base.OnInit(e);
btnUpdateUser.Click += btnUpateUserClick;
}
I've done it this way because that's what I've seen in examples.
*
*Does the base.OnInit() method need to be called?
*Will it be implicitly be called?
*Is it better to call it at the beginning of the method or at the end?
*What would be an example where confusion over the base method can get you in trouble?
A: I should clarify:
The guidelines recommend that firing an event should involve calling a virtual "OnEventName" method, but they also say that if a derived class overrides that method and forgets to call the base method, the event should still fire.
See the "Important Note" about halfway down this page:
Derived classes that override the protected virtual method are not required to call the base class implementation. The base class must continue to work correctly even if its implementation is not called.
A: In this case, if you don't call the base OnInit, then the Init even will not fire.
In general, it is best practice to ALWAYS call the base method, unless you specifically know that you do not want the base behaviour to occur.
Whether its called at the start or the end depends on how you want things to work. In a case like this, where you are using an override instead of hooking up an event handler, calling it at the start of the method makes more sense. That way, your code will run after any handlers, which makes it more emulate a "normal" event handler.
A: Although the official framework design guidelines recommend otherwise, most class designers will actually make the OnXxx() method responsible for firing the actual event, like this:
protected virtual void OnClick(EventArgs e)
{
if (Click != null) Click(this, e);
}
... so if you inherit from the class and don't call base.OnClick(e), the Click event will never fire.
So yes, even though this shouldn't be the case according to the official design guidelines, I think it's worth calling base.OnInit(e) just to be sure.
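To illustrate why, here is a minimal sketch (the class name and trace call are only illustrative) of a derived control that does extra work in OnClick and still lets the event fire:
public class LoggingButton : System.Web.UI.WebControls.Button
{
    protected override void OnClick(EventArgs e)
    {
        // extra work before the event is raised
        System.Diagnostics.Trace.WriteLine("Button was clicked");

        // let the base class raise the Click event; without this call,
        // handlers wired to Click would never run
        base.OnClick(e);
    }
}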
A:
official framework design guidelines recommend otherwise
They do? I'm curious; I've always thought the opposite, and reading Framework Design Guidelines and running FxCop has only cemented my view. I was under the impression that events should always be fired from virtual OnXxx() methods that take an EventArgs parameter.
A: You probably are better off doing it that way, then this debate goes away. The article is interesting though, especially considering that the .NET Framework doesn't honour this guideline.
A: @Ch00k and @Scott I dunno - I like the OnEventName pattern myself. And yeah, I'm one of the people who are guilty of firing the event from that method.
I think overriding the On* method and calling the base one is the way to go. Handling your own events seems wrong somehow.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19294",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Database backed i18n for java web-app I'd like to use a database to store i18n key/value pairs so we can modify / reload the i18n data at runtime. Has anyone done this? Or does anyone have an idea of how to implement this? I've read several threads on this, but I haven't seen a workable solution.
I'm specifically refering to something that would work with the jstl tags such as
<fmt:setlocale>
<fmt:bundle>
<fmt:setBundle>
<fmt:message>
I think this will involve extending ResourceBundle, but when I tried this I ran into problems that had to do with the way the jstl tags get the resource bundle.
A: Are you just asking how to store UTF-8/16 characters in a DB? In MySQL it's just a matter of making sure you build with UTF8 support and set that as the default, or specify it at the column or table level. I've done this in Oracle and MySQL before. Create a table, cut and paste some i18n data into it, and see what happens... you might be set already.
or am I completely missing your point?
edit:
to be more explicit... I usually implement a three-column table... language, key, value... where "value" contains potentially foreign-language words or phrases, "language" contains some language key, and "key" is an English key (e.g. login.error.password.dup)... language and key are indexed.
I've then built interfaces on a structure like this that show each key with all its translations (values)... it can get fancy and include audit trails and "dirty" markers and all the other stuff you need to enable translators and data entry folk to make use of it.
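As a rough sketch only (table and column names are made up, and the charset clause assumes MySQL as described above), the structure could look like:
CREATE TABLE i18n_message (
    language  VARCHAR(8)   NOT NULL,  -- e.g. 'en', 'de', 'pt_BR'
    msg_key   VARCHAR(128) NOT NULL,  -- e.g. 'login.error.password.dup'
    msg_value TEXT         NOT NULL,  -- the translated word or phrase
    PRIMARY KEY (language, msg_key)
) DEFAULT CHARSET = utf8;

-- typical lookup
SELECT msg_value
FROM i18n_message
WHERE language = 'de' AND msg_key = 'login.error.password.dup';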
Edit 2:
Now that you added the info about the JSTL tags, I understand a bit more... I've never done that myself.. but I found this old info on theserverside...
HttpSession session = .. [get hold of the session]
ResourceBundle bundle = new PropertyResourceBundle(toInputStream(myOwnProperties)) [toInputStream just stores the properties into an inputstream]
Locale locale = .. [get hold of the locale]
javax.servlet.jsp.jstl.core.Config.set(session, Config.FMT_LOCALIZATION_CONTEXT, new LocalizationContext(bundle ,locale));
A: I finally got this working with danb's help above.
This is my resource bundle class and resource bundle control class.
I used this code from @danb's answer:
ResourceBundle bundle = ResourceBundle.getBundle("AwesomeBundle", locale, DbResourceBundle.getMyControl());
javax.servlet.jsp.jstl.core.Config.set(actionBeanContext.getRequest(), Config.FMT_LOCALIZATION_CONTEXT, new LocalizationContext(bundle, locale));
and wrote this class.
public class DbResourceBundle extends ResourceBundle
{
private Properties properties;
public DbResourceBundle(Properties inProperties)
{
properties = inProperties;
}
@Override
@SuppressWarnings(value = { "unchecked" })
public Enumeration<String> getKeys()
{
return properties != null ? ((Enumeration<String>) properties.propertyNames()) : null;
}
@Override
protected Object handleGetObject(String key)
{
return properties.getProperty(key);
}
public static ResourceBundle.Control getMyControl()
{
return new ResourceBundle.Control()
{
@Override
public List<String> getFormats(String baseName)
{
if (baseName == null)
{
throw new NullPointerException();
}
return Arrays.asList("db");
}
@Override
public ResourceBundle newBundle(String baseName, Locale locale, String format, ClassLoader loader, boolean reload) throws IllegalAccessException,
InstantiationException, IOException
{
if ((baseName == null) || (locale == null) || (format == null) || (loader == null))
throw new NullPointerException();
ResourceBundle bundle = null;
if (format.equals("db"))
{
Properties p = new Properties();
DataSource ds = (DataSource) ContextFactory.getApplicationContext().getBean("clinicalDataSource");
Connection con = null;
Statement s = null;
ResultSet rs = null;
try
{
con = ds.getConnection();
StringBuilder query = new StringBuilder();
query.append("select label, value from i18n where bundle='" + StringEscapeUtils.escapeSql(baseName) + "' ");
if (locale != null)
{
if (StringUtils.isNotBlank(locale.getCountry()))
{
query.append("and country='" + escapeSql(locale.getCountry()) + "' ");
}
if (StringUtils.isNotBlank(locale.getLanguage()))
{
query.append("and language='" + escapeSql(locale.getLanguage()) + "' ");
}
if (StringUtils.isNotBlank(locale.getVariant()))
{
query.append("and variant='" + escapeSql(locale.getVariant()) + "' ");
}
}
s = con.createStatement();
rs = s.executeQuery(query.toString());
while (rs.next())
{
p.setProperty(rs.getString(1), rs.getString(2));
}
}
catch (Exception e)
{
e.printStackTrace();
throw new RuntimeException("Can not build properties: " + e);
}
finally
{
DbUtils.closeQuietly(con, s, rs);
}
bundle = new DbResourceBundle(p);
}
return bundle;
}
@Override
public long getTimeToLive(String baseName, Locale locale)
{
return 1000 * 60 * 30;
}
@Override
public boolean needsReload(String baseName, Locale locale, String format, ClassLoader loader, ResourceBundle bundle, long loadTime)
{
return true;
}
};
}
A: We have a database table with key/language/term, where key is an integer and forms a combined primary key together with language.
We are using Struts, so we ended up writing our own PropertyMessageResources implementation which allows us to do something like <bean:message key="impressum.text" />.
It works very well and gives us the flexibility to dynamically switch languages in the front-end as well as update the translations on the fly.
A: Actually, what ScArcher2 needed is david's response, which is not marked as correct or helpful.
The solution ScArcher2 chose to use is, in my opinion, a terrible mistake :) Loading ALL the translations at one time... in any bigger application it's going to kill it. Loading thousands of translations on each request...
david's method is the one more commonly used in real production environments.
Sometimes, to limit DB calls (which otherwise happen with every message translated), you can create groups of translations by topic, functionality, etc. and preload them. But this is a little more complex and can be substituted with a good cache system, along the lines of the sketch below.
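A minimal sketch of that kind of cache (the MessageDao interface is hypothetical and stands in for whatever reads the key/language/value table; a real implementation would also need expiry or invalidation when translations change):
import java.util.Locale;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class CachingMessageSource {

    // hypothetical DAO that reads one translation from the database
    public interface MessageDao {
        String loadMessage(String key, Locale locale);
    }

    private final ConcurrentMap<String, String> cache = new ConcurrentHashMap<String, String>();
    private final MessageDao dao;

    public CachingMessageSource(MessageDao dao) {
        this.dao = dao;
    }

    public String getMessage(String key, Locale locale) {
        String cacheKey = locale.getLanguage() + "|" + key;
        String value = cache.get(cacheKey);
        if (value == null) {
            // only the first request for a key/language pair hits the database
            value = dao.loadMessage(key, locale);
            if (value != null) {
                cache.putIfAbsent(cacheKey, value);
            }
        }
        return value;
    }
}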
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19295",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Alternatives to Windows Workflow Foundation? I've been using WWF for a while as part of an internal call center application (ASP.NET), and while learning it was a good practice in understanding how a state machine based workflow system should work, I am definitely not in love with WWF itself. In my opinion it is:
*
*Overly complex, especially for use within web apps (all that threaded runtime stuff)
*Immature (ever worked with that horrible designer?)
*Anemic in its current feature set
Does anyone have a suggestion for a better .NET based workflow framework? Specifically, I am looking for the following features:
*
*State machine based (mapping states to available actions)
*A focus on user permissions (controlling who has access to what actions)
*The ability to run workflows as timed background tasks (for example, to send out reminders for items that have been sitting in a certain state for x days)
That's really all I need. I don't need to be able to "drag and drop" any activities or visually design the flow. I am perfectly comfortable writing actual code once a particular action is triggered.
A: I would stay away from Drools.Net since its last SVN commit was in September 2007. It looks nice, but it seems a bit too risky to make such a big library part of your project when you know it doesn't get any attention anymore.
A: You could try Simple State Machine. You would have to implement access control and background timers yourself, but that shouldn't be a big deal. SSM was also built out of frustration with WF. There are some other state machine implementations on CodePlex as well. If one of them doesn't fit the bill out of the box, they are open source and should get you close enough.
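For the "reminder after x days" requirement from the question, a plain timer is usually enough; here is a rough sketch (the delegates stand in for your own data access and notification code, and nothing here is specific to any particular state machine library):
using System;
using System.Collections.Generic;
using System.Timers;

// Polls once an hour for items that have sat in a state too long and sends reminders.
public class ReminderService
{
    private readonly Timer timer = new Timer(TimeSpan.FromHours(1).TotalMilliseconds);
    private readonly Func<TimeSpan, IEnumerable<string>> loadStaleItemIds; // your query
    private readonly Action<string> sendReminder;                          // your notification

    public ReminderService(Func<TimeSpan, IEnumerable<string>> loadStaleItemIds,
                           Action<string> sendReminder)
    {
        this.loadStaleItemIds = loadStaleItemIds;
        this.sendReminder = sendReminder;
    }

    public void Start()
    {
        timer.Elapsed += delegate
        {
            foreach (string id in loadStaleItemIds(TimeSpan.FromDays(3)))
            {
                sendReminder(id);
            }
        };
        timer.Start();
    }
}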
I wholeheartedly agree with you about state machines in WF - they aren't testable, are too complicated, the threading model is peculiar and hard to follow, and I'm not sure a visual designer could have been more poorly conceived for designing state machines graphically. I think this may be because the state machine concept feels tacked onto the WF runtime, which was designed for sequential workflows, something WF does a much better job with, in my opinion. The problem is that state machines are really not the same animal as a sequential workflow, and should have been given a first class implementation of their own, because the warping of WF to make it seem to support them turned out to be more or less unsupportable, if not actually unusable.
A: Try Drools.NET
A: Have a look at Workflow Engine. It is a lightweight workflow framework for .NET and Java solutions. It has an HTML5 visual designer, version control, a decent UI and supports a wide range of databases.
A: Do you have the option to consider BizTalk Server?
A: I quite enjoyed working with Oracle BPEL Process Manager. It's part of JDeveloper.
http://www.oracle.com/technology/bpel/index.html
http://gemsres.com/story/dec06/313602/jellema-fig1.jpg
A: You may want to take a look at Jazz - http://jazz.codeplex.com/
A: Try WF4.5. It was completely redesigned since .NET4.0.
A: First of all you should look for a engine supporting BPMN. BPMN is a standard in Workflow and Process management and well supported from a lot of projects.
Second you should think about the requirements to thus an engine.
When you look for a BPMN Engine, there are two different approaches:
Task-Orientated
These engines (e.g. JBoss jBPM) are designed to process input data according to a well-defined process model. Each task in the model hands control to a piece of code - either a standard or a custom implementation. The process ends when the process token reaches the end of the process model (End Event). This kind of processing takes milliseconds. The engine can be used for batch jobs or for processing data with a complex process-orientated flow.
Event-Driven
Human-centric workflow engines are event-driven (e.g. Imixs-Workflow). This is a kind of state machine, but it typically offers much more functionality. You can start a new process instance by assigning your business object to the initial task (defined by the start event). Then the workflow engine allows you to trigger events assigned to each task defined in your model. Each event (Intermediate Catch Event) triggers the workflow engine to transfer the running process instance to the next task (state). As long as no new event is triggered, the process instance 'waits' in the current task (state). An approval process is a typical example of this kind of human-centric workflow.
You can find a list of engines here.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19314",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: Calling ASP.NET web service from ASP using SOAPClient I have an ASP.NET webservice with along the lines of:
[WebService(Namespace = "http://internalservice.net/messageprocessing")]
[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
[ToolboxItem(false)]
public class ProvisioningService : WebService
{
[WebMethod]
public XmlDocument ProcessMessage(XmlDocument message)
{
// ... do stuff
}
}
I am calling the web service from ASP using something like:
provWSDL = "http://servername:12011/MessageProcessor.asmx?wsdl"
Set service = CreateObject("MSSOAP.SoapClient30")
service.ClientProperty("ServerHTTPRequest") = True
Call service.MSSoapInit(provWSDL)
xmlMessage = "<request><task>....various xml</task></request>"
result = service.ProcessMessage(xmlMessage)
The problem I am encountering is that when the XML reaches the ProcessMessage method, the web service plumbing has added a default namespace along the way. i.e. if I set a breakpoint inside ProcessMessage(XmlDocument message) I see:
<request xmlns="http://internalservice.net/messageprocessing">
<task>....various xml</task>
</request>
When I capture packets on the wire I can see that the XML sent by the SOAP toolkit is slightly different from that sent by the .NET WS client. The SOAP toolkit sends:
<SOAP-ENV:Envelope
xmlns:SOAPSDK1="http://www.w3.org/2001/XMLSchema"
xmlns:SOAPSDK2="http://www.w3.org/2001/XMLSchema-instance"
xmlns:SOAPSDK3="http://schemas.xmlsoap.org/soap/encoding/"
xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
<SOAP-ENV:Body>
<ProcessMessage xmlns="http://internalservice.net/messageprocessing">
<message xmlns:SOAPSDK4="http://internalservice.net/messageprocessing">
<request>
<task>...stuff to do</task>
</request>
</message>
</ProcessMessage>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
Whilst the .NET client sends:
<soap:Envelope
xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<soap:Body>
<ProcessMessage xmlns="http://internalservice.net/messageprocessing">
<message>
<request xmlns="">
<task>...stuff to do</task>
</request>
</message>
</ProcessMessage>
</soap:Body>
</soap:Envelope>
It's been so long since I used the ASP/SOAP toolkit to call into .NET webservices, I can't remember all the clever tricks/SOAP-fu I used to pull to get around stuff like this.
Any ideas? One solution is to knock up a COM callable .NET proxy that takes the XML as a string param and calls the WS on my behalf, but it's an extra layer of complexity/work I hoped not to do.
A: Kev,
I found the solution, but its not trivial.
You need to create a custom implementation of IHeaderHandler that creates the proper headers.
There is a good step by step here:
http://msdn.microsoft.com/en-us/library/ms980699.aspx
EDIT: I saw your update. Nice workaround, you might want to bookmark this link regardless :D
A: I take it you have access to the Services code, not just the consuming client right?
Just pull the namespace out of the XmlDocument as the first part of the method.
Something like:
XmlDocument changeDocumentNamespace(XmlDocument doc, string newNamespace)
{
if (doc.DocumentElement.NamespaceURI.Length > 0)
{
doc.DocumentElement.SetAttribute("xmlns", newNamespace);
XmlDocument newDoc = new XmlDocument();
newDoc.LoadXml(doc.OuterXml);
return newDoc;
}
else
{
return doc;
}
}
Then:
[WebService(Namespace = "http://internalservice.net/messageprocessing")]
[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
[ToolboxItem(false)]
public class ProvisioningService : WebService
{
[WebMethod]
public XmlDocument ProcessMessage(XmlDocument message)
{
message = changeDocumentNamespace(message,String.Empty);
// Do Stuff...
}
}
A: I solved this:
The SOAP client request node was picking up the default namespace from:
<ProcessMessage xmlns="http://internalservice.net/messageprocessing">
Adding an empty default namespace to the XML sent by the ASP client overrides this behaviour:
xmlMessage = "<request xmlns=''><task>....various xml</task></request>"
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Transpose/Unzip Function (inverse of zip)? I have a list of 2-item tuples and I'd like to convert them to 2 lists where the first contains the first item in each tuple and the second list holds the second item.
For example:
original = [('a', 1), ('b', 2), ('c', 3), ('d', 4)]
# and I want to become...
result = (['a', 'b', 'c', 'd'], [1, 2, 3, 4])
Is there a builtin function that does that?
A: In 2.x, zip is its own inverse! Provided you use the special * operator.
>>> zip(*[('a', 1), ('b', 2), ('c', 3), ('d', 4)])
[('a', 'b', 'c', 'd'), (1, 2, 3, 4)]
This is equivalent to calling zip with each element of the list as a separate argument:
zip(('a', 1), ('b', 2), ('c', 3), ('d', 4))
except the arguments are passed to zip directly (after being converted to a tuple), so there's no need to worry about the number of arguments getting too big.
In 3.x, zip returns a lazy iterator, but this is trivially converted:
>>> list(zip(*[('a', 1), ('b', 2), ('c', 3), ('d', 4)]))
[('a', 'b', 'c', 'd'), (1, 2, 3, 4)]
A: Naive approach
def transpose_finite_iterable(iterable):
return zip(*iterable) # `itertools.izip` for Python 2 users
works fine for finite iterable (e.g. sequences like list/tuple/str) of (potentially infinite) iterables which can be illustrated like
| |a_00| |a_10| ... |a_n0| |
| |a_01| |a_11| ... |a_n1| |
| |... | |... | ... |... | |
| |a_0i| |a_1i| ... |a_ni| |
| |... | |... | ... |... | |
where
*
*n in ℕ,
*a_ij corresponds to j-th element of i-th iterable,
and after applying transpose_finite_iterable we get
| |a_00| |a_01| ... |a_0i| ... |
| |a_10| |a_11| ... |a_1i| ... |
| |... | |... | ... |... | ... |
| |a_n0| |a_n1| ... |a_ni| ... |
Python example of such case where a_ij == j, n == 2
>>> from itertools import count
>>> iterable = [count(), count()]
>>> result = transpose_finite_iterable(iterable)
>>> next(result)
(0, 0)
>>> next(result)
(1, 1)
But we can't use transpose_finite_iterable again to return to structure of original iterable because result is an infinite iterable of finite iterables (tuples in our case):
>>> transpose_finite_iterable(result)
... hangs ...
Traceback (most recent call last):
File "...", line 1, in ...
File "...", line 2, in transpose_finite_iterable
MemoryError
So how can we deal with this case?
... and here comes the deque
After we take a look at docs of itertools.tee function, there is Python recipe that with some modification can help in our case
def transpose_finite_iterables(iterable):
iterator = iter(iterable)
try:
first_elements = next(iterator)
except StopIteration:
return ()
queues = [deque([element])
for element in first_elements]
def coordinate(queue):
while True:
if not queue:
try:
elements = next(iterator)
except StopIteration:
return
for sub_queue, element in zip(queues, elements):
sub_queue.append(element)
yield queue.popleft()
return tuple(map(coordinate, queues))
let's check
>>> from itertools import count
>>> iterable = [count(), count()]
>>> result = transpose_finite_iterables(transpose_finite_iterable(iterable))
>>> result
(<generator object transpose_finite_iterables.<locals>.coordinate at ...>, <generator object transpose_finite_iterables.<locals>.coordinate at ...>)
>>> next(result[0])
0
>>> next(result[0])
1
Synthesis
Now we can define general function for working with iterables of iterables ones of which are finite and another ones are potentially infinite using functools.singledispatch decorator like
from collections import (abc,
deque)
from functools import singledispatch
@singledispatch
def transpose(object_):
"""
Transposes given object.
"""
raise TypeError('Unsupported object type: {type}.'
.format(type=type))
@transpose.register(abc.Iterable)
def transpose_finite_iterables(object_):
"""
Transposes given iterable of finite iterables.
"""
iterator = iter(object_)
try:
first_elements = next(iterator)
except StopIteration:
return ()
queues = [deque([element])
for element in first_elements]
def coordinate(queue):
while True:
if not queue:
try:
elements = next(iterator)
except StopIteration:
return
for sub_queue, element in zip(queues, elements):
sub_queue.append(element)
yield queue.popleft()
return tuple(map(coordinate, queues))
def transpose_finite_iterable(object_):
"""
Transposes given finite iterable of iterables.
"""
yield from zip(*object_)
try:
transpose.register(abc.Collection, transpose_finite_iterable)
except AttributeError:
# Python3.5-
transpose.register(abc.Mapping, transpose_finite_iterable)
transpose.register(abc.Sequence, transpose_finite_iterable)
transpose.register(abc.Set, transpose_finite_iterable)
which can be considered as its own inverse (mathematicians call this kind of functions "involutions") in class of binary operators over finite non-empty iterables.
As a bonus of singledispatching we can handle numpy arrays like
import numpy as np
...
transpose.register(np.ndarray, np.transpose)
and then use it like
>>> array = np.arange(4).reshape((2,2))
>>> array
array([[0, 1],
[2, 3]])
>>> transpose(array)
array([[0, 2],
[1, 3]])
Note
Since transpose returns iterators and if someone wants to have a tuple of lists like in OP -- this can be made additionally with map built-in function like
>>> original = [('a', 1), ('b', 2), ('c', 3), ('d', 4)]
>>> tuple(map(list, transpose(original)))
(['a', 'b', 'c', 'd'], [1, 2, 3, 4])
Advertisement
I've added generalized solution to lz package from 0.5.0 version which can be used like
>>> from lz.transposition import transpose
>>> list(map(tuple, transpose(zip(range(10), range(10, 20)))))
[(0, 1, 2, 3, 4, 5, 6, 7, 8, 9), (10, 11, 12, 13, 14, 15, 16, 17, 18, 19)]
P.S.
There is no (at least obvious) solution for handling a potentially infinite iterable of potentially infinite iterables, but this case is less common.
A: It's just another way to do it, but it helped me a lot, so I'll write it here:
Having this data structure:
X=[1,2,3,4]
Y=['a','b','c','d']
XY=zip(X,Y)
Resulting in:
In: XY
Out: [(1, 'a'), (2, 'b'), (3, 'c'), (4, 'd')]
The more pythonic way to unzip it and go back to the original is this one in my opinion:
x,y=zip(*XY)
But this returns tuples, so if you need lists you can use:
x,y=(list(x),list(y))
A: Consider using more_itertools.unzip:
>>> from more_itertools import unzip
>>> original = [('a', 1), ('b', 2), ('c', 3), ('d', 4)]
>>> [list(x) for x in unzip(original)]
[['a', 'b', 'c', 'd'], [1, 2, 3, 4]]
A: None of the previous answers efficiently provide the required output, which is a tuple of lists, rather than a list of tuples. For the former, you can use tuple with map. Here's the difference:
res1 = list(zip(*original)) # [('a', 'b', 'c', 'd'), (1, 2, 3, 4)]
res2 = tuple(map(list, zip(*original))) # (['a', 'b', 'c', 'd'], [1, 2, 3, 4])
In addition, most of the previous solutions assume Python 2.7, where zip returns a list rather than an iterator.
For Python 3.x, you will need to pass the result to a function such as list or tuple to exhaust the iterator. For memory-efficient iterators, you can omit the outer list and tuple calls for the respective solutions.
A: You could also do
result = ([ a for a,b in original ], [ b for a,b in original ])
It should scale better. Especially if Python makes good on not expanding the list comprehensions unless needed.
(Incidentally, it makes a 2-tuple (pair) of lists, rather than a list of tuples, like zip does.)
If generators instead of actual lists are ok, this would do that:
result = (( a for a,b in original ), ( b for a,b in original ))
The generators don't munch through the list until you ask for each element, but on the other hand, they do keep references to the original list.
A: I like to use zip(*iterable) (which is the piece of code you're looking for) in my programs as so:
def unzip(iterable):
return zip(*iterable)
I find unzip more readable.
A: If you have lists that are not the same length, you may not want to use zip as per Patrick's answer. This works:
>>> zip(*[('a', 1), ('b', 2), ('c', 3), ('d', 4)])
[('a', 'b', 'c', 'd'), (1, 2, 3, 4)]
But with different length lists, zip truncates each item to the length of the shortest list:
>>> zip(*[('a', 1), ('b', 2), ('c', 3), ('d', 4), ('e', )])
[('a', 'b', 'c', 'd', 'e')]
You can use map with no function to fill empty results with None:
>>> map(None, *[('a', 1), ('b', 2), ('c', 3), ('d', 4), ('e', )])
[('a', 'b', 'c', 'd', 'e'), (1, 2, 3, 4, None)]
zip() is marginally faster though.
A: While numpy arrays and pandas may be preferable, this function imitates the behavior of zip(*args) when called as unzip(args).
Allows for generators, like the result from zip in Python 3, to be passed as args as it iterates through values.
def unzip(items, cls=list, ocls=tuple):
"""Zip function in reverse.
:param items: Zipped-like iterable.
:type items: iterable
:param cls: Container factory. Callable that returns iterable containers,
with a callable append attribute, to store the unzipped items. Defaults
to ``list``.
:type cls: callable, optional
:param ocls: Outer container factory. Callable that returns iterable
containers. with a callable append attribute, to store the inner
containers (see ``cls``). Defaults to ``tuple``.
:type ocls: callable, optional
:returns: Unzipped items in instances returned from ``cls``, in an instance
returned from ``ocls``.
"""
# iter() will return the same iterator passed to it whenever possible.
items = iter(items)
try:
i = next(items)
except StopIteration:
return ocls()
unzipped = ocls(cls([v]) for v in i)
for i in items:
for c, v in zip(unzipped, i):
c.append(v)
return unzipped
To use list containers, simply run unzip(zipped), as
unzip(zip(["a","b","c"],[1,2,3])) == (["a","b","c"],[1,2,3])
To use deques, or other any container sporting append, pass a factory function.
from collections import deque
unzip([("a",1),("b",2)], deque, list) == [deque(["a","b"]),deque([1,2])]
(Decorate cls and/or ocls to micro-manage container initialization, as briefly shown in the final assert statement above.)
A: To get a tuple of lists, as in the question:
>>> original = [('a', 1), ('b', 2), ('c', 3), ('d', 4)]
>>> tuple([list(tup) for tup in zip(*original)])
(['a', 'b', 'c', 'd'], [1, 2, 3, 4])
To unpack the two lists into separate variables:
list1, list2 = [list(tup) for tup in zip(*original)]
A: Since it returns tuples (and can use tons of memory), the zip(*zipped) trick seems more clever than useful, to me.
Here's a function that will actually give you the inverse of zip.
def unzip(zipped):
"""Inverse of built-in zip function.
Args:
zipped: a list of tuples
Returns:
a tuple of lists
Example:
a = [1, 2, 3]
b = [4, 5, 6]
zipped = list(zip(a, b))
assert zipped == [(1, 4), (2, 5), (3, 6)]
unzipped = unzip(zipped)
assert unzipped == ([1, 2, 3], [4, 5, 6])
"""
unzipped = ()
if len(zipped) == 0:
return unzipped
dim = len(zipped[0])
for i in range(dim):
unzipped = unzipped + ([tup[i] for tup in zipped], )
return unzipped
A: While zip(*seq) is very useful, it may be unsuitable for very long sequences as it will create a tuple of values to be passed in. For example, I've been working with a coordinate system with over a million entries and find it significantly faster to create the sequences directly.
A generic approach would be something like this:
from collections import deque
seq = ((a1, b1, …), (a2, b2, …), …)
width = len(seq[0])
output = [deque() for _ in range(width)]  # one deque per output sequence
for element in seq:
for s, item in zip(output, element):
s.append(item)
But, depending on what you want to do with the result, the choice of collection can make a big difference. In my actual use case, using sets and no internal loop, is noticeably faster than all other approaches.
And, as others have noted, if you are doing this with datasets, it might make sense to use Numpy or Pandas collections instead.
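For completeness, a small sketch of the NumPy route; note that the mixed str/int pairs from the question get promoted to a common dtype, so the numbers come back as strings:
import numpy as np

original = [('a', 1), ('b', 2), ('c', 3), ('d', 4)]
arr = np.array(original)   # shape (4, 2); dtype is a string type because of the letters
letters, numbers = arr.T   # transposing gives one row per original column
print(list(letters))       # ['a', 'b', 'c', 'd']
print(list(numbers))       # ['1', '2', '3', '4']  (strings, not ints)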
A: Just to summarize:
# data
a = ('a', 'b', 'c', 'd')
b = (1, 2, 3, 4)
# forward
zipped = zip(a, b) # [('a', 1), ('b', 2), ('c', 3), ('d', 4)]
# reverse
a_, b_ = zip(*zipped)
# verify
assert a == a_
assert b == b_
A: Here's a simple one-line answer that produces the desired output:
original = [('a', 1), ('b', 2), ('c', 3), ('d', 4)]
list(zip(*original))
# [('a', 'b', 'c', 'd'), (1, 2, 3, 4)]
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19339",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "585"
} |
Q: What is the best way to go from Java/C# to C++? At my university most of my classes have been in Java. I have also recently learned C# (and the Visual Studio environment) at a summer internship. Now I'm taking an Intro to Computer Graphics class and the grad student teaching the class prefers us to use C++ to access the OpenGL bindings via GLUT.
Does anyone have any good resources on how to make a good transition from Java/C# to C++? Obviously pointers are going to be a big issue, but any other things I should be looking out for? Any tutorials, guides, etc. would be very helpful!
Thanks!
A: Yeah, I got bit by the same bug. The university tended to lean on Java, and then allowed you to choose the language you wanted to work with during projects.
The best way is to just jump in. Start small, take baby steps, and just Google things that confuse you when you get there. Also find projects that have released their source code. See how they structure their programs. Basically, just tinker with concepts. There is plenty of information around the web.
Make it fun and grab a C++ game development book so it doesn't become mind numbing too quickly.
Here's some places that I found useful while learning
http://www.cprogramming.com/
http://www.wikipedia.com
http://www.cplusplus.com/
A: If you already know Java/C# I'd recommend going directly to C instead of C++. According to the website, GLUT has the same bindings for C as C++ so you should be all set. Anyways, the best way to learn C is to purchase and read a copy of "The C Programming Language" and sit down with your C compiler and get your stuff to run.
A: Effective C++ by Scott Meyers is a great book to help you learn C++. Gives you an overview of the language and introduces a lot of key concepts that you will use throughout the development of basically any C++ program.
A:
Effective C++ by Scott Meyers is a great book to help you learn C++. Gives you an overview of the language and introduces a lot of key concepts that you will use throughout the development of basically any C++ program.
I love this book in all 3 editions, and it was one of the books in a class I had as a Senior at UT, but it's just not a starting book. You can become comfortable in C++ with a lot less, though you certainly won't be one with the compiler until you have read Meyer's work.
I don't know if it's still in print but I found Navigating C++ usefull, but I was also very comfortable with pointers from Pascal. Err of course I am forgetting that 15 years ago you had to learn what OOP was, now it's a little more assumed. So perhaps Meyer's is not out of line. Thoughts?
A: Wikipedia has an article on comparisons between Java and C++.
You don't have to worry about checked exceptions in C++, but you do need to know about const correctness.
A: There are two main differences: the syntax, and memory management.
In C++ you have pointers, which are more powerful (or less powerful depending on your interpretation of power) object references, which you already know about from Java.
In Java you might do this:
Thing mything = new Thing(); // mything is an object reference
mything.method();
In C++ you would do this:
Thing * mything = new Thing(); // mything is an object pointer
mything->method();
delete mything;
The syntactical difference is obvious: '->' instead of '.' when calling an object method from a pointer to an object. In C++, you have to free the memory explicitly when you are done with an object. At the end of the day you are doing the same thing in C++ and Java, instantiating objects and calling methods, putting useless semicolons at the end of every line, etc. Is it any wonder that Python is becoming so popular?:
mything = Thing() # mything is whatever I want it to be
mything.method()
Skimming through any half decent C++ text will help you fill in the rest of the details.
A: I also thoroughly recommend Bruce Eckel's Thinking in C++. A fantastic book for already experienced programmers that want to get into the C++ mindset.
He is kind enough to make electronic versions of his books available for free.
A: I strongly recommend that anyone learning C++ read Stroustrup's "The C++ Programming Language." Meyers and Eckel have great stuff, but nothing beats learning from the guy who decided what the language should be and how he intended for it to be used.
A: I had the exact same issue. The only book I was able to find was "Pro Visual C++ 2005 for C# Developers" by Dean C. Wills. It's a good read with excellent examples, and I think the angle from which the book comes is probably what you're looking for.
A: You will need a completely different feel for memory handling. Also, think about freeing everything you don't need anymore. In Java and C# you just let go of your objects and the memory gets tidied up for you - you can't do that in C++.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19347",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: "Data Execution Prevention" kills (VS2008) local ASP.Net Development Server (aka Cassini) on Vista 64 Occasionally, I find that while debugging an ASP.Net application (written in visual studio 2008, running on Vista 64-bit) the local ASP.Net development server (i.e. 'Cassini') stops responding.
A message often comes up telling me that "Data Execution Prevention (DEP)" has killed WebDev.WebServer.exe
The event logs simply tell me that "WebDev.WebServer.exe has stopped working"
I've heard that this 'problem' presents itself more often on Vista 64-bit because DEP is on by default. Hence, turning DEP off may 'solve' the problem.
But i'm wondering:
Is there a known bug/situation with Cassini that causes DEP to kill the process?
Alternatively, what is the practical danger of disabling Data Execution Prevention?
A: The only way to know for sure would be to dig through the Cassini source and see if there are any areas where it generates code on the heap and then executes it without clearing the NX flag.
However, instead of doing that, why not use IIS?
EDIT:
The danger of disabling DEP is that you open up security holes. DEP works by not allowing arbitrary generated code on the heap to be executed. This helps prevent malware programs from inserting code into the data segments of legit programs.
A: You are on Vista; IIS got better (version 7), while Cassini stayed crappy.
So just run this app on IIS with a host header and a hosts file entry.
A: You can grant certain programs an exclusion from DEP if you need to.
As Jonathan mentions, this does open you up to any vulnerabilities that application may have.
A: Using IIS in Visual Studio isn't the pain in the ass that it used to be in 1.1/VS02/03 days. There are lots of good reasons to prefer IIS over the Cassini server (articles by Dominick Baier):
Cassini considered harmful
Another Reason why I would not recommend Cassini
Dominick is 'the man' when it comes to IIS and security stuff.
When using IIS for a web app, I always create the app in IIS first, point it at my preferred folder, then get VS to create the project. This means you don't end up cluttering c:\inetpub\wwwroot with your web apps.
Of course, now we have IIS Express which, if you're targeting IIS 7.x, is the obvious choice for developing ASP.NET applications in Visual Studio.
A: Thanks for the answers. I guess I developed such an aversion to IIS in the .net 1.x era that I've refused to consider re-using it -- until now.
aside: when choosing between two equally acceptable answers from ChanChan and Jonathan, I arbitrarily marked Jonathan's as 'accepted' because a) he got in first and b) his rep is currently lower.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Detecting audio silence in WAV files using C# I'm tasked with building a .NET client app to detect silence in a WAV files.
Is this possible with the built-in Windows APIs? Or alternately, any good libraries out there to help with this?
A: Here is a nice variant for detecting alternations around a threshold (beep versus silence):
static class AudioFileReaderExt
{
private static bool IsSilence(float amplitude, sbyte threshold)
{
double dB = 20 * Math.Log10(Math.Abs(amplitude));
return dB < threshold;
}
private static bool IsBeep(float amplitude, sbyte threshold)
{
double dB = 20 * Math.Log10(Math.Abs(amplitude));
return dB > threshold;
}
public static double GetBeepDuration(this AudioFileReader reader,
double StartPosition, sbyte silenceThreshold = -40)
{
int counter = 0;
bool eof = false;
int initial = (int)(StartPosition * reader.WaveFormat.Channels * reader.WaveFormat.SampleRate / 1000);
if (initial > reader.Length) return -1;
reader.Position = initial;
var buffer = new float[reader.WaveFormat.SampleRate * 4];
while (!eof)
{
int samplesRead = reader.Read(buffer, 0, buffer.Length);
if (samplesRead == 0)
eof = true;
for (int n = initial; n < samplesRead; n++)
{
if (IsBeep(buffer[n], silenceThreshold))
{
counter++;
}
else
{
eof=true; break;
}
}
}
double silenceSamples = (double)counter / reader.WaveFormat.Channels;
double silenceDuration = (silenceSamples / reader.WaveFormat.SampleRate) * 1000;
return TimeSpan.FromMilliseconds(silenceDuration).TotalMilliseconds;
}
public static double GetSilenceDuration(this AudioFileReader reader,
double StartPosition, sbyte silenceThreshold = -40)
{
int counter = 0;
bool eof = false;
int initial = (int)(StartPosition * reader.WaveFormat.Channels * reader.WaveFormat.SampleRate / 1000);
if (initial > reader.Length) return -1;
reader.Position = initial;
var buffer = new float[reader.WaveFormat.SampleRate * 4];
while (!eof)
{
int samplesRead = reader.Read(buffer, 0, buffer.Length);
if (samplesRead == 0)
eof=true;
for (int n = initial; n < samplesRead; n++)
{
if (IsSilence(buffer[n], silenceThreshold))
{
counter++;
}
else
{
eof=true; break;
}
}
}
double silenceSamples = (double)counter / reader.WaveFormat.Channels;
double silenceDuration = (silenceSamples / reader.WaveFormat.SampleRate) * 1000;
return TimeSpan.FromMilliseconds(silenceDuration).TotalMilliseconds;
}
}
Main usage:
using (AudioFileReader reader = new AudioFileReader("test.wav"))
{
double duratioff = 1;
double duration = 1;
double position = 1;
while (duratioff >-1 && duration >-1)
{
duration = reader.GetBeepDuration(position);
Console.WriteLine(duration);
position = position + duration;
duratioff = reader.GetSilenceDuration(position);
Console.WriteLine(-duratioff);
position = position + duratioff;
}
}
A: Audio analysis is a difficult thing requiring a lot of complex math (think Fourier Transforms). The question you have to ask is "what is silence". If the audio that you are trying to edit is captured from an analog source, the chances are that there isn't any silence... there will only be areas of soft noise (line hum, ambient background noise, etc).
All that said, an algorithm that should work would be to determine a minimum volume (amplitude) threshold and duration (say, <10 dBA for more than 2 seconds) and then simply do a volume analysis of the waveform looking for areas that meet these criteria (with perhaps some filters for millisecond spikes). I've never written this in C#, but this CodeProject article looks interesting; it describes C# code to draw a waveform... that is the same kind of code which could be used to do other amplitude analysis.
A: http://www.codeproject.com/Articles/19590/WAVE-File-Processor-in-C
This has all the code necessary to strip silence, and mix wave files.
Enjoy.
A: If you want to efficiently calculate the average power over a sliding window: square each sample, then add it to a running total. Subtract the squared value from N samples previous. Then move to the next step. This is the simplest form of a CIC Filter. Parseval's Theorem tells us that this power calculation is applicable to both time and frequency domains.
You may also want to add hysteresis to the system to avoid switching on and off rapidly when the power level is dancing about the threshold level.
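A rough sketch of that idea (not from the answer above; the window length and threshold parameters are made up, and samples are assumed to be floats in [-1, 1]):
using System;

static class SilenceGate
{
    // Marks each sample as silent/non-silent using a running sum of squares
    // (the simple CIC-style filter described above) plus hysteresis.
    public static bool[] DetectSilence(float[] samples, int window,
                                       double enterThreshold, double exitThreshold)
    {
        var silent = new bool[samples.Length];
        double power = 0;            // running sum of squared samples over the window
        bool inSilence = true;

        for (int i = 0; i < samples.Length; i++)
        {
            power += samples[i] * samples[i];
            if (i >= window)
                power -= samples[i - window] * samples[i - window]; // drop the oldest sample

            double avg = power / Math.Min(i + 1, window);

            // Hysteresis: it takes more energy (exitThreshold) to leave silence
            // than to fall back into it (enterThreshold), so the state doesn't
            // flicker while the level hovers around a single threshold.
            if (inSilence && avg > exitThreshold) inSilence = false;
            else if (!inSilence && avg < enterThreshold) inSilence = true;

            silent[i] = inSilence;
        }
        return silent;
    }
}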
A: I'm using NAudio, and I wanted to detect the silence in audio files so I can either report or truncate.
After a lot of research, I came up with this basic implementation. So, I wrote an extension method for the AudioFileReader class which returns the silence duration at the start/end of the file, or starting from a specific position.
Here:
static class AudioFileReaderExt
{
public enum SilenceLocation { Start, End }
private static bool IsSilence(float amplitude, sbyte threshold)
{
double dB = 20 * Math.Log10(Math.Abs(amplitude));
return dB < threshold;
}
public static TimeSpan GetSilenceDuration(this AudioFileReader reader,
SilenceLocation location,
sbyte silenceThreshold = -40)
{
int counter = 0;
bool volumeFound = false;
bool eof = false;
long oldPosition = reader.Position;
var buffer = new float[reader.WaveFormat.SampleRate * 4];
while (!volumeFound && !eof)
{
int samplesRead = reader.Read(buffer, 0, buffer.Length);
if (samplesRead == 0)
eof = true;
for (int n = 0; n < samplesRead; n++)
{
if (IsSilence(buffer[n], silenceThreshold))
{
counter++;
}
else
{
if (location == SilenceLocation.Start)
{
volumeFound = true;
break;
}
else if (location == SilenceLocation.End)
{
counter = 0;
}
}
}
}
// reset position
reader.Position = oldPosition;
double silenceSamples = (double)counter / reader.WaveFormat.Channels;
double silenceDuration = (silenceSamples / reader.WaveFormat.SampleRate) * 1000;
return TimeSpan.FromMilliseconds(silenceDuration);
}
}
This will accept almost any audio file format, not just WAV.
Usage:
using (AudioFileReader reader = new AudioFileReader(filePath))
{
TimeSpan duration = reader.GetSilenceDuration(AudioFileReaderExt.SilenceLocation.Start);
Console.WriteLine(duration.TotalMilliseconds);
}
References:
*
*How audio dB levels are calculated.
*Floating-point samples range.
*More about amplitude.
A: I don't think you'll find any built-in APIs for detection of silence. But you can always use good ol' math/discrete signal processing to find out loudness.
Here's a small example: http://msdn.microsoft.com/en-us/magazine/cc163341.aspx
A: Use Sox. It can remove leading and trailing silences, but you'll have to call it as an exe from your app.
A: See code below from Detecting audio silence in WAV files using C#
private static void SkipSilent(string fileName, short silentLevel)
{
WaveReader wr = new WaveReader(File.OpenRead(fileName));
IntPtr format = wr.ReadFormat();
WaveWriter ww = new WaveWriter(File.Create(fileName + ".wav"),
AudioCompressionManager.FormatBytes(format));
int i = 0;
while (true)
{
byte[] data = wr.ReadData(i, 1);
if (data.Length == 0)
{
break;
}
if (!AudioCompressionManager.CheckSilent(format, data, silentLevel))
{
ww.WriteData(data);
}
i++; // advance to the next block; without this the loop would read the same data forever
}
ww.Close();
wr.Close();
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19353",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32"
} |
Q: How to manage Configuration Settings for each Developer In a .NET project, say you have a configuration setting - like a connection string - stored in a app.config file, which is different for each developer on your team (they may be using a local SQL Server, or a specific server instance, or using a remote server, etc).
How can you structure your solution so that each developer can have their own development "preferences" (i.e. not checked into source control), but provide a default connection string that is checked into source control (thereby supplying the correct defaults for a build process or new developers).
Edit: Can the "file" method suggested by @Jonathon be somehow used with the connectionStrings section?
A: AppSettings can be overridden with a local file:
<appSettings file="localoveride.config"/>
This allows for each developer to keep their own local settings.
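The override file itself is just another appSettings block and is typically excluded from source control, something like this (key name made up):
<!-- localoveride.config - not checked in -->
<appSettings>
  <add key="SomeSetting" value="my local value" />
</appSettings>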
As far as the connection string, in a perfect world all developers should connect to a test DB, not run SQL Server each.
However, I've found it best to keep a file named Web.Config.Prd in source control, and use that for build deployments. If someone modifies web.config, they must also add the change to the .PRD file...There is no good automation there :(
A:
Edit: Can the "file" method suggested
by @Jonathon be somehow used with the
connectionStrings section?
Or you can have multiple connection strings in the checked in config file, and use an AppSettings key to determine which ConnectionString is to be used. I have the following in my codebase for this purpose:
public class ConnectionString
{
public static string Default
{
get
{
if (string.IsNullOrEmpty(ConfigurationManager.AppSettings["DefaultConnectionStringName"]))
throw new ApplicationException("DefaultConnectionStringName must be set in the appSettings");
return GetByName(ConfigurationManager.AppSettings["DefaultConnectionStringName"]);
}
}
public static string GetByName(string dsn)
{
return ConfigurationManager.ConnectionStrings[dsn].ConnectionString;
}
}
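A sketch of how this could be wired up in the config file; the "Dev" and "Build" names here are just examples:
<appSettings>
  <add key="DefaultConnectionStringName" value="Dev" />
</appSettings>
<connectionStrings>
  <add name="Dev" connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=MyDb;Integrated Security=True" />
  <add name="Build" connectionString="Data Source=buildserver;Initial Catalog=MyDb;Integrated Security=True" />
</connectionStrings>
Calling code would then just use ConnectionString.Default (or ConnectionString.GetByName("Build")) and get whichever entry the local config points at.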
A: I always make templates for my config files.
As an example I use NAnt for the building of my projects. I have a file checked in called local.properties.xml.template. My NAnt build will warn the developer if local.properties.xml does not exist. Inside that file will be workstation specific settings. The template will be checked into source control, but the actual config won't be.
A: I use a quite archaic design that just works.
*
*/_Test__app.config
*/_Prod__app.config
*/app.config
Then in my NAnt script, I have a task that takes the current build environment prefix plus app.config and copies it to app.config.
It's nasty, but you can't get in between the providers and ConfigurationManager to spoof it, for example by having providers look at a "dev" or "prod" connection string and just keeping three named connection strings.
nant task:
<target name="copyconfigs" depends="clean">
<foreach item="File" property="filename" unless="${string::get-length(ConfigPrefix) == 0}">
<in>
<items>
<include name="**/${ConfigPrefix}App.config" />
<include name="**/${ConfigPrefix}connectionstrings.config" />
<include name="**/${ConfigPrefix}web.config" />
</items>
</in>
<do>
<copy overwrite="true" file="${filename}" tofile="${string::replace(filename, ConfigPrefix,'')}" />
</do>
</foreach></target>
A:
Can the "file" method suggested by @Jonathon be somehow used with the connectionStrings section?
No, but there is nothing stopping you from storing the ConnectionString as an AppSettings key.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19355",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Experience documentation about Shared Nothing Architecture Do you have any experience of designing a Real Shared-Nothing Architecture?
Would you have some readings to recommend me?
A: Building Scalable Web Sites by Flickr architect Cal Henderson is pretty much the holy book for scalable web architectures.
The presentations by Brad Fitzpatrick of Danga Interactive, creators of LiveJournal, are also excellent case studies. Check out this one first.
A: I think that The J2EE guy still doesn’t get PHP is (still) worth a read.
A: I think you should just study and think about the concept of stateless beans, and then apply it to web app programming. On the server side you have stateless JSON channels; on the client you have all the state, including the authorization token. The server only needs to verify this token, which is included in every JSON request.
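For illustration (field names made up), every request carries the token in place of any server-side session, something like:
POST /api/orders
{ "authToken": "abc123", "action": "list", "customerId": 42 }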
I once designed and wrote a fully AJAX application without using HTTP sessions. In fact I was a beginner then and didn't know that it is called Shared-Nothing Architecture :) I did fine with just my own invention, without any reading. But maybe I should describe my experiences in detail for other people...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19358",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Rich GUI OS X Frameworks? What would you recommend for OS X development of a graphical application like those possible in WPF?
My specific background is in Smalltalk & Java, but I currently work mostly in DHTML/.NET (ASP.NET/C#).
A: Aside from Interface Builder, which is included as part of the Xcode tools, you can also use Qt, GTK+, AWT & Swing (for your Java background), Tk, Squeak (for your Smalltalk background), Shoes (very cool little Ruby GUI toolkit), FXRuby (more Ruby), wxWidgets, XULRunner, and others I'm sure I've forgotten. For the most native-like apps, however, Interface Builder is your best bet.
A: Cocoa is the primary framework to use on Mac OS X. It's what Apple uses, it's what most new development uses, and it's where new features are principally added.
If you're coming from WPF, I think you might find quite a few of the concepts in Cocoa familiar. (Despite the fact that Cocoa is just a bit older.) It's built entirely around MVC, there are property-change notifications and bindings, there's animation support, there's a persistence and object-graph management framework, and so on.
(Also, you might want to add "mac" to the tags.)
A: With your Java background, don't get sidetracked by the now deprecated Cocoa-Java bridge. Early in OS X history, Apple provided a (laboriously hand-maintained) Java interface for the Cocoa libraries. Because of the semantic differences between Java and Objective-C, bridging many of the most powerful features of Cocoa, including key-value binding (upon which many other features are built), was very difficult, leading to divergence of Objective-C and Java capabilities and the eventual deprecation of the bridge. All Cocoa development is best done with Objective-C or one of the many (automatically generated) bridges to dynamic languages such as Python or Ruby.
With your background in smalltalk, I would expect you could pick up Objective-C in a day or two.
A: Cocoa. Considered by many to be the best application framework ever. The language is Objective-C, a Smalltalk-like language that inspired the creators of Java.
Really, there is no reasonable alternative to Cocoa for OS X development, unless you have specific needs like wanting to be cross-platform.
A: I'm not sure what WPF is, but most development for the OSX platform is done in Objective-C with Cocoa. You can use the deprecated Carbon APIs with other languages like Java, but new applications for OSX really should be developed in Objective-C. You can start with Apple's guide with Xcode as your IDE.
A: To put it a different way than previous posters: if you are not designing your interface in InterfaceBuilder and manipulating it with Objective-C, then you are going to end up with an application that does not look, feel, act, or work the way a Macintosh application should, and it will stick out like a sore thumb to users. It will be an unpleasant experience for the user compared to other apps, and they will likely desire a different application because of it.
Toolkits like QT are acceptable if your application already uses QT and you want to port it fast, but if you're writing a new application (or a separate GUI) then write it in Cocoa using ObjC or ObjC++.
A: You might have a look at PyObjc which is a bridge between the Python programming language and Objective-C, including bindings for Mac OS X components, including Cocoa.
A: With a Smalltalk background, I'd recommend straight Cocoa and Objective-C. However, if you're leaning towards a dynamic language, RubyCocoa will let you use Ruby which I think you'll find easier to pick up than Python.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19387",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: GOTO command in PHP? I've heard rumors that PHP is planning on introducing a "goto" command. What is it supposed to be doing?
I've tried searching a bit, but haven't found anything awfully descriptive. I understand that it won't be a "GOTO 10"-like command...
A: They are not adding a real GOTO, but extending the BREAK keyword to use static labels. Basically, it enhances the ability to break out of switches and nested statements. Here's the concept example I found:
<?php
for ($i = 0; $i < 9; $i++) {
if (true) {
break blah;
}
echo "not shown";
blah:
echo "iteration $i\n";
}
?>
Of course, once the GOTO "rumor" was out, there was nothing to stop some evil guys from propagating an additional COMEFROM joke. Be on your toes.
See also:
http://www.php.net/~derick/meeting-notes.html#adding-goto
A: I'm always astonished at how incredibly dumb the PHP designers are.
If the purpose of using GOTOs is to make breaking out of multiply nested
loops more efficient there's a better way: labelled code blocks
and break statements that can reference labels:
a: for (...) {
b: for (...) {
c: for (...) {
...
break a;
}
}
}
Now it is clear which loop/block to exit, and the exit is structured;
you can't get spaghetti code with this like you can with real gotos.
This is an old, old, old idea. Designing good control flow management
structures has been solved since the 70s, and the literature on all this
is long since written up. The Bohm-Jacopini theorem showed that
you could code anything with function call, if-then-else, and while loops.
In practice, to break out of deeply nested blocks, Bohm-Jacopini style
coding required extra boolean flags ("set this flag to get out of the loop")
which was clumsy coding-wise and inefficient (you don't want such flags
in your inner loop). With if-then-else, various loops (while, for)
and break-to-labelled block, you can code any algorithm with no
loss in efficiency. Why don't people read the literature, instead
of copying what C did? Grrr.
A: Granted, I am not a PHP programmer, and I don't know what PHP's exact implementation of GOTO will look like, but here is my understanding of GOTO:
GOTO is just a more explicit flow control statement like any other. Let's say you have some nested loops and you only need to find one thing. You can put in a conditional statement (or several) and when conditions are met properly, you can use a GOTO statement to get out of all the loops (instead of having a 'break' statement at each level of nesting with a conditional statement for each). And yes, I believe the traditional implementation is to have named labels that the GOTO statement can jump to by name. You can do something like this:
for(...) {
for (...) {
for (...) {
// some code
if (x) GOTO outside;
}
}
}
outside:
This is a simpler (and more efficient) implementation than without GOTO statements. The equivalent would be:
for(...) {
for (...) {
for (...) {
// some code
if (x) break;
}
if(x) break;
}
if(x) break;
}
In the second case (which is common practice) there are three conditional statements, which is obviously slower than just having one. So, for optimization/simplification reasons, you might want to use GOTO statements in tightly nested loops.
A: In the example given by steveth45 you can use a function instead:
function findItem(...) {
for (...) {
for (...) {
for (...) {
if (x) {
return theItem;
}
}
}
}
}
// no need for label now
theItem = findItem(a, b, c);
A: It looks like it's currently in PHP 5.3, but is not fully documented yet. From what I can tell it shares its goto syntax with C, so it should be easy to pick up and use. Just remember Dijkstra's warning and use it only when necessary.
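For reference, the goto that shipped in PHP 5.3 looks roughly like this (you can jump out of loops but not into them, and the label must be in the same file and function):
<?php
for ($i = 0; $i < 100; $i++) {
    for ($j = 0; $j < 100; $j++) {
        if ($i * $j > 50) {
            goto done; // bail out of both loops at once
        }
    }
}
done:
echo "finished at i=$i, j=$j\n";
?>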
A: @steveth45
My rule of thumb is that if you have nested code more than 3 levels deep, you are doing
something wrong.
Then you don't have to worry about using multiple break statements or goto :D
A: There is a goto in PHP -> http://php.net/manual/en/control-structures.goto.php, but I wouldn't use it; just write normal code...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19388",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Any way to stop Windows bringing app to front when displaying a context menu on tray icon? We are experiencing this annoying problem where we have a context menu on our tray icon: if we display this context menu, we have to SetForegroundWindow and bring it to the front. This is really annoying and not at all what we want.
Is there a workaround? I notice that Outlook, MS Messenger and other MS apps do not suffer this; perhaps they are not using a standard menu and have had to write their own... why don't they release this code if they have?
This article describes the 'as designed' behaviour: Menus for Notification Icons Do Not Work Correctly
EDIT
We are using C++/Win32 not forms, so we use TrackPopupMenu.
A: Are you using ContextMenu or ContextMenuStrip?
You're saying that opening the ContextMenu on a tray icon focuses all app forms?
I have not experienced that, though I use the newer ContextMenuStrip class, not ContextMenu, for my tray icons.
EDIT: Would be nice to know if you are using Windows.Forms or WIN32, or MFC or what.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19401",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Incrementing from 0 to 100 in assembly language This is kinda oddball, but I was poking around with the GNU assembler today (I want to be able to at least read the syntax), and was trying to get this little contrived example of mine to work. Namely I just want to go from 0 to 100, printing out numbers all the while. So a few minutes later I come up with this:
# count.s: print the numbers from 0 to 100.
.text
string: .asciz "%d\n"
.globl _main
_main:
movl $0, %eax # The starting point/current value.
movl $100, %ebx # The ending point.
_loop:
# Display the current value.
pushl %eax
pushl $string
call _printf
addl $8, %esp
# Check against the ending value.
cmpl %eax, %ebx
je _end
# Increment the current value.
incl %eax
jmp _loop
_end:
All I get from this is 3 printed over and over again. Like I said, just a little contrived example, so don't worry too much about it, it's not a life or death problem.
(The formatting's a little messed up, but nothing major).
A: I'm not too familiar with _printf, but could it be that it modifies eax? Printf should return the number of chars printed, which in this case is two: '0' and '\n'. I think it returns this in eax, and when you increment it, you get 3, which is what you proceed to print.
You might be better off using a different register for the counter.
A: You can safely use registers that are "callee-saved" without having to save them yourself. On x86 these are edi, esi, and ebx; other architectures have more.
These are documented in the ABI references: http://math-atlas.sourceforge.net/devel/assembly/
A: Well written functions will usually push all the registers onto the stack and then pop them when they're done so that they remain unchanged during the function. The exception would be eax that contains the return value. Library functions like printf are most likely written this way, so I wouldn't do as Wedge suggests:
You'll need to do the same for any other variable you have. Using registers to store local variables is pretty much reserved to architectures with enough registers to support it (e.g. EPIC, amd64, etc.)
In fact, from what I know, compilers usually compile functions that way to deal exactly with this issue.
@seanyboy, your solution is overkill. All that's needed is to replace eax with some other register like ecx.
A: You can't trust what any called procedure does to any of the registers.
Either push the registers onto the stack and pop them back off after calling printf or have the increment and end point values held in memory and read/written into registers as you need them.
I hope the following works. I'm assuming that pushl has an equivalent popl and you can push an extra couple of numbers onto the stack.
# count.s: print the numbers from 0 to 100.
.text
string: .asciz "%d\n"
.globl _main
_main:
movl $0, %eax # The starting point/current value.
movl $100, %ebx # The ending point.
_loop:
# Remember your registers.
pushl %eax
pushl %ebx
# Display the current value.
pushl %eax
pushl $string
call _printf
addl $8, %esp
# reinstate registers.
popl %ebx
popl %eax
# Check against the ending value.
cmpl %eax, %ebx
je _end
# Increment the current value.
incl %eax
jmp _loop
_end:
A: Nathan is on the right track. You can't assume that register values will be unmodified after calling a subroutine. In fact, it's best to assume they will be modified, else the subroutine wouldn't be able to do its work (at least for low register count architectures like x86). If you want to preserve a value you should store it in memory (e.g. push it onto the stack and keep track of its location).
You'll need to do the same for any other variable you have. Using registers to store local variables is pretty much reserved to architectures with enough registers to support it (e.g. EPIC, amd64, etc.)
A: You could rewrite it so that you use registers that aren't supposed to change, for example %ebp. Just make sure you push them onto the stack at the beginning, and pop them off at the end of your routine.
# count.s: print the numbers from 0 to 100.
.text
string: .asciz "%d\n"
.globl _main
_main:
push %ecx
push %ebp
movl $0, %ecx # The starting point/current value.
movl $100, %ebp # The ending point.
_loop:
# Display the current value.
pushl %ecx
pushl $string
call _printf
addl $8, %esp
# Check against the ending value.
cmpl %ecx, %ebp
je _end
# Increment the current value.
incl %ecx
jmp _loop
_end:
pop %ebp
pop %ecx
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19409",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: How to request a random row in SQL? How can I request a random row (or as close to truly random as is possible) in pure SQL?
A: See this post: SQL to Select a random row from a database table. It goes through methods for doing this in MySQL, PostgreSQL, Microsoft SQL Server, IBM DB2 and Oracle (the following is copied from that link):
Select a random row with MySQL:
SELECT column FROM table
ORDER BY RAND()
LIMIT 1
Select a random row with PostgreSQL:
SELECT column FROM table
ORDER BY RANDOM()
LIMIT 1
Select a random row with Microsoft SQL Server:
SELECT TOP 1 column FROM table
ORDER BY NEWID()
Select a random row with IBM DB2
SELECT column, RAND() as IDX
FROM table
ORDER BY IDX FETCH FIRST 1 ROWS ONLY
Select a random record with Oracle:
SELECT column FROM
( SELECT column FROM table
ORDER BY dbms_random.value )
WHERE rownum = 1
A: I don't know how efficient this is, but I've used it before:
SELECT TOP 1 * FROM MyTable ORDER BY newid()
Because GUIDs are pretty random, the ordering means you get a random row.
A: If possible, use prepared statements to avoid the inefficiency of both ordering by RAND() and creating a record number field.
PREPARE RandomRecord FROM "SELECT * FROM table LIMIT ?,1";
SET @n=FLOOR(RAND()*(SELECT COUNT(*) FROM table));
EXECUTE RandomRecord USING @n;
A: ORDER BY NEWID()
takes 7.4 milliseconds
WHERE num_value >= RAND() * (SELECT MAX(num_value) FROM table)
takes 0.0065 milliseconds!
I will definitely go with the latter method.
A: Best way is putting a random value in a new column just for that purpose, and using something like this (pseudo code + SQL):
randomNo = random()
execSql("SELECT TOP 1 * FROM MyTable WHERE MyTable.Randomness > $randomNo")
This is the solution employed by the MediaWiki code. Of course, there is some bias against smaller values, but they found that it was sufficient to wrap the random value around to zero when no rows are fetched.
newid() solution may require a full table scan so that each row can be assigned a new guid, which will be much less performant.
rand() solution may not work at all (i.e. with MSSQL) because the function will be evaluated just once, and every row will be assigned the same "random" number.
A: For SQL Server 2005 and 2008, if we want a random sample of individual rows (from Books Online):
SELECT * FROM Sales.SalesOrderDetail
WHERE 0.01 >= CAST(CHECKSUM(NEWID(), SalesOrderID) & 0x7fffffff AS float)
/ CAST (0x7fffffff AS int)
A: I'm late, but got here via Google, so for the sake of posterity, I'll add an alternative solution.
Another approach is to use TOP twice, with alternating orders. I don't know if it is "pure SQL", because it uses a variable in the TOP, but it works in SQL Server 2008. Here's an example I use against a table of dictionary words, if I want a random word.
SELECT TOP 1
word
FROM (
SELECT TOP(@idx)
word
FROM
dbo.DictionaryAbridged WITH(NOLOCK)
ORDER BY
word DESC
) AS D
ORDER BY
word ASC
Of course, @idx is some randomly-generated integer that ranges from 1 to COUNT(*) on the target table, inclusively. If your column is indexed, you'll benefit from it too. Another advantage is that you can use it in a function, since NEWID() is disallowed.
Lastly, the above query runs in about 1/10 of the exec time of a NEWID()-type of query on the same table. YMMV.
A: Instead of using RAND(), which is discouraged, you may simply get the max ID (=Max):
SELECT MAX(ID) FROM TABLE;
get a random between 1..Max (=My_Generated_Random)
My_Generated_Random = rand_in_your_programming_lang_function(1..Max);
and then run this SQL:
SELECT ID FROM TABLE WHERE ID >= My_Generated_Random ORDER BY ID LIMIT 1
Note that it will check for any rows which Ids are EQUAL or HIGHER than chosen value.
It's also possible to hunt for the row down in the table, and get an equal or lower ID than the My_Generated_Random, then modify the query like this:
SELECT ID FROM TABLE WHERE ID <= My_Generated_Random ORDER BY ID DESC LIMIT 1
A: As pointed out in @BillKarwin's comment on @cnu's answer...
When combining with a LIMIT, I've found that it performs much better (at least with PostgreSQL 9.1) to JOIN with a random ordering rather than to directly order the actual rows: e.g.
SELECT * FROM tbl_post AS t
JOIN ...
JOIN ( SELECT id, CAST(-2147483648 * RANDOM() AS integer) AS rand
FROM tbl_post
WHERE create_time >= 1349928000
) r ON r.id = t.id
WHERE create_time >= 1349928000 AND ...
ORDER BY r.rand
LIMIT 100
Just make sure that the 'r' generates a 'rand' value for every possible key value in the complex query which is joined with it but still limit the number of rows of 'r' where possible.
The CAST as Integer is especially helpful for PostgreSQL 9.2 which has specific sort optimisation for integer and single precision floating types.
A: For MySQL, to get a random record:
SELECT name
FROM random AS r1 JOIN
(SELECT (RAND() *
(SELECT MAX(id)
FROM random)) AS id)
AS r2
WHERE r1.id >= r2.id
ORDER BY r1.id ASC
LIMIT 1
More detail http://jan.kneschke.de/projects/mysql/order-by-rand/
A: With SQL Server 2012+ you can use the OFFSET FETCH query to do this for a single random row
select * from MyTable ORDER BY id OFFSET n ROW FETCH NEXT 1 ROWS ONLY
where id is an identity column, and n is the row you want - calculated as a random number between 0 and count()-1 of the table (offset 0 is the first row after all)
This works with holes in the table data, as long as you have an index to work with for the ORDER BY clause. It's also very good for the randomness - as you work that out yourself to pass in - and the niggles in other methods are not present. In addition the performance is pretty good; on a smaller dataset it holds up well, though I've not tried serious performance tests against several million rows.
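A sketch of how that offset could be computed server-side (table and column names are placeholders):
DECLARE @rows int = (SELECT COUNT(*) FROM MyTable);
DECLARE @n int = FLOOR(RAND() * @rows); -- random offset in 0 .. @rows-1

SELECT *
FROM MyTable
ORDER BY id
OFFSET @n ROWS FETCH NEXT 1 ROWS ONLY;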
A: The random function in SQL could help. Also, if you would like to limit the result to just one row, just add that at the end.
SELECT column FROM table
ORDER BY RAND()
LIMIT 1
A: For SQL Server and needing "a single random row"..
If not needing a true sampling, generate a random value [0, max_rows) and use the ORDER BY..OFFSET..FETCH from SQL Server 2012+.
This is very fast if the COUNT and ORDER BY are over appropriate indexes - such that the data is 'already sorted' along the query lines. If these operations are covered it's a quick request and does not suffer from the horrid scalability of using ORDER BY NEWID() or similar. Obviously, this approach won't scale well on a non-indexed HEAP table.
declare @rows int
select @rows = count(1) from t
-- Other issues if row counts in the bigint range..
-- This is also not 'true random', although such is likely not required.
declare @skip int = convert(int, @rows * rand())
select t.*
from t
order by t.id -- Make sure this is clustered PK or IX/UCL axis!
offset (@skip) rows
fetch first 1 row only
Make sure that the appropriate transaction isolation levels are used and/or account for 0 results.
For SQL Server and needing a "general row sample" approach..
Note: This is an adaptation of the answer as found on a SQL Server specific question about fetching a sample of rows. It has been tailored for context.
While a general sampling approach should be used with caution here, it's still potentially useful information in context of other answers (and the repetitious suggestions of non-scaling and/or questionable implementations). Such a sampling approach is less efficient than the first code shown and is error-prone if the goal is to find a "single random row".
Here is an updated and improved form of sampling a percentage of rows. It is based on the same concept of some other answers that use CHECKSUM / BINARY_CHECKSUM and modulus.
*
*It is relatively fast over huge data sets and can be efficiently used in/with derived queries. Millions of pre-filtered rows can be sampled in seconds with no tempdb usage and, if aligned with the rest of the query, the overhead is often minimal.
*Does not suffer from CHECKSUM(*) / BINARY_CHECKSUM(*) issues with runs of data. When using the CHECKSUM(*) approach, the rows can be selected in "chunks" and not "random" at all! This is because CHECKSUM prefers speed over distribution.
*Results in a stable/repeatable row selection and can be trivially changed to produce different rows on subsequent query executions. Approaches that use NEWID() can never be stable/repeatable.
*Does not use ORDER BY NEWID() of the entire input set, as ordering can become a significant bottleneck with large input sets. Avoiding unnecessary sorting also reduces memory and tempdb usage.
*Does not use TABLESAMPLE and thus works with a WHERE pre-filter.
Here is the gist. See this answer for additional details and notes.
Naïve try:
declare @sample_percent decimal(7, 4)
-- Looking at this value should be an indicator of why a
-- general sampling approach can be error-prone to select 1 row.
select @sample_percent = 100.0 / count(1) from t
-- BAD!
-- When choosing appropriate sample percent of "approximately 1 row"
-- it is very reasonable to expect 0 rows, which definitely fails the ask!
-- If choosing a larger sample size the distribution is heavily skewed forward,
-- and is very much NOT 'true random'.
select top 1
t.*
from t
where 1=1
and ( -- sample
@sample_percent = 100
or abs(
convert(bigint, hashbytes('SHA1', convert(varbinary(32), t.rowguid)))
) % (1000 * 100) < (1000 * @sample_percent)
)
This can be largely remedied by a hybrid query, by mixing sampling and ORDER BY selection from the much smaller sample set. This limits the sorting operation to the sample size, not the size of the original table.
-- Sample "approximately 1000 rows" from the table,
-- dealing with some edge-cases.
declare @rows int
select @rows = count(1) from t
declare @sample_size int = 1000
declare @sample_percent decimal(7, 4) = case
when @rows <= 1000 then 100 -- not enough rows
when (100.0 * @sample_size / @rows) < 0.0001 then 0.0001 -- min sample percent
else 100.0 * @sample_size / @rows -- everything else
end
-- There is a statistical "guarantee" of having sampled a limited-yet-non-zero number of rows.
-- The limited rows are then sorted randomly before the first is selected.
select top 1
t.*
from t
where 1=1
and ( -- sample
@sample_percent = 100
or abs(
convert(bigint, hashbytes('SHA1', convert(varbinary(32), t.rowguid)))
) % (1000 * 100) < (1000 * @sample_percent)
)
-- ONLY the sampled rows are ordered, which improves scalability.
order by newid()
A: Solutions like Jeremies:
SELECT * FROM table ORDER BY RAND() LIMIT 1
work, but they need a sequential scan of all the table (because the random value associated with each row needs to be calculated - so that the smallest one can be determined), which can be quite slow for even medium sized tables. My recommendation would be to use some kind of indexed numeric column (many tables have these as their primary keys), and then write something like:
SELECT * FROM table WHERE num_value >= RAND() *
( SELECT MAX (num_value ) FROM table )
ORDER BY num_value LIMIT 1
This works in logarithmic time, regardless of the table size, if num_value is indexed. One caveat: this assumes that num_value is equally distributed in the range 0..MAX(num_value). If your dataset strongly deviates from this assumption, you will get skewed results (some rows will appear more often than others).
A: You didn't say which server you're using. In older versions of SQL Server, you can use this:
select top 1 * from mytable order by newid()
In SQL Server 2005 and up, you can use TABLESAMPLE to get a random sample that's repeatable:
SELECT FirstName, LastName
FROM Contact
TABLESAMPLE (1 ROWS) ;
A: For SQL Server
newid()/order by will work, but will be very expensive for large result sets because it has to generate an id for every row, and then sort them.
TABLESAMPLE() is good from a performance standpoint, but you will get clumping of results (all rows on a page will be returned).
For a better performing true random sample, the best way is to filter out rows randomly. I found the following code sample in the SQL Server Books Online article Limiting Results Sets by Using TABLESAMPLE:
If you really want a random sample of
individual rows, modify your query to
filter out rows randomly, instead of
using TABLESAMPLE. For example, the
following query uses the NEWID
function to return approximately one
percent of the rows of the
Sales.SalesOrderDetail table:
SELECT * FROM Sales.SalesOrderDetail
WHERE 0.01 >= CAST(CHECKSUM(NEWID(),SalesOrderID) & 0x7fffffff AS float)
/ CAST (0x7fffffff AS int)
The SalesOrderID column is included in
the CHECKSUM expression so that
NEWID() evaluates once per row to
achieve sampling on a per-row basis.
The expression CAST(CHECKSUM(NEWID(),
SalesOrderID) & 0x7fffffff AS float /
CAST (0x7fffffff AS int) evaluates to
a random float value between 0 and 1.
When run against a table with 1,000,000 rows, here are my results:
SET STATISTICS TIME ON
SET STATISTICS IO ON
/* newid()
rows returned: 10000
logical reads: 3359
CPU time: 3312 ms
elapsed time = 3359 ms
*/
SELECT TOP 1 PERCENT Number
FROM Numbers
ORDER BY newid()
/* TABLESAMPLE
rows returned: 9269 (varies)
logical reads: 32
CPU time: 0 ms
elapsed time: 5 ms
*/
SELECT Number
FROM Numbers
TABLESAMPLE (1 PERCENT)
/* Filter
rows returned: 9994 (varies)
logical reads: 3359
CPU time: 641 ms
elapsed time: 627 ms
*/
SELECT Number
FROM Numbers
WHERE 0.01 >= CAST(CHECKSUM(NEWID(), Number) & 0x7fffffff AS float)
/ CAST (0x7fffffff AS int)
SET STATISTICS IO OFF
SET STATISTICS TIME OFF
If you can get away with using TABLESAMPLE, it will give you the best performance. Otherwise use the newid()/filter method. newid()/order by should be last resort if you have a large result set.
A: SELECT * FROM table ORDER BY RAND() LIMIT 1
A: Most of the solutions here aim to avoid sorting, but they still need to make a sequential scan over a table.
There is also a way to avoid the sequential scan by switching to index scan. If you know the index value of your random row you can get the result almost instantially. The problem is - how to guess an index value.
The following solution works on PostgreSQL 8.4:
explain analyze select * from cms_refs where rec_id in
(select (random()*(select last_value from cms_refs_rec_id_seq))::bigint
from generate_series(1,10))
limit 1;
In the above solution you guess 10 random index values from the range 0 .. [last value of id].
The number 10 is arbitrary - you may use 100 or 1000 as it (amazingly) doesn't have a big impact on the response time.
There is also one problem - if you have sparse ids you might miss. The solution is to have a backup plan :) In this case a plain old ORDER BY random() query. When combined, it looks like this:
explain analyze select * from cms_refs where rec_id in
(select (random()*(select last_value from cms_refs_rec_id_seq))::bigint
from generate_series(1,10))
union all (select * from cms_refs order by random() limit 1)
limit 1;
Note the UNION ALL clause. In this case if the first part returns any data the second one is NEVER executed!
A: You may also try using the newid() function.
Just write your query and use ORDER BY newid(). It's quite random.
A: Didn't quite see this variation in the answers yet. I had an additional constraint where I needed, given an initial seed, to select the same set of rows each time.
For MS SQL:
Minimum example:
select top 10 percent *
from table_name
order by rand(checksum(*))
Normalized execution time: 1.00
NewId() example:
select top 10 percent *
from table_name
order by newid()
Normalized execution time: 1.02
NewId() is insignificantly slower than rand(checksum(*)), so you may not want to use it against large record sets.
Selection with Initial Seed:
declare @seed int
set @seed = Year(getdate()) * month(getdate()) /* any other initial seed here */
select top 10 percent *
from table_name
order by rand(checksum(*) % @seed) /* any other math function here */
If you need to select the same set given a seed, this seems to work.
A: In MSSQL (tested on 11.0.5569) using
SELECT TOP 100 * FROM employee ORDER BY CRYPT_GEN_RANDOM(10)
is significantly faster than
SELECT TOP 100 * FROM employee ORDER BY NEWID()
A: For Firebird:
Select FIRST 1 column from table ORDER BY RAND()
A: In SQL Server you can combine TABLESAMPLE with NEWID() to get pretty good randomness and still have speed. This is especially useful if you really only want 1, or a small number, of rows.
SELECT TOP 1 * FROM [table]
TABLESAMPLE (500 ROWS)
ORDER BY NEWID()
A: I have to agree with CD-MaN: Using "ORDER BY RAND()" will work nicely for small tables or when you do your SELECT only a few times.
I also use the "num_value >= RAND() * ..." technique, and if I really want to have random results I have a special "random" column in the table that I update once a day or so. That single UPDATE run will take some time (especially because you'll have to have an index on that column), but it's much faster than creating random numbers for every row each time the select is run.
A: Be careful because TableSample doesn't actually return a random sample of rows. It directs your query to look at a random sample of the 8KB pages that make up your row. Then, your query is executed against the data contained in these pages. Because of how data may be grouped on these pages (insertion order, etc), this could lead to data that isn't actually a random sample.
See: http://www.mssqltips.com/tip.asp?tip=1308
This MSDN page for TableSample includes an example of how to generate an actually random sample of data.
http://msdn.microsoft.com/en-us/library/ms189108.aspx
A: It seems that many of the ideas listed still use ordering.
However, if you use a temporary table, you are able to assign a random index (like many of the solutions have suggested), and then grab the first one that is greater than an arbitrary number between 0 and 1.
For example (for DB2):
WITH TEMP AS (
SELECT COLUMN, RAND() AS IDX FROM TABLE)
SELECT COLUMN FROM TEMP WHERE IDX > .5
FETCH FIRST 1 ROW ONLY
A: A simple and efficient way from http://akinas.com/pages/en/blog/mysql_random_row/
SET @i = (SELECT FLOOR(RAND() * COUNT(*)) FROM table);
PREPARE get_stmt FROM 'SELECT * FROM table LIMIT ?, 1';
EXECUTE get_stmt USING @i;
A: There is a better solution for Oracle than using dbms_random.value, since ordering rows by dbms_random.value requires a full scan and is quite slow for large tables.
Use this instead:
SELECT *
FROM employee sample(1)
WHERE rownum=1
A: For SQL Server 2005 and above, extending @GreyPanther's answer for the cases when num_value does not have continuous values. This also works for cases when the dataset is not evenly distributed and when num_value is not a number but a unique identifier.
WITH CTE_Table (SelRow, num_value)
AS
(
SELECT ROW_NUMBER() OVER(ORDER BY ID) AS SelRow, num_value FROM table
)
SELECT * FROM table Where num_value = (
SELECT TOP 1 num_value FROM CTE_Table WHERE SelRow >= RAND() * (SELECT MAX(SelRow) FROM CTE_Table)
)
A: select r.id, r.name from table AS r
INNER JOIN(select CEIL(RAND() * (select MAX(id) from table)) as id) as r1
ON r.id >= r1.id ORDER BY r.id ASC LIMIT 1
This requires less computation time.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19412",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "588"
} |
Q: What is a good free library for editing MP3s/FLACs? What is a good free library for editing MP3s/FLACs.
By editing I mean:
*
*Cutting audio file into multiple parts
*Joining multiple audio files together
*Increase playback speed of file without affecting the pitch (eg. podcasts up to 1.3x)
*Re-encoding audio file from Flac -> MP3 or vice versa
I don't mean software, I mean a library that I can use within another application. Programming language agnostic.
A: Just about every language has bindings to C, so you'll probably want to get the applicable C libraries for encoding/decoding mp3's and FLAC files. This list might include
libFLAC http://flac.sourceforge.net/api/index.html FLAC encoding/decoding
LAME http://lame.sourceforge.net/index.php MP3 encoding
MAD http://www.underbit.com/products/mad/ MP3 decoding
The rest of your signal processing needs could be gathered around a single popular API such as LADSPA http://www.ladspa.org/.
Here's a stretching / pitch shifting library: http://www.breakfastquay.com/rubberband/
Most audio processing programs have a certain internal format they use. That keeps things simple. Everything coming in gets converted to the same format. Once you've standardized the internal format, cutting and splicing audio data is about as difficult as cutting and splicing strings. You don't really need a library for that.
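As a trivial sketch of that last point (function and type choices made up; both clips assumed to be already decoded to the same 16-bit PCM format and sample rate), joining two clips really is just copying arrays:
#include <string.h>

/* Append clip 'b' to clip 'a'; 'out' must hold a_len + b_len samples. */
void join_clips(const short *a, size_t a_len,
                const short *b, size_t b_len,
                short *out)
{
    memcpy(out, a, a_len * sizeof(short));
    memcpy(out + a_len, b, b_len * sizeof(short));
}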
A: I use Audacity for all my editing needs
Audacity is a free, easy-to-use audio
editor and recorder for Windows, Mac
OS X, GNU/Linux and other operating
systems. You can use Audacity to:
* Record live audio.
* Convert tapes and records into digital recordings or CDs.
* Edit Ogg Vorbis, MP3, WAV or AIFF sound files.
* Cut, copy, splice or mix sounds together.
* Change the speed or pitch of a recording.
A: Audacity uses the LAME library; however, not only is this not language agnostic, it also has some questions over licensing. Nevertheless it might be a start.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19433",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Delete Datalist row with multiple primary keys I have a datalist with a OnDeleteCommand="Delete_Command".
I want to delete a record with multiple primary keys, but I do not know how to access them from the Delete_Command event.
If I use DataKeyField I'm limited to only one key.
Any workarounds for this?
A: You can access all of the keys:
gridView.DataKeys[rowNum][dataKeyName]
where rowNum is e.RowIndex from the gridView_RowDeleting event handler, and dataKeyName is the key you want to get:
<asp:GridView ID="gridView" runat="server" DataKeyNames="userid, id1, id2, id3" OnRowDeleting="gridView_RowDeleting">
protected void gridView_RowDeleting(object sender, GridViewDeleteEventArgs e)
{
gridView.DataKeys[e.RowIndex]["userid"]...
gridView.DataKeys[e.RowIndex]["id1"]...
gridView.DataKeys[e.RowIndex]["id2"]...
gridView.DataKeys[e.RowIndex]["id3"]...
}
A: Oh, sorry, I missed it.
AFAIK there is no such possibility by default. Maybe you can create a composite key from your primary keys, like
Key1_Key2_Key3
and split it in the event handler. So this is a DIY multi-key handler for DataList :-)
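A rough sketch of what that could look like in the handler (control and key names here are made up):
// In the .aspx: DataKeyField="CompositeKey", where CompositeKey is built as "Key1_Key2_Key3"
protected void Delete_Command(object source, DataListCommandEventArgs e)
{
    string composite = (string)myDataList.DataKeys[e.Item.ItemIndex];
    string[] keys = composite.Split('_');
    string key1 = keys[0], key2 = keys[1], key3 = keys[2];
    // use key1/key2/key3 in the DELETE statement
}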
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19436",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Custom font in SQL Server 2005 Reporting Services I'm having issues with my SQL Reporting Services reports. I'm using a custom font for report headers, and when deployed to the server it does not render correctly when I print or export to PDF/TIFF. I have installed the font on the server. Is there anything else I need to do in order to use custom fonts?
When viewing the font in the browser it looks correct - since all client computers have the font installed...
Thanks Ryan, your post to the FAQ solved the problem. Installing the fonts on the server fixes the print problem, as well as problems with charts (which are also rendered on the server). Like you point out (as well as being mentioned in the FAQ) Reporting Services 2005 does not do font embedding in PDF files. I guess that is okay for now - the most important part was being able to hit print and get the correct fonts.
The reason the fonts didn't show up straight away is answered in the FAQ:
Q: I've installed the font on my client/server but I still see ?'s or
black boxes. Why? A: For the client
machine, closing all instances of the
PDF viewer then reopening them should
fix the issue.
For the server, restarting the
services should allow the PDF renderer
to pick up the new font information.
Unfortunately, I have also seen times
where I needed a full machine reboot
to get the client/server to recognize
the newly installed font.
A: The PDF files served up from SSRS, like many PDF files, have embedded PostScript fonts. The local fonts used in the report are converted to the best matching PostScript font when the conversion takes place, so the PDF is totally portable without relying on locally installed fonts.
You can see the official MS guidelines and font requirements for SSRS PDF exports here: SQL Server 2005 Books Online (September 2007) Designing for PDF Output. Also, this post should provide some help as well: Reporting Services: PDF Renderer FAQ
Aspose apparently also has a component that claims to be able to add custom embedded fonts in SQL Report PDFs.
See Aspose.Pdf for Reporting Services
Aspose.Pdf for Reporting Services
makes it possible generating PDF
reports in Microsoft SQL Server 2000
and 2005 Reporting Services. Some
advanced features like XMP metadata,
custom embedded font and rendering
watermark for pages are now supported.
All RDL report features including
sections, images, charts, tables,
matrices, headers and footers are
converted with the highest degree of
precision to PDF.
I've not tried this component, so I can only share what it claims to be able to do.
A: Note: I have found that when you install the fonts on the Reporting Services server box, you may need to:
= Actually open the font from the Fonts control panel, so you can see the preview
AND
= Reboot the server box.
And yes, I agree you should not need to do this - but I have seen it work.
A: Running into the same problem - When you export to pdf, it doesn't render the Free 3 of 9 font. The font is installed on my report server, and does appear when you run the report using SSRS 2005.
The user can print directly, which is nice. And the report renders successfully during an Excel export. But that requires extra steps to print from Excel (page setup, etc.).
What I found to be a workaround is to use CutePDF (freeware).
Just click the direct print button on SSRS, and choose the CutePDF printer. It asks you where to save the file. Open the file, and the barcode fonts render successfully.
A: We had to install NeoDynamic barcode software to render the barcode as an image since we can't include the barcode fonts in PDF exports.
A: I have used barcode fonts successfully with SSRS and PDF. You must have the font installed on both the server (for rendering and viewing from the browser) and the client.
When using barcode fonts, there's not really a best "match" for PostScript, so the PDF does not have a valid barcode font embedded with the document, which just yields a bunch of garbage text. To solve that, just install the font on the client computer that will view the PDF.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19437",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: how to allow files starting with period and no extension in windows 2003 server? How can I create this file in a directory in windows 2003 SP2:
.hgignore
I get error: You must type a file name.
A: By the way Raymond Chen had a blog post about this topic a while back:
Why doesn't Explorer let you create a file whose name begins with a dot? (archive.org link with comments: https://web.archive.org/web/20100305064616/http://blogs.msdn.com/oldnewthing/archive/2008/04/14/8389268.aspx)
In which he mentions
You can do it from the command line or
use your favorite file management
tool.
A: That's a "feature" of Windows Explorer. Try to create your files from a command line (or from a batch/program you wrote) and it should work fine. Try this from a dos prompt:
echo Hello there! > .hgignore
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19442",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
} |
Q: Content Management system recommendations Management is thinking of changing out Content Management Systems. What do you use/recommend?
*
*What UCM solution is your company using?
*How big is your company?
*Are you happy with the implementation?
Current setup:
*
*The company I work for uses Oracle ECM (formerly Stellent UCM).
*We have somewhere over 10,000 employees across Australia, New
Zealand and Indonesia.
*It works! Having worked with the system for a while now, I can see
where the initial implementation went wrong. It's 3 years later and
it is Rewrite Time! (Three-year itch?)
A: Our external business-orientated site is running Joomla, which, once you get past the learning process of how it constructs sites, is very good for a multi-user environment.
Company = 25+ people
A: 1) CMS: Oracle's BEA Aqualogic
2) Size: 10,000+
3) Experience: As an end user with full community and content admin privileges, I find the tool to be outdated and stifling in terms of knowledge sharing and trying to get the benefits that exist in social networks. Perhaps this is due to the implementation, and not an inherent weakness in the product. Not sure of the future direction of the product either, since Oracle recently acquired it.
A: We use Plone open source for the internal site...
A: We use a DotNetNuke intranet site. I think we need to upgrade or redesign cause I like Joomla much more.
A: 1) We are moving from Microsoft Content Management Server 2002 to Sitecore 6.0, though we have internal PHP wikis and DotNetNuke sites that have user content as well.
2) 1,000-2,000 people with about 3500 pages of Web content to migrate.
3) I'm content with it so far. There is still a lot of work to do in the migration and it will probably take a couple of years to move everything over, which includes legacy ASP and ASP.Net 1.1 and 2.0 sites that haven't been worked on in a few years as well. It would take a lot of things going easily for me to be happy with an implementation of this size.
A: Drupal. I've used it for small and medium sized projects.
A: 1) We're using a CMS that was custom written in vbscript and sucks horribly. We're going to start using MODx for our external stuff, but we're not sure what's going to happen with our internal stuff.
2) A university with about 30,000 students (about 10,000 of which have ties to my department).
3) MODx looks cool, but haven't had much of a chance to use it. As stated previously, our other CMS sucks.
A: Tridion. And yes, there is that 3-year itch. Is Oracle on a new release, or does the first implementation just look wrong now? I remember Stellent being on the development team's shortlist.
Us:
Mid-sized (small?) 700+ employee company, with over a dozen websites, but not all sites have the CMS implemented. The in-house development team has worked on, and still supports, a few custom solutions. Legacy code never dies. :-)
All of the CMS we researched had compelling features, but for content re-use, cross-site sharing, and programmability we found Tridion to be a good fit (compared to Ektron and RedDot). Our mandate was to stay ".NET programmers" and not have the tool take over the site.
I'm comfortable with and like Tridion, but admire those of you who've done CMS work with multiple platforms.
A: 1) My company currently uses WordPress or no CMS at all. We are, however, working on a CMS that will work exactly as we want it to.
2) It's me and my friend so 2 of us
3) We're still starting up and finding clients so haven't had a chance to use it.
A: In my daily work, I use Tridion, and some of my colleagues use Hippo. At home I use Plone.
A: Institution-wide we see a variety of systems.
A few Plone sites. I'm a Plone fan.
The centre within which I work is somewhat multi-institutional (a good history of collaborative work) (one of two research centres situated within the same building) and the Plone sites that I'm setting up are fitting very nicely with diverse user/group requirements.
A: Companies I worked for usually developed CMS systems in-house. I've mostly worked for webshops, and when cranking out websites is your core business, the best way to get an edge is to be on top of this sort of thing.
So custom CMSes for:
*Simplicity, just deliver what the client wants and nothing else.
*Understanding it, it's developed in house so you can usually just talk to the guy who wrote it.
*Profit, it's easier to ask for license fees.
A: *
*We use the Alterian Content Manager application. It is very robust and suits our needs well.
*20,000+ staff
*Very happy. Developers and business team find the application very easy to work with.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19445",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: In a DDS file can you detect textures with 0/1 alpha bits? In my engine I have a need to be able to detect DXT1 textures that have texels with 0 alpha (e.g. a cutout for a window frame). This is easy for textures I compress myself, but I'm not sure about textures that are already compressed.
Is there an easy way to tell from the header whether a DDS image contains alpha?
A: As far as I know, there's no way to tell from the header. There's a DDPF_ALPHAPIXELS flag, but I don't think that will get set based on what's in the pixel data. You'd need to parse the DXT1 blocks, and look for colours that have 0 alpha in them (making sure to check that the colour is actually used in the block, too, I suppose).
A: No, the DDS header only uses alpha flags for uncompressed images. I had a similar need to figure out if a DXT1 image was using 1-bit alpha and after a long search I came across this reference here: https://msdn.microsoft.com/en-us/library/windows/desktop/bb147243(v=vs.85).aspx
Basically, if color_0 <= color_1 then there is a possibility the texture has 1-bit alpha. To further verify it, you need to check the next 32 bits in 2-bit pairs and see whether any pair is 11 (the transparent index). Then continue this for every block until a match is found.
A: I agree with the accepted answer. Your job may be made a bit easier by using the "squish" library to decompress the blocks for you.
http://www.sjbrown.co.uk/?code=squish
A: DDS is a very poor wrapper for DXT (or BTC) data. The header will not help you.
Plain original DXT1 did not have any alpha. I believe D3D nowadays does actually decode DXT1 with alpha though. Every DXT1 block looks like this: color0 (16 bits), color1 (16 bits), indices (32 bits). If the 16-bit color0 value is greater than color1 (just a uint16 comparison, nothing fancy!) the block has no alpha. Otherwise it can. So to answer your question: skip the header, read 16 bits a, read 16 bits b; if a <= b the block can contain transparent texels, otherwise it cannot. Either way, skip the next 32 index bits and repeat until EOF. Other DXT formats like DXT5 always have alpha. It is very rare that people rely on the DXT1 alpha trick because some hardware (Intel...) does not support it reliably.
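For example, a scan along those lines might look like the following C# sketch (assumptions: a plain 128-byte DDS header - the 4-byte "DDS " magic plus the 124-byte DDS_HEADER - with DXT1 block data following directly; a robust version would validate the header and pixel format first):
using System.IO;

static bool Dxt1ContainsTransparentTexels(string path)
{
    using (BinaryReader reader = new BinaryReader(File.OpenRead(path)))
    {
        // Skip the "DDS " magic (4 bytes) and the DDS_HEADER (124 bytes).
        reader.BaseStream.Seek(128, SeekOrigin.Begin);

        while (reader.BaseStream.Position + 8 <= reader.BaseStream.Length)
        {
            ushort color0 = reader.ReadUInt16();
            ushort color1 = reader.ReadUInt16();
            uint indices = reader.ReadUInt32();

            if (color0 <= color1) // block is in 3-color + transparent mode
            {
                // Transparency is only actually used if some 2-bit index equals 3 (binary 11).
                for (int i = 0; i < 16; i++)
                {
                    if (((indices >> (i * 2)) & 0x3) == 0x3)
                        return true;
                }
            }
        }
    }
    return false;
}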
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19448",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Enforce Attribute Decoration of Classes/Methods Following on from my recent question on Large, Complex Objects as a Web Service Result. I have been thinking about how I can ensure all future child classes are serializable to XML.
Now, obviously I could implement the IXmlSerializable interface and then chuck a reader/writer to it but I would like to avoid that since it then means I need to instantiate a reader/writer whenever I want to do it, and 99.99% of the time I am going to be working with a string so I may just write my own.
However, to serialize to XML, I am simply decorating the class and its members with the Xml??? attributes ( XmlRoot , XmlElement etc.) and then passing it to the XmlSerializer and a StringWriter to get the string. Which is all good. I intend to put the method to return the string into a generic utility method so I don't need to worry about type etc.
The thing that concerns me is this: If I do not decorate the class(es) with the required attributes, an error is not thrown until run time.
Is there any way to enforce attribute decoration? Can this be done with FxCop? (I have not used FxCop yet)
UPDATE:
Sorry for the delay in getting this closed off, guys - lots to do!
Definitely like the idea of using reflection to do it in a test case rather than resorting to FxCop (like to keep everything together).. Fredrik Kalseth's answer was fantastic, thanks for including the code as it probably would have taken me a bit of digging to figure out how to do it myself!
+1 to the other guys for similar suggestions :)
A: I'd write a unit/integration test that verifies that any class matching some given criteria (ie subclassing X) is decorated appropriately. If you set up your build to run with tests, you can have the build fail when this test fails.
UPDATE: You said, "Looks like I will just have to roll my sleeves up and make sure that the unit tests are collectively maintained" - you don't have to. Just write a general test class that uses reflection to find all classes that needs to be asserted. Something like this:
[TestClass]
public class When_type_inherits_MyObject
{
private readonly List<Type> _types = new List<Type>();
public When_type_inherits_MyObject()
{
// lets find all types that inherit from MyObject, directly or indirectly
foreach(Type type in typeof(MyObject).Assembly.GetTypes())
{
if(type.IsClass && typeof(MyObject).IsAssignableFrom(type))
{
_types.Add(type);
}
}
}
[TestMethod]
public void Properties_have_XmlElement_attribute()
{
foreach(Type type in _types)
{
foreach(PropertyInfo property in type.GetProperties())
{
object[] attribs = property.GetCustomAttributes(typeof(XmlElementAttribute), false);
Assert.IsTrue(attribs.Length > 0, "Missing XmlElementAttribute on property " + property.Name + " in type " + type.FullName);
}
}
}
}
A: You can write unit tests to check for this kind of thing - it basically uses reflection.
Given the fact this is possible I guess it would also be possible to write a FxCop rule, but I've never done such a thing.
A: You can write an FxCop rule or even check for the attributes by calling GetType() in the base class's constructor and reflecting over the returned type.
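As a rough sketch of that constructor idea (the base class name and exception type here are made up for illustration; a real implementation would probably cache the result per type, since the check runs on every construction):
using System;
using System.Reflection;
using System.Xml.Serialization;

public abstract class XmlSerializableBase
{
    protected XmlSerializableBase()
    {
        // GetType() returns the actual derived type, so the check covers every subclass.
        Type actualType = GetType();
        foreach (PropertyInfo property in actualType.GetProperties())
        {
            if (property.GetCustomAttributes(typeof(XmlElementAttribute), false).Length == 0)
            {
                throw new InvalidOperationException(
                    "Missing XmlElementAttribute on " + actualType.FullName + "." + property.Name);
            }
        }
    }
}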
A: A good FXCop rule (and one which I am finding I need right now) would be to check that all objects that are being added to the ASP.NET Session have the Serializable attribute. I'm trying to move from InProc session state to SQL Server. First time I requested a page, my site blew up on me because non-serializable objects were being stored in Session. Then came the task of hunting through all the source code looking for every instance where an object is set in the Session... FXCop would be a nice solution. Something to work on...
A: You can also use this concept/post-processor to enforce relationships between attributes, and use similar logic to enforce relationships between classes and attributes at compile time:
http://www.st.informatik.tu-darmstadt.de/database/publications/data/cepa-mezini-gpce04.pdf?id=92
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19454",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: Which RDBMS should I use? I have developed a high-speed transactional server for transferring data over the internet, so I do not need to rely upon a database implementation like MySQL to provide this. That opens up the question of which SQL version to use?
I really like SQLite, but I am not convinced it is industrial strength yet
What I do like is how lightweight it is on resources.
I loathed MySQL 8 years ago, but now it obviously IS industrial strength and my partners use it, so it is the obvious choice on the server side. If I use it I will just be connecting through "localhost" to the installed server (windows service). My concern is about the memory usage.
I DO NOT load the result set into memory, but I notice about 6Mb for the first connection. I am hoping subsequent connections are not an additional 6MB!
If I use the libmysqld.dll embedded library, then does each new connection load a new instance of the embedded client/server code into memory? We assume so, since each process will have its own in-process memory...
Regardless, the manual states that When using the libmysqld embedded server, the memory benefits are essentially lost when retrieving results row by row because "memory usage incrementally increases with each row retrieved until mysql_free_result() is called."
http://dev.mysql.com/doc/refman/5.1/en/mysql-use-result.html
This means I must use the installed service. But is this as fast as the embedded server?
Are there any other low cost flavors that have high reliability?
A: SQLite is used in more applications than any other DB. (Citation required).
There are some issues with MySQL, such as the fact that it doesn't enforce foreign key integrity constraints.
I'm currently a fan of PostgreSQL, which is also freely available (and, I think if you read the licensing of MySQL, actually turns out to have a more amenable license for commercial use). It seems to be higher performance than SQLite, which probably has more to do with it being run on an SMP machine and making use of different threads. It also seems to be quite solid.
A: Sorry to be pedantic, but the title should really be "Which RDBMS?" - the way it's phrased makes about as much sense as "Which Java?" or "Which Internet?"...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19458",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Click an image, get coordinates I have a standard HTML image tag with an image in it, 100 by 100 pixels in size. I want people to be able to click the image and for that to pass the X and Y that they click into a function.
The coordinates need to be relative to the image top and left.
A: I think you're talking about:
<input id="info" type="image">
When submitted, there are form values for the x and y coordinate based on the input element id (info.x and info.y in this case).
http://www.w3.org/TR/REC-html40/interact/forms.html#h-17.4.1
A: From what you describe, you should register for the image's mouse event; in this case you need the image's mouse button (click) event.
In the handler you should use
Point mousePoint = e.GetPosition( this );
That will give you the mouse position relative to the top-left point, in pixels.
Then, from mousePoint, you can print the X and Y information.
A: Replace the canvas with your image and it will work the same
let img = document.getElementById("canvas");
img.x = img.getBoundingClientRect().left;
img.y = img.getBoundingClientRect().top;
function click(e) {
document.getElementById("output").innerHTML = "X coords: " + (e.clientX - img.x) + "<br> Y coords: " + (e.clientY - img.y);
}
img.addEventListener("click", click);
<!--- Like a image --->
<canvas id="canvas" width="100" height="100"></canvas>
<p id="output"></p>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19461",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Check file permissions How can I check file permissions, without having to run operating system specific command via passthru() or exec()?
A: Real coders use bitwise operations, not strings ;) This is much more elegant way of handling permissions:
function checkPerms($path)
{
clearstatcache(null, $path);
return decoct( fileperms($path) & 0777 );
}
A: Use fileperms() function
clearstatcache();
echo substr(sprintf('%o', fileperms('/etc/passwd')), -4);
A: You can use the is_readable(), is_executable(), etc. functions.
A: Use fileperms() function and substring:
substr(decoct(fileperms(__DIR__)), -4); // 0777
substr(decoct(fileperms(__DIR__)), -3); // 777
For file:
substr(decoct(fileperms(__FILE__)), -4); // 0644
substr(decoct(fileperms(__FILE__)), -3); // 644
Replace __FILE__ and __DIR__ with your path or variable
A: What do you want to do by checking file permissions?
When writing secure code, it's almost always incorrect to "check, then do" anything. The reason is that between the checking whether you can do something and actually doing it, the state of the system could change such that doing it would have a different result.
For example, if you check whether a file exists before writing one, don't check whether you wrote the file successfully (or don't check in a detailed-enough fashion), and then later depend on the contents of the file you wrote, you could actually be reading a file written by an attacker.
So instead of checking file permissions, just do whatever it was you were going to do if the permissions check succeeded, and handle errors gracefully.
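For example (a sketch only - the path and contents are just placeholders), rather than checking permissions first, you might simply attempt the write and react to the result:
<?php
$data = "example contents";
$bytes = file_put_contents('/tmp/example-report.txt', $data);
if ($bytes === false) {
    // The write failed (permissions, missing directory, full disk, ...),
    // so handle it here instead of trying to predict it beforehand.
    error_log('Could not write the report file');
}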
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19466",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: Weird yellow bar pops-up: 'Microsoft Data Access - Remote Data Services When I access my site from any computer, I see this warning popping up:
"This web site wants to run the following add-on: 'Microsoft Data
Access - Remote Data Services Dat...' from 'Microsoft Corporation'. If
you trust the web site and the add-on and want to allow it to run,
click here..."
I am guessing this is some kind of virus or something. I would like to know how to remove this from my site.
A: I'd be very concerned if this is on your own server.
I found the following blog post that warns on the issue: http://msmvps.com/blogs/hostsnews/archive/2007/09/13/can-you-spot-the-fake.aspx but doesn't provide any way of removing it.
I'd recommend making sure both the server and the client are up to date on Windows Updates, and then installing a good virus scanner.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19471",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Is data binding a bad idea? Another discussion (we've been having a lot of them these days!) in our work is whether data binding is a bad idea or not.
Personally, I think it is a Bad Thing™.
My reasons are thrice:
*
*It circumvents my well-architected MVP framework - with databinding, the view communicates bi-directionally with a model. Ewww.
*It promotes hooking up view controls to datafields at design time. In my experience, this leads to vital code (binding column A to Field X) being obscure and hidden away in some designer file. IMO this code should be explicit and in-your-face, so that it is easy to modify and see what is going on, without having to use a clunky designer interface.
*Relating to Point #1 this direct binding makes it harder to isolate each component (view, model, controller/presenter) and unit-test.
The pros are that it is easy to set up, and you can take advantage of some nice features (validation etc) which come with the plumbing already done for you.
But for me, databinding becomes much more of a hindrance when dealing with a large data-centric application.
Any thoughts?
A: As we say in the UK, "It's Horses for courses"
First off all, I agree with you! But...
For enterprise level applications, then spending the extra time on the system architecture, modelling and standards will give you a robust and sustainable system.
But it will take longer to develop (or at least longer to get to an initial release) and this may not be appropriate for every system or every part of the system.
Sometimes you just need to "get it done and done quick". For internal applications, back office systems and maintenance applications that are rarely used or very dynamic (the specs change often), there is little justification in building the Rolls Royce solution. It's better to have the developer spend time on the CRITICAL parts of the system.
What you have to avoid / prevent is using these "one click framework" solutions on the MISSION CRITICAL areas of the system - the high transaction rate areas and anywhere data quality and integrity are critical. Spend quality time shaving the milliseconds off in the most heavily used areas of the system!!
A:
Another discussion (we've been having a lot of them these days!) in our work
is whether data binding is a bad idea or not.
Personally, I think it is a Bad Thing™.
Strong opinion, but imho, you bring out all the wrong reasons.
*
*
It circumvents my well-architected MVP framework - with databinding, the view communicates bi-directionally with a model. Ewww.
I guess it depends on the implementation of the data binding.
In the early years of my programming career, I used to do a lot of VBA for MS Access programming, and Access forms indeed had this direct binding to tables/fields in the database.
Most of the general purpose languages/frameworks have databinding as a separate component, do not use such a direct binding, and the binding component is usually considered an easy generic drop-in for a controller in the MVC pattern sense.
*
It promotes hooking up view controls to datafields at design time. In my experience, this leads to vital code (binding column A to Field X) being obscure and hidden away in some designer file. IMO this code should be explicit and in-your-face, so that it is easy to modify and see what is going on, without having to use a clunky designer interface.
I guess you are talking about the binding in WinForms?
My experience with win forms comes from a long ago, so I might be pretty out of date here.
It sure is a convenience feature, and I would strongly argue against it, unless you are writing really simple modal context CRUD style interfaces.
*
Relating to Point #1 this direct binding makes it harder to isolate each component (view, model, controller/presenter) and unit-test.
Again - assuming the view (a widget in WinForms?) is tied together with databinding awareness, you are right.
But for me, databinding becomes much more of a hindrance when dealing with a large data-centric application.
Quite the contrary - if data binding is implemented as an independent component (eg. bindings in Cocoa or JFace DataBinding, or JGoodies Binding) that acts as a controller between View and Model, taking care of all the nitty-gritty of event handling and conversion and validation, then it is just so much easier to use, change and replace than your custom controller code doing just the same thing.
The only downside of a general purpose data binding framework is that if the binding is off and/or misconfigured, the interactions between bound pieces are just notoriously difficult to debug due to the level of abstraction inside the data binding code... So you'd better not make any mistakes! ;)
A: I've used databinding in some pretty large systems and find that it works pretty well.
Seems that I do things a bit differently from you though ...
... I don't databind to the model, instead to a dedicated view class that works as an adapter between the model's structure and what I need on screen. This includes things like providing choices for comboboxes & listviews, and so on.
... I never set up the binding using the UI. Instead, I have a single method (usually called Bind() or BindXYZ()) that hooks everything up in one place.
My Model remains agnostic, knowing nothing about databinding; my Presenter sticks to the workflow coordination it's designed for; my Views are now also simple classes (easy to test) that encapsulate my UI behavior (is button X enabled, etc.) and the actual UI is relegated to a simple helper on the side.
A: I have had a few unshakable realizations about data binding over the last few years:
*
*The claim that data binding allows the business and presentation to be designed in isolation from each other is actually really quite far from what goes on in reality. Usually the deficiencies in the technologies become readily apparent, and then all you have done is break apart the UI from the UI-specific business, and the resulting separation often becomes far more unwieldy than an all-in-one approach.
*Most data binding engines (HTML / WPF / or whatever) all make assertions on the technical business model, and since the designer is not usually equipped to make said assertions, the developer ends up having to touch the view. Not only that, the view shouldn't be making assertions about the business model---if anything, it should be the other way around.
*Most of the time, the view model / controller / model / view are all "coupled" and then all you have really done is "move code around" rather than just simply using code behind. With that said, I do find the most pragmatic approach is often to just use data binding sparingly with code behind and forget about MVVM/MVC esque patterns.
*Developers often put view-level concerns on the view model and then start to use data binding as a crutch rather than a proper approach. For example, I have seen so many view models controlling visibility of UI elements.
*Admittedly, data binding is useful for "small systems". I have observed that the performance, complexity and maintainability dramatically suffer as an application grows in richness.
*Memory usage techniques with data binding can often become a real hazard. WPF for example uses a LOT of trickery to avoid issues and often developers can still shoot themselves in the foot. Unless you are using something like Sencha for HTML (I think), you will find your memory foot print on your applications start to suffer even with a modest amount of data.
*I have found that data binding / UI patterns in general sometimes tend to break down a little when dealing with hierarchical and situational data / presentation.
My personal outlook on data binding is that it is a tool that can be easily abused yet has some compelling uses. You can say the same for any technique, pattern, or guideline. Like anything, too much of something tends to become a problem. I tend to like to try and use the most pragmatic approach for the situation. Prefer consistency when it is pragmatic to do so, but consistently be pragmatic. In other words, you don't have to go down the path of developing for two years and only then come to the conclusion that the code base has become a grotesque smelly mammoth in a china shop full of orphan kittens.
...
A: @Point 1: Isn't the data binding engine the controller, if you really want to think in patterns? You just do not program it yourself, which is the whole point of using data binding in the first place.
A: No. DataBinding when used correctly is a Good Thing™.
*
*No; but see #2 and #3. Make the Presenter expose the properties/well-defined sources to bind. Do not expose the Model. Nothing is circumvented.
*I agree. I do not use any of the standard ASP.NET data-sources. Instead I use GenericDataSourceControl which is wired to a "select method" that returns well-defined types. The DataSource consumers in the View only knows of these Presenter-types; nothing more.
*No. Relating to #1. The Presenter exposes the properties/well-defined sources to bind. These can be tested without the view for correctness (unit tests), and with the view for correctness of application (integration tests).
(My Experience is using ASP.NET WebForms, which may differ from other data-binding scenarios.)
A: @Timbo:
Yes and no.... but from a TDD perspective I'd like to cordon-off each controller so that I can test it in isolation. Also, say we want to run each edit via an EditCommand (so that we support Undo, for example) - for me, this rules out databinding.
@Guy:
Yes, this is exactly my POV. For me, databinding is great for very simple apps, but we don't do any of those!
A: I feel that in many frameworks, data binding is just an excuse to do things the easy way. It often results, as does almost any designer-generated code, in too much code which is too complicated and can't be easily tweaked. I've never come across a task I couldn't do just as well (if not better) and, in most cases, just as quickly, by data binding as by writing the code myself.
A: I have used databinding on large enterprise systems in conjunction with a framework. In my case it was CSLA.
It worked so well, and was extremely fast to get the view working. CSLA has lots of support for databinding and validation built in, though.
If it breaks the MVP pattern, so what, if it works better and faster and is easier to manage? However, I would argue that it doesn't break the pattern at all... You can hook up databinding in the presenter, as it has a reference to the view and also to the model.
for example this is what you would put in your presenter and it would populate the list box or whatever control you want.
myView.list.datasource = myModel.myCollection;
*
*Also I would like to point out that databinding shouldn't be taken as an all-or-nothing approach. Many times I use databinding when I have a simple and easy UI requirement to map to my object model. However, when there is special functionality needed, I might put some code in the presenter to build up the view as I need it rather than using databinding.
Alan
A: I quite agree with you, data binding has drawbacks...
In our application, if not used carefully, it sometimes leads us to bad data consistency...
But there may be some elegant ways to work with databinding on large forms?
Please give me your opinion here:
How to use a binding framework efficiently
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19481",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: What's the best way to go from a Photoshop mockup to semantic HTML and CSS? I generally use a manual process:
*
*Look at the page, figure out the semantic elements, and build the HTML
*Slice up the images I think I'll need
*Start writing CSS
*Tweak and repeat different steps as necessary
Got a better approach, or a tool?
A: Well, when I build a website I tend to try and forget about the design completely while writing the HTML. I do this so I won't end up with any design-specific markup and so I can focus on the semantic meaning of the elements.
Some pointers how to markup things:
*
*menu - use the UL (unordered list) element, since that's exactly what a menu is: an unordered list of choices. Example:
<ul id="menu">
<li id="home"><a href="/" title="Go to Homepage">Home</a></li>
<li id="about"><a href="/about" title="More about us">About</a></li>
</ul>
If you'd like a horizontal menu you could do this:
#menu li {
display: block;
float: left;
}
*Logo - use an H1 (heading) element for the logo instead of an image. Example:
<div id="header">
<h1>My website</h1>
</div>
And the CSS (same technique can be applied to the menu above if you would like a menu with graphical items):
#header h1 {
display: block;
text-indent: -9999em;
width: 200px;
height: 100px;
background: transparent url(images/logo.png) no-repeat;
}
*IDs and classes - use IDs to identify elements that you only have one instance of. Use class for identifying elements that you got several instances of.
*Use a textual browser (for instance, lynx). If it makes sense to navigate in this way, you've done well when it comes to accessibility.
I hope this helps :)
A: I essentially do the same thing Jon does, but here are a few other ideas:
*
*Use Guides in Photoshop (and lock to them). Figure out all of your dimensions for each box/ region ahead of time.
*Collect all of your dimensions and color hex values into an info file (I use a txt file) that you can easily reference. This will reduce your alt-tab tax and save you from selecting colors in Photoshop multiple times.
*After all my Guides are in place, I slice out the entire website into my images folder, starting with photos and grouped elements, and ending with the various background tiles/images, should they exist. (Tip: Use ctrl-click on the layer preview to select that layer's content).
Notes on using Photoshop:
*
*Use Guides or the Grid.
*Use the Notes feature for any pertinent information
*Always use Layer Groups for similar elements. We need to be able to turn entire regions off in one click. Put all 'header' content in one Layer Group.
*Always name your layers.
*You can put each page template in one PSD file and use nested Layer Groups to organize them. This way we don't have to setup all of our guides and notes for each page template on a site.
A: I have a fairly natural way of coding. The key is to treat the page like a document or an article. If you think of it like this the following becomes logically clear:
*
*The page title is a top level heading
*
*Whether you make the site title or actual page title the h1 is up to you - personally I'd make About Us the h1 rather than Stack Overflow.
*The navigation is a table of contents, and thus an ordered list - you may as well use an ol over a ul.
*Section headers are h2, sections within those sections are h3s etc. Stack them up.
*Use blockquotes and quotes where possible. Don't just surround quoted text with " characters.
*Don't use b and i. Use strong and em. This is because HTML is structural rather than presentational markup. Strong and emphasis tags should be used where you'd put emphasis on the word.
*<label> your form elements.
*Use <acronym>s and <abbr>s where possible, but only in the first instance.
*The easiest: always, always give your images some alternate text.
*There's lots of HTML tags you could use that you probably haven't - address for postal addresses, screen code output. Have a look at HTML Dog for some, it's my favourite reference.
That's just a few pointers, I'm sure I could think of more.
Oh, and if you want a challenge write your XHTML first, then write the CSS. When CSS-ing you aren't allowed to touch the HTML. It's actually harder than you think (but I've found it's made me quicker).
A: No shortcuts :) but everybody works slightly differently.
This tutorial that popped up in my feedreader yesterday shows the process from start to finish and might help people who have never done it before but as you are an old hand it's just about streamlining your own methods.
EDIT:
The listapart link certainly is more automated for 'flat' designs, where both ImageReady and Fireworks have had pretty good support from day one, and it's got better and more semantic with every release. But if you have a more complex design, it's the twiddly bits that make the design what it is, and these have to be done by hand.
A: I just thought it was worth pointing out that in addition to the excellent advice you've had so far I'd recommend getting a printed version of the design, using a red pen to mark up all the block elements on the design you think you can spot and sitting down with the designer for half an hour and talking through how they envisioned their design working for the use cases that don't fit the static design.
*
*What happens when more text is put in the navigation?
*Is this width fixed or fluid?
*Is this content pane to the right fixed height or fluid? If it's fluid why did you put a background on it that can't be repeated?
*You have a border extending down the page that breaks two otherwise connected elements. Visually it makes sense, but semantically I can't just use an li to house both those elements. What do you think is more important?
It'll also help you spot potential problems that you might otherwise not have realised were going to be issues until you're elbow deep in CSS.
Not only does it make your job easier after a few times doing it your designer will get a much stronger sense of what is involved in marking up their work - some designers have real trouble comprehending why something they think looks visually very simple will take a few days of css tweaking to make work.
A: Some of the designers I know use Illustrator to make the design elements.
A: This page shows how to do it in a slightly more automated way.
A: Also, get to know the "Layer Comps" feature. I use this for changing button states.
*
*Create layer comps for normal, hover, and active.
*In each of these, set up the effects/color overlays and visible layers which belong with that state.
*Save for web: go to a different folder for each state, unless it's easier to rename each slice (otherwise your hover button slices will overwrite your regular slices).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19487",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
} |
Q: mod_rewrite rule to redirect all requests except for one specific path I'm trying to redirect all requests to my domain to another domain using mod_rewrite in an Apache 2.2 VirtualHost declaration. There is one exception to this -- I'd like all requests to the /audio path not to be redirected.
I've written a RewriteCond and RewriteRule to do this but it's not quite right and I can't figure out why. The regular expression contains a negative lookahead for the string "/audio", but for some reason this isn't matching. Here's the definition:
RewriteEngine on
RewriteCond %{HTTP_HOST} ^(.*\.)?mydomain\.net(?!/audio) [NC]
RewriteRule ^(.*)$ http://www.newdomain.example [L,R=301]
If I change the RewriteCond to:
RewriteCond %{HTTP_HOST} ^(.*\.)?mydomain\.example/(?!audio) [NC]
(i.e. put the forward slash outside of the negative lookahead part) then it works, but the downside of this is that requests to mydomain.example without a trailing slash will not be redirected.
Can anyone point out what I'm doing wrong?
Here are the rules:
<VirtualHost *:80>
ServerAdmin [email protected]
DocumentRoot "/var/www/mydomain.example/htdocs"
ServerName www.mydomain.example
ServerAlias mydomain.example
RewriteEngine on
RewriteCond {REQUEST_URI} !^/audio
RewriteRule ^(.*)$ http://www.newdomain.example [L,R=301]
RewriteLog logs/mod_rewrite_log
RewriteLogLevel 3
ErrorLog logs/error_log
CustomLog logs/access_log common
</VirtualHost>
Thanks @mercutio -- that makes perfect sense but it still doesn't seem to work.
Here's what the mod_rewrite log says when I make a request to http://mydomain.example/audio/something.mp3:
(2) init rewrite engine with requested uri /audio/something.mp3
(3) applying pattern '^(.*)$' to uri '/audio'
(2) rewrite '/audio' -> 'http://www.newdomain.example/'
(2) explicitly forcing redirect with http://www.newdomain.example
(1) escaping http://www.newdomain.example for redirect
(1) redirect to http://www.newdomain.example [REDIRECT/301]
Since the REQUEST_URI does start with /audio I would expect the RewriteRule to be ignored.
A: The HTTP_HOST only contains the host name, not the path of the URL requested.
RewriteCond %{REQUEST_URI} !^/audio
Should be all you need.
Further, you can get debug info from the rewrite engine with the following, which is really useful to see how your conditions and rules are being matched:
RewriteLog /path/to/log/file
RewriteLogLevel 3
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19493",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "40"
} |
Q: SQL2005: Linking a table to multiple tables and retaining Ref Integrity? Here is a simplification of my database:
Table: Property
Fields: ID, Address
Table: Quote
Fields: ID, PropertyID, BespokeQuoteFields...
Table: Job
Fields: ID, PropertyID, BespokeJobFields...
Then we have other tables that relate to the Quote and Job tables individually.
I now need to add a Message table where users can record telephone messages left by customers regarding Jobs and Quotes.
I could create two identical tables (QuoteMessage and JobMessage), but this violates the DRY principle and seems messy.
I could create one Message table:
Table: Message
Fields: ID, RelationID, RelationType, OtherFields...
But this stops me from using constraints to enforce my referential integrity. I can also foresee it creating problems on the development side using Linq to SQL later on.
Is there an elegant solution to this problem, or am I ultimately going to have to hack something together?
Burns
A: Create one Message table, containing a unique MessageId and the various properties you need to store for a message.
Table: Message
Fields: Id, TimeReceived, MessageDetails, WhateverElse...
Create two link tables - QuoteMessage and JobMessage. These will just contain two fields each, foreign keys to the Quote/Job and the Message.
Table: QuoteMessage
Fields: QuoteId, MessageId
Table: JobMessage
Fields: JobId, MessageId
In this way you have defined the data properties of a Message in one place only (making it easy to extend, and to query across all messages), but you also have the referential integrity linking Quotes and Jobs to any number of messages. Indeed, both a Quote and Job could be linked to the same message (I'm not sure if that is appropriate to your business model, but at least the data model gives you the option).
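In T-SQL that layout might look roughly like this (a sketch only - column types are illustrative, and the Quote/Job tables are assumed to exist with the ID columns described in the question):
CREATE TABLE Message (
    Id INT IDENTITY PRIMARY KEY,
    TimeReceived DATETIME NOT NULL,
    MessageDetails NVARCHAR(MAX) NOT NULL
);

CREATE TABLE QuoteMessage (
    QuoteId INT NOT NULL REFERENCES Quote(ID),
    MessageId INT NOT NULL REFERENCES Message(Id),
    PRIMARY KEY (QuoteId, MessageId)
);

CREATE TABLE JobMessage (
    JobId INT NOT NULL REFERENCES Job(ID),
    MessageId INT NOT NULL REFERENCES Message(Id),
    PRIMARY KEY (JobId, MessageId)
);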
A: About the only other way I can think of is to have a base Message table, with both an Id and a TypeId. Your subtables (QuoteMessage and JobMessage) then reference the base table on both MessageId and TypeId - but also have CHECK CONSTRAINTS on them to enforce only the appropriate MessageTypeId.
Table: Message
Fields: Id, MessageTypeId, Text, ...
Primary Key: Id, MessageTypeId
Unique: Id
Table: MessageType
Fields: Id, Name
Values: 1, "Quote" : 2, "Job"
Table: QuoteMessage
Fields: Id, MessageId, MessageTypeId, QuoteId
Constraints: MessageTypeId = 1
References: (MessageId, MessageTypeId) = (Message.Id, Message.MessageTypeId)
QuoteId = Quote.QuoteId
Table: JobMessage
Fields: Id, MessageId, MessageTypeId, JobId
Constraints: MessageTypeId = 2
References: (MessageId, MessageTypeId) = (Message.Id, Message.MessageTypeId)
JobId = Job.JobId
What does this buy you, as compared to just a JobMessage and QuoteMessage table? It elevates a Message to a first-class citizen, so that you can read all Messages from a single table. In exchange, your query path from a Message to its relevant Quote or Job is 1 more join away. It kind of depends on your app flow whether that's a good tradeoff or not.
As for 2 identical tables violating DRY - I wouldn't get hung up on that. In DB design, it's less about DRY, and more about normalization. If the 2 things you're modeling have the same attributes (columns), but are actually different things (tables) - then it's reasonable to have multiple tables with similar schemas. Much better than the reverse of munging different things together.
A: @burns
Ian's answer (+1) is correct [see note]. Using a many-to-many table QUOTEMESSAGE to join QUOTE to MESSAGE is the most correct model, but will leave orphaned MESSAGE records.
This is one of those rare cases where a trigger can be used. However, caution needs to be applied to ensure that a single MESSAGE record cannot be associated with both a QUOTE and a JOB.
create trigger quotemessage_trg
on quotemessage
for delete
as
begin
delete
from [message]
where [message].[msg_id] in
(select [msg_id] from Deleted);
end
Note to Ian, I think there is a typo in the table definition for JobMessage, where the columns should be JobId, MessageId (?). I would edit your quote but it might take me a few years to gain that level of reputation!
A: Why not just have both QuoteId and JobId fields in the message table? Or does a message have to be regarding either a quote or a job and not both?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19516",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Is anybody using the Specter BDD Framework? I was reading the example chapter from the book by Ayende and on the website of the Boo language I saw a reference to the Specter BDD Framework.
I am wondering if anybody is using it in their project, how that works out and if there are more examples and/or suggested readings.
Just in case you are wondering, I'm a C# developer and so I plan to use it in a C#/.NET environment.
A few years later, revisiting this question: I think we can safely assume SpecFlow and some others like NSpec became the tools we are using.
A: I'm not using it, but I've seen demos of it. It's very nice.
Boo has a lot of interesting extensibility points in parsing and interpreting the language itself that make it ideal for writing frameworks like Specter. The end result is much nicer looking than you'd be able to get with languages like C#.
Unfortunately, the fact that Boo isn't "in the box" and can't simply be something you check into your source tree and use really holds it back here. It's a much heavier adoption cost than just picking a framework like NSpec.
A: I have used it a little, I'm starting a new project right now and I plan on using specter. I'm really enjoying it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Do you use virtualized desktops for legacy/seldom used applications? I wondered if anyone uses virtualized desktop PCs (running WinXP Pro or older) to have some old applications that are seldom used available for some ongoing tasks.
Say you have a really old project that every once in a while needs a document update in a database system or something like that. The database application is running on a virtualized desktop that is only started when needed.
I think we could save energy, hardware and space if we would virtualize some of those old boxes. Any setups in your company?
edit Licensing could be of concern, but I guess you have a valid license for the old desktop box. Maybe the license isn't valid in a VM environment; I'd definitely check that beforehand.
Sure enough, if the application is performance critical, virtualization could hurt. But I'm thinking about some kind of outdated application that is still used to perform, say, a calculation every 12 weeks for a certain customer/service.
A: I use virtualized desktops for:
*
*Support that requires VPN software I do not want on my own desktop. This also lets a whole team share the support computer for a specific customer.
*A legacy system which we use several different versions of (depending on customer's version) and they're not really compatible so its good to have a virtualized desktop for each version.
A: We use virtualisation to test on a variety of Operating Systems - the server application runs under linux, and we have a production (real) server, and a couple of test servers, which are all VMs.
The client runs under Windows, which, being an OS X user I have to run in a VM, and the other developer I work with runs an XP VM on his 8-core Vista box.
(I also have a seperate VM for running CAD software, but that's not really programming)
A: It depends on the requirements of the legacy systems. Very often, if a system is reliant on a certain clock frequency, then it is better and more reliable to keep the older OS systems running natively, as virtualized OSes can do funny things to performance.
If the legacy systems aren't that critical, then go for it! One piece of advice I would give is to ensure that the system works FULLY before chucking out your old 3.11 systems, as I have been stung before! Performing the testing fully can cost more money than you might save, but it's up to whoever makes the decisions to ensure that is considered.
A: We use virtualisation for testing out applications on Vista. Or rather customers do the testing and we use virtualisation to reproduce the bugs they complain about.
I guess the thing that would stop me from using lots of virtual instances of my favourite proprietary OS would be licencing. I presume Microsoft would want me to have a licence for every installation, virtual or otherwise?
A: We use VMWare with a virtual windows XP here at work to run some old development tools with very expensive licenses that don't run at all on Vista. So VMWare saved us about $5000 in licenses.
A: Since my last machine upgrade I have been running virtualised OS's for a number of tasks. For example I use a different set of Visual Studio plugins for managed and c++ unmanaged development. Some things I found:
*
*Run your vmware setup on a machine with plenty of resources. I'll repeat...plenty of resources! A fast quad and 8GB of memory is what my current machine is running and it runs sweet (warning you need a 64bit OS for the 8GB!).
*I wouldn't worry about app performance if your current physical hardware is old (2+ years). With a decent machine I find the virtualized apps run faster than on the legacy hardware!
*When upgrading to a new workstation, p2v your old workstation. No need to worry about synergy or a KVM in the transition period any more!
A: I've used virtualisation so I could take my development environment around with me while travelling. As long as I could install MS Virtual PC, (and the PC/laptop had generous enough RAM) then I could access all my tools, VPN, Remote desktop links, SQL databases etc...
Worked fairly well, just a little slower than I like. I could have carted a laptop around, but found a small portable harddrive to be lighter/easier and just as effective.
However, consulting for several clients - all with different VPN requirements/passwords/databases/versions of frameworks & tools etc, I've found that having a Virtualised support environment for each is well worth it. Then multiple users have access to what is needed when supporting each client - they just need to either remote desktop (or run directly) the virtualised instance.
A: I've used VMs to handle work-related tasks that I didn't want / couldn't do on the company-issued laptop. Specifically, I needed to have several editions of the JRE running at the same time, which Java doesn't really like.
To get around this, I built several VMs that each ran the one tool I needed in trimmed-down XP instances.
Another thing to consider is that if you have a 5-yr-old server running some app, it's probably going to run just fine on a VM on new hardware. So, if you have a rack of old devices, buying one or two "real" servers, installing something like ESX (I'm most familiar with that tool, though Xen and others exist), then use a physical-to-virtual conversion tool to get those old devices switched to VMs so you can reduce your electricity consumption, management headaches, and worries about a critical device failing and not being able to find hardware for it.
A: We use VM for legacy apps, and have retired old machines that served up those apps. It eliminated the concern of matching drivers from NT to Win2k3. From a disaster recovery perspective this also helped as we couldn't find boxes to support the old apps at the DR data center.
A: The likes of VMWare are invaluable tools for browser testing of web applications. You can pretty easily test many combinations of OS and browser without having rank upon rank of physical machines running that software.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19519",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Enforcing required function call I have a "Status" class in C#, used like this:
Status MyFunction()
{
if(...) // something bad
return new Status(false, "Something went wrong")
else
return new Status(true, "OK");
}
You get the idea.
All callers of MyFunction should check the returned Status:
Status myStatus = MyFunction();
if ( ! myStatus.IsOK() )
// handle it, show a message,...
Lazy callers however can ignore the Status.
MyFunction(); // call function and ignore returned Status
or
{
Status myStatus = MyFunction();
} // lose all references to myStatus, without calling IsOK() on it
Is it possible to make this impossible? E.g. by throwing an exception?
In general: is it possible to write a C# class on which you have to call a certain function?
In the C++ version of the Status class, I can write a test on some private bool bIsChecked in the destructor and ring some bells when someone doesn't check this instance.
What is the equivalent option in C#?
I read somewhere that "You don't want a destructor in your C# class"
Is the Dispose method of the IDisposable interface an option?
In this case there are no unmanaged resources to free.
Additionally, it is not determined when the GC will dispose the object.
When it eventually gets disposed, is it still possible to know where and when you ignored that specific Status instance?
The "using" keyword does help, but again, it is not required for lazy callers.
A: If you really want to require the user to retrieve the result of MyFunction, you might want to void it instead and use an out or ref variable, e.g.,
void MyFunction(out Status status)
{
}
It might look ugly but at least it ensures that a variable is passed into the function that will pick up the result you need it to pick up.
@Ian,
The problem with exceptions is that if it's something that happens a little too often, you might be spending too many system resources on the exceptions. An exception really should be used for exceptional errors, not totally expected messages.
A: Even System.Net.WebRequest throws an exception when the returned HTTP status code is an error code. The typical way to handle it is to wrap a try/catch around it. You can still ignore the status code in the catch block.
You could, however, have a parameter of Action<Status> so that the caller is forced to pass a callback function that accepts a status, and then check to see if they called it.
void MyFunction(Action<Status> callback)
{
    bool errorHappened = false;
    if (somethingBadHappened) errorHappened = true;

    Status status = errorHappened
        ? new Status(false, "Something went wrong")
        : new Status(true, "OK");

    callback(status);

    if (!status.IsOkWasCalled)
        throw new Exception("Please call IsOK() on Status");
}

MyFunction(status => { if (!status.IsOK()) onerror(); });
If you're worried about them calling IsOK() without doing anything, use Expression<Func<Status, bool>> instead, and then you can analyse the lambda to see what they do with the status:
void MyFunction(Expression<Func<Status, bool>> callback)
{
    if (!visitCallbackExpressionTreeAndCheckForIsOKHandlingPattern(callback))
        throw new Exception("Please handle any error statuses in your callback");

    bool errorHappened = false;
    if (somethingBadHappened) errorHappened = true;

    Status status = errorHappened
        ? new Status(false, "Something went wrong")
        : new Status(true, "OK");

    callback.Compile()(status);
}
MyFunction(status => status.IsOK() ? true : onerror());
Or forego the status class altogether and make them pass in one delegate for success and another one for an error:
void MyFunction(Action success, Action error)
{
    if (somethingBadHappened) error(); else success();
}

MyFunction(() => { }, () => handleError());
A: I am fairly certain you can't get the effect you want as a return value from a method. C# just can't do some of the things C++ can. However, a somewhat ugly way to get a similar effect is the following:
using System;
public class Example
{
public class Toy
{
private bool inCupboard = false;
public void Play() { Console.WriteLine("Playing."); }
public void PutAway() { inCupboard = true; }
public bool IsInCupboard { get { return inCupboard; } }
}
public delegate void ToyUseCallback(Toy toy);
public class Parent
{
public static void RequestToy(ToyUseCallback callback)
{
Toy toy = new Toy();
callback(toy);
if (!toy.IsInCupboard)
{
throw new Exception("You didn't put your toy in the cupboard!");
}
}
}
public class Child
{
public static void Play()
{
Parent.RequestToy(delegate(Toy toy)
{
toy.Play();
// Oops! Forgot to put the toy away!
});
}
}
public static void Main()
{
Child.Play();
Console.ReadLine();
}
}
In the very simple example, you get an instance of Toy by calling Parent.RequestToy, and passing it a delegate. Instead of returning the toy, the method immediately calls the delegate with the toy, which must call PutAway before it returns, or the RequestToy method will throw an exception. I make no claims as to the wisdom of using this technique -- indeed in all "something went wrong" examples an exception is almost certainly a better bet -- but I think it comes about as close as you can get to your original request.
A: I know this doesn't answer your question directly, but if "something went wrong" within your function (unexpected circumstances) I think you should be throwing an exception rather than using status return codes.
Then leave it up to the caller to catch and handle this exception if it can, or allow it to propogate if the caller is unable to handle the situation.
The exception thrown could be of a custom type if this is appropriate.
For expected alternative results, I agree with @Jon Limjap's suggestion. I'm fond of a bool return type and prefixing the method name with "Try", a la:
bool TryMyFunction(out Status status)
{
}
A: Using Status as a return value reminds me of the "old days" of C programming, when you returned an integer below 0 if something didn't work.
Wouldn't it be better if you threw an exception when (as you put it) something went wrong? If some "lazy code" doesn't catch your exception, you'll know for sure.
A: Instead of forcing someone to check the status, I think you should assume the programmer is aware of the risks of not doing so and has a reason for taking that course of action. You don't know how the function is going to be used in the future, and placing a limitation like that only restricts the possibilities.
A: You can throw an exception by:
throw new MyException();
[global::System.Serializable]
public class MyException : Exception
{
//
// For guidelines regarding the creation of new exception types, see
// http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpgenref/html/cpconerrorraisinghandlingguidelines.asp
// and
// http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dncscol/html/csharp07192001.asp
//
public MyException () { }
public MyException ( string message ) : base( message ) { }
public MyException ( string message, Exception inner ) : base( message, inner ) { }
protected MyException (
System.Runtime.Serialization.SerializationInfo info,
System.Runtime.Serialization.StreamingContext context )
: base( info, context ) { }
}
The above exception is fully customizable to your requirements.
One thing I would say is this: I would leave it to the caller to check the return code; it is their responsibility, and you just provide the means and interface. Also, it is a lot more efficient to use return codes and check the status with an if statement rather than throwing exceptions. If it really is an exceptional circumstance, then by all means throw away... but if, say, you failed to open a device, then it might be more prudent to stick with the return code.
A: It sure would be nice to have the compiler check that, rather than relying on an expression. :/
Don't see any way to do that though...
A: GCC has a warn_unused_result attribute which is ideal for this sort of thing. Perhaps the Microsoft compilers have something similar.
A: @Paul you could do it at compile time with Extensible C#.
A: One pattern which may sometimes be helpful if the object to which code issues requests will only be used by a single thread(*) is to have the object keep an error state, and say that if an operation fails the object will be unusable until the error state is reset (future requests should fail immediately, preferably by throwing an immediate exception which includes information about both the previous failure and the new request). In cases where calling code happens to anticipate a problem, this may allow the calling code to handle the problem more cleanly than if an exception were thrown; problems which are not ignored by the calling code will generally end up triggering an exception pretty soon after they occur.
(*) If a resource will be accessed by multiple threads, create a wrapper object for each thread, and have each thread's requests go through its own wrapper.
This pattern is usable even in contexts where exceptions aren't, and may sometimes be very practical in such cases. In general, however, some variation of the try/do pattern is usually better. Have methods throw exception on failure unless the caller explicitly indicates (by using a TryXX method) that failures are expected. If callers say failures are expected but don't handle them, that's their problem. One could combine the try/do with a second layer of protection using the scheme above, but I'm not sure whether it would be worth the cost.
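A rough C# sketch of the error-state idea from the first paragraph (the class and member names are made up, and the actual work is elided): the object latches its first failure and fails fast on every later request until the caller explicitly clears it.
// Illustrative only: a single-threaded wrapper that latches its first failure.
public class FaultLatchingChannel
{
    private Exception fault;            // null means the object is still usable

    public void Send(string message)
    {
        if (fault != null)
        {
            // Report both the original failure and the new, rejected request.
            throw new InvalidOperationException(
                "Channel is faulted; cannot send \"" + message + "\".", fault);
        }
        try
        {
            // ... the real send would happen here ...
        }
        catch (Exception ex)
        {
            fault = ex;                 // latch the error state
            throw;
        }
    }

    public Exception Fault { get { return fault; } }
    public void ClearFault() { fault = null; }
}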
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19553",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Loading System.ServiceModel configuration section using ConfigurationManager Using C# .NET 3.5 and WCF, I'm trying to write out some of the WCF configuration in a client application (the name of the server the client is connecting to).
The obvious way is to use ConfigurationManager to load the configuration section and write out the data I need.
var serviceModelSection = ConfigurationManager.GetSection("system.serviceModel");
Appears to always return null.
var serviceModelSection = ConfigurationManager.GetSection("appSettings");
Works perfectly.
The configuration section is present in the App.config, but for some reason ConfigurationManager refuses to load the system.serviceModel section.
I want to avoid manually loading the xxx.exe.config file and using XPath but if I have to resort to that I will. Just seems like a bit of a hack.
Any suggestions?
A: Thanks to the other posters this is the function I developed to get the URI of a named endpoint. It also creates a listing of the endpoints in use and which actual config file was being used when debugging:
Private Function GetEndpointAddress(name As String) As String
Debug.Print("--- GetEndpointAddress ---")
Dim address As String = "Unknown"
Dim appConfig As Configuration = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None)
Debug.Print("app.config: " & appConfig.FilePath)
Dim serviceModel As ServiceModelSectionGroup = ServiceModelSectionGroup.GetSectionGroup(appConfig)
Dim bindings As BindingsSection = serviceModel.Bindings
Dim endpoints As ChannelEndpointElementCollection = serviceModel.Client.Endpoints
For i As Integer = 0 To endpoints.Count - 1
Dim endpoint As ChannelEndpointElement = endpoints(i)
Debug.Print("Endpoint: " & endpoint.Name & " - " & endpoint.Address.ToString)
If endpoint.Name = name Then
address = endpoint.Address.ToString
End If
Next
Debug.Print("--- GetEndpointAddress ---")
Return address
End Function
A: http://mostlytech.blogspot.com/2007/11/programmatically-enumerate-wcf.html
// Automagically find all client endpoints defined in app.config
ClientSection clientSection =
ConfigurationManager.GetSection("system.serviceModel/client") as ClientSection;
ChannelEndpointElementCollection endpointCollection =
clientSection.ElementInformation.Properties[string.Empty].Value as ChannelEndpointElementCollection;
List<string> endpointNames = new List<string>();
foreach (ChannelEndpointElement endpointElement in endpointCollection)
{
endpointNames.Add(endpointElement.Name);
}
// use endpointNames somehow ...
Appears to work well.
A: The <system.serviceModel> element is for a configuration section group, not a section. You'll need to use System.ServiceModel.Configuration.ServiceModelSectionGroup.GetSectionGroup() to get the whole group.
A: This is what I was looking for thanks to @marxidad for the pointer.
public static string GetServerName()
{
string serverName = "Unknown";
Configuration appConfig = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
ServiceModelSectionGroup serviceModel = ServiceModelSectionGroup.GetSectionGroup(appConfig);
BindingsSection bindings = serviceModel.Bindings;
ChannelEndpointElementCollection endpoints = serviceModel.Client.Endpoints;
for(int i=0; i<endpoints.Count; i++)
{
ChannelEndpointElement endpointElement = endpoints[i];
if (endpointElement.Contract == "MyContractName")
{
serverName = endpointElement.Address.Host;
}
}
return serverName;
}
A: GetSectionGroup() cannot be called with no parameters (under .NET Framework 3.5).
Instead use:
Configuration config = System.Configuration.ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
ServiceModelSectionGroup group = System.ServiceModel.Configuration.ServiceModelSectionGroup.GetSectionGroup(config);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19589",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "66"
} |
Q: Best TinyMce editor Image Manager / File upload for Asp.net Mvc What is the best Image Manager to integrate into the TinyMCE editor, apart from the official commercial ones from Moxiecode?
I'm looking to integrate a light text editor in an ASP.NET MVC application and I chose the TinyMCE solution (and not the classic FCKEditor, as this seems more lightweight and more jQuery friendly).
Sadly TinyMCE doesn't come with the Image Manager or Document Manager integrated like FCKeditor; you must buy them as plugins from Moxiecode.
I've looked at other plugins, but so far I've not found any decent and lightweight solution that works with the ASP.NET MVC framework.
Any suggestions?
A: There are a couple of open source plugins on SourceForge,
http://sourceforge.net/tracker/?group_id=103281&atid=738747
(search for image)
The plugin architecture is easy to understand if you know Javascript.
If you have the time you could roll out your own.
A: Ajax File Manager http://filemanager.3ntar.net/
free and cooool
A: This is an integration of TinyMCE with the FCKEditor File Upload Manager in ASP.NET MVC 3; you should give it a try: http://tinymcefckfilemanger.codeplex.com/
A: http://www.ilyax.com/imagebrowser/ free and best :)
A: You can try: http://tinymcefckfilemanger.codeplex.com/
However, you must do some customization to make it work!
:)
A: I think this is the best solution
http://www.andyarndt.net/TinyFileManager.aspx#sthash.4MgLV1Oi.dpbs
A: Carlton : Alfresco seems to be a Java based solution.
Ta: I've looked into the plugin folders but none was really good for asp.net mvc.
What I'm now testing is a mix of TinyMCE with the image uploader of FCKEditor:
this is the PHP version, but I think it is pretty easy to convert to .NET: [TinyFCK][1]
[1]: http://p4a2.crealabsfoundation.org/tinyfck
A: I just started a project on codeplex that integrates nicely with ASP.NET MVC 2. Let me know if anyone wants to help out... I'm looking to integrate cropping (via JCrop) and resizing soon.
http://aspnetadvimage.codeplex.com/
You can download the sample project on the "Source Code" tab.
A: This one works for asp.net mvc
http://aspnetadvimage.codeplex.com/SourceControl/list/changesets
A: Old question. However, this might be helpful to someone.
http://www.andyarndt.net/TinyFileManager.aspx is a .NET web application. It works fine with WebForms as well. You can do a bit of customization to get it working with MVC too.
Edit:
You can refer to the sample application provided in the TinyFileManager.NET repository on GitHub to see how to configure it, and refer to the documentation mentioned on the page above.
Custom CSS to avoid some conflicts with Bootstrap ver. 3.x.x:
div.mce-fullscreen
{
z-index: 1030;
}
div.mce-edit-area
{
border-width: 1px !important;
border-left-width: 0 !important;
border-bottom-width: 0 !important;
}
.mce-combobox .mce-btn
{
width: 44px !important;
height: auto !important;
}
.mce-combobox .mce-btn button
{
padding-right: 0;
padding-left: 0;
}
ASP.net Control:
<asp:TextBox ID="txtAnnouncements" runat="server" TextMode="MultiLine" AutoComplete="off"
CssClass="form-control elm1"></asp:TextBox>
TinyMCE Javascript:
tfm_path = '/fileman';
tinymce.init({
// document_base_url: "http://localhost:58841/",
// relative_urls: true,
selector: "textarea.elm1",
mode: "specific_textareas",
editor_selector: "tinymce",
theme: "modern",
// width: 300,
height: 300,
plugins: [
"advlist autolink lists link image charmap print preview hr anchor pagebreak",
"searchreplace wordcount visualblocks visualchars code fullscreen",
"insertdatetime media nonbreaking save table contextmenu directionality",
"emoticons template paste textcolor "
],
// content_css: "css/content.css",
toolbar1: "insertfile undo redo | styleselect | bold italic | alignleft aligncenter alignright alignjustify | forecolor backcolor emoticons | bullist numlist outdent indent | link image | print preview media fullscreen ",
image_advtab: true,
encoding: "xml",
setup: function (editor) {
editor.on("SaveContent", function (i) {
i.content = i.content.replace(/'/g, "&apos;");
});
}
});
When saving to the database, save directly; when re-binding to the textbox, use HttpUtility.HtmlDecode().
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19604",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
} |
Q: How to patch on Windows? Given a (source) patch file, what's the easiest way to apply this patch on the source files under Windows?
A GUI tool where I can visually compare the unchanged-changed source lines would be great.
A: Note that since Git 2.3.3 (March 2015), you can use git apply --unsafe-paths to use git apply outside a git repo.
See commit 5244a31 by Junio C Hamano (gitster)
"git apply" was not very careful about reading from, removing, updating and creating paths outside the working tree (under --index/--cached) or the current directory (when used as a replacement for GNU patch).
The documentation now includes:
--unsafe-paths:
By default, a patch that affects outside the working area (either a Git controlled working tree, or the current working directory when "git apply" is used as a replacement of GNU patch) is rejected as a mistake (or a mischief).
When git apply is used as a "better GNU patch", the user can pass the --unsafe-paths option to override this safety check.
This option has no effect when --index or --cached is in use.
So if you have git installed, git apply could help, even outside of any git repo.
A: Patch for Windows is what you're looking for.
A: A good way to apply a patch file under Windows OS is using Git.
As I understood, Git is a version control solution like SVN.
Here is a guideline to apply a patch:
*
*First of all, download the latest release of the Windows Git Edition here :
GIT
*With the cmd prompt, change directory to where the patch file and the files to patch are located
*Now you can use the following command line :
git apply --ignore-space-change --ignore-whitespace --whitespace=nowarn file.patch
A: WinMerge is awesome.
http://winmerge.org/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19611",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: Is Visual C++ memory managed by the Dot Net framework Recently, I've been dealing with an error with accessing MAPI via the .NET framework (as described in this article). I am now left with a series of memory access violation errors.
To get past the issues, I have been trying to use this 3rd party component, which has a Visual C++ core. Unfortunately - we are still having the same errors.
I've personally never used Visual C++, but my question is: if the C++ library is compiled using Visual Studio 2005, using Visual C++ - does the memory of the project become managed by the .NET framework, as well, which would therefore make it subject to the same issues as the .NET libraries we're using? Or am I barking up the wrong tree?
A: The two previous answers have mentioned "Managed C++"; this is an old bolt-on that was added to allow you to use managed code from C++ in a .NET environment. It wasn't a first-class citizen - unlike C++/CLI. But to answer your original question, no, Visual C++ is not managed by the .NET runtime. Managed C++ & C++/CLI are.
A: Unless you are using Managed C++ (which it doesn't sound like you are) then no, the memory is not managed by the CLR.
The recommended method of talking to Exchange in .Net is via WebDAV.
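I can't vouch for the exact shape of an Exchange query, but as a rough sketch, talking WebDAV from .NET is just an HttpWebRequest with a WebDAV verb (uses System.Net, System.IO and System.Text; the mailbox URL and the XML body below are placeholders, not a real Exchange schema):
// Rough sketch only: the mailbox URL and the PROPFIND body are placeholders.
var request = (HttpWebRequest)WebRequest.Create("http://mailserver/exchange/someuser/Inbox/");
request.Method = "PROPFIND";                         // WebDAV verb
request.Credentials = CredentialCache.DefaultCredentials;
request.Headers.Add("Depth", "1");
request.ContentType = "text/xml";

byte[] body = Encoding.UTF8.GetBytes(
    "<?xml version=\"1.0\"?><propfind xmlns=\"DAV:\"><allprop/></propfind>");
request.ContentLength = body.Length;
using (Stream requestStream = request.GetRequestStream())
{
    requestStream.Write(body, 0, body.Length);
}

using (var response = (HttpWebResponse)request.GetResponse())
using (var reader = new StreamReader(response.GetResponseStream()))
{
    string xml = reader.ReadToEnd();                 // raw WebDAV XML; parse as needed
}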
A: I'm not entirely sure what you're asking, but I'll give it a shot.
Visual C++ is a pure C/C++ compiler, so it has none of .NET's memory management, nor any of its runtime -- you have to manually call new and delete.
.NET also provides C++/CLI, which is a slightly modified version of C++ that targets the .NET runtime, and is GC aware -- eg. its memory is managed by the .NET runtime.
Without more details about your bug I can't really make any suggestions, beyond suggesting that you make sure you use the appropriate GC guards, and the provide finalizers in any place they are needed.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19653",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Introducing Python The company I used to work with has two developers working fulltime, and a handful of freelancers. They're in the process of hiring a new lead developer to try to bring order and management to the development.
But, currently, one of the developers has seen the light of Django (the company has only developed in PHP to date) while the other developer is concerned that introducing a new language (Python) is a bad idea right now.
How should they approach introducing this new technology? Obviously with only one of the developers actually knowing Python, there will be no redundancy when that dev is away or leaves the company.
Should they bother to introduce Python, or should they look for PHP-only solutions until such a time when the team actually have more than one Pythonion? Without a team leader, the decisions are having to fall to them.
A: If the mandate of the new lead is to put the house in order, the current situation should likely be simplified as much as possible prior. If I had to bring things to order, I wouldn't want to have to manage an ongoing language conversion project on top of everything else, or at least I'd like some choice when initiating the project. When making your recommendation, did you think about the additional managerial complexity that coming into the middle of a conversion would entail?
A: @darkdog:
Using a new language in production code is about more than easy syntax and high-level capability. You want to be familiar with core APIs and feel like you can fix something through logic instead of having to comb through the documentation.
I'm not saying transitioning to Python would be a bad idea for this company, but I'm with John--keep things simple during the transition. The new lead will appreciate having a say in such decisions.
If you'd really, really, really like to introduce Python, consider writing some extensions or utilities in straight-up Python or in the framework. You won't be upsetting your core initiatives, so it will be a low/no-risk opportunity to prove the merits of a switch.
A: I recently introduced Python to my company, which does consulting work for the Post Office. I did this by waiting until there was a project for which I would be the only programmer, then getting permission to do this new project in Python. I then did another small project in Python with similarly impressive results. In addition, I used Python for all of my small throwaway assignments ("can you parse the stats in these files into a CSV file organized by date and site?", etc) and had a quick turnaround time on all of them.
I also evangelized Python a bit; I went out of my way to NOT be obnoxious about it, but I'd occasionally describe why I liked it so much, talked about the personal projects I use it for in my free time and why it's awesome for me, etc.
Eventually we started another project and I convinced everyone to use Python for it. I took care to point everyone to a lot of documentation, including the specific webpages relating to what they were working on, and every time they had a question, I'd explain how to do things properly by explaining the Pythonic approach to things, etc.
This has worked really well. However, this might be somewhat different than what you're describing. In my case I started with moderately small projects and Python is only being used for new projects. Also, none of my co-workers were really Perl or PHP gurus; they all knew those languages and had been using them for awhile, but it didn't take much effort for them to become more productive in Python than they'd been before.
So if you're talking about new projects with people who currently use PHP but aren't super-experts and don't love that language, then I think switching to Python is a no-brainer. However, if you're talking about working with a large existing PHP code base with a lot of very experienced PHP programmers who are happy with their current setup, then switching languages is probably not a good idea. You're probably somewhere in between, so you'll have to weigh the tradeoffs; hopefully my answer will help you do that.
A: I think the language itself is not an issue here, as python is really nice high level language with good and easy to find, thorough documentation.
From what I've seen, the Django framework is also a great tooklit for web development, giving much the same developer performance boost Rails is touted to give.
The real issue is at the maintenance and management level.
How will this move fragment the maintenance between PHP and Python code? Is there a need to migrate existing code from one platform to another? What problems will adopting Python and Django solve that you have with your current development workflow and frameworks, etc.?
A: It's really all about schedules. To me the break should be with a specific project. If you decide your direction is Django then start new projects with that. Before you start a new project with a new language/framework, either make sure that you have scheduled time to get up to speed in this new direction, or get up to speed before using on new projects.
I would avoid going with a tool of the month. Make sure you want it to be your direction and commit some time/resources to learning enough to make a good decision.
A: Well, Python is a high-level language. It's not hard to learn, and if the guys already have programming knowledge it should be much easier to pick up. I like Django, and I think it would be well worth trying.
A: I don't think it's a matter of a programming language as such.
What is the proficiency level of PHP in the team you're talking about? Are they writing spaghetti code or using some structured framework like Zend? If it's the former, then I absolutely understand the guy's interest in Python and Django. If it's the latter, it's just hype.
A: I love Python and Django, and use both to develop the our core webapps.
That said, it's hard to make a business case for switching at this point. Specifically:
*
*Any new platform is risky compared to staying with the tried and true
*You'll have the developer fragmentation you mentioned
*It's far easier to find PHP programmers than python programmers
Moreover, as other posters have mention, if the issue is more with spaghetti code than PHP itself, there are plenty of nice PHP frameworks that could be used to refactor the code.
That said, if this developer is excited about python, stopping them outright is probably demoralizing. My suggestion would be to encourage them to develop in python, but not the mission critical parts of the app. Instead they could write some utility scripts, some small internal application that needs doing, etc.
In conclusion: I don't recommend switching from PHP, but I do recommend accommodating the developer's interest in some way at work.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19654",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How to find an implementation of a C# interface in the current assembly with a specific name? I have an Interface called IStep that can do some computation (See "Execution in the Kingdom of Nouns"). At runtime, I want to select the appropriate implementation by class name.
// use like this:
IStep step = GetStep(sName);
A: Your question is very confusing...
If you want to find types that implement IStep, then do this:
foreach (Type t in Assembly.GetCallingAssembly().GetTypes())
{
if (!typeof(IStep).IsAssignableFrom(t)) continue;
Console.WriteLine(t.FullName + " implements " + typeof(IStep).FullName);
}
If you know already the name of the required type, just do this
IStep step = (IStep)Activator.CreateInstance(Type.GetType("MyNamespace.MyType"));
A: If the implementation has a parameterless constructor, you can do this using the System.Activator class. You will need to specify the assembly name in addition to the class name:
IStep step = System.Activator.CreateInstance(sAssemblyName, sClassName).Unwrap() as IStep;
http://msdn.microsoft.com/en-us/library/system.activator.createinstance.aspx
A: Based on what others have pointed out, this is what I ended up writing:
/// <summary>
/// Some magic happens here: Find the correct action to take, by reflecting on types
/// subclassed from IStep with that name.
/// </summary>
private IStep GetStep(string sName)
{
Assembly assembly = Assembly.GetAssembly(typeof (IStep));
try
{
return (IStep) (from t in assembly.GetTypes()
where t.Name == sName && t.GetInterface("IStep") != null
select t
).First().GetConstructor(new Type[] {}
).Invoke(new object[] {});
}
catch (InvalidOperationException e)
{
throw new ArgumentException("Action not supported: " + sName, e);
}
}
A: Well Assembly.CreateInstance would seem to be the way to go - the only problem with this is that it needs the fully qualified name of the type, i.e. including the namespace.
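For example, one way around that (assuming the implementations live in the same namespace as IStep -- adjust to wherever your steps actually are) is to build the full name before calling it:
// Sketch: prepend a known namespace to the short class name.
Assembly assembly = Assembly.GetAssembly(typeof(IStep));
string fullName = typeof(IStep).Namespace + "." + sName;   // e.g. "MyApp.Steps.ValidateStep"
IStep step = (IStep)assembly.CreateInstance(fullName);
if (step == null)
{
    throw new ArgumentException("No type named " + fullName + " in " + assembly.FullName);
}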
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19656",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Fast(er) way to get file inode using PHP To grab the inode of a file in PHP, you can use this:
$fs = stat($file);
echo $fs['ino'];
The problem with this is EVERYWHERE says it's slow and you should avoid it. So the question becomes what's the fast(er) way to do it?
A: You could use fileinode() but you should run benchmarks if you think it is slow.
A: I think you should benchmark and take a look at what you are doing to determine if stat() is the slowest part of your code. Stating 1 file on each request on a server that gets about 100 hits/day is not a problem. Stating every file could be a problem when you have to eke out a few more requests a second.
You can avoid stating the same file repeatedly by caching the results via memcached, apc or some other in-memory caching system.
Premature optimization is the root of all evil. - Donald Knuth
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19708",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Interlocked.Exchange, but not for booleans? Is there an equivalent for Interlocked.Exchange for boolean?
Such as an atomic exchange of values that returns the previous value and doesn't require locks?
A: No; use integers instead of booleans.
In principle such a thing could be written (cmpxchg, the underlying processor instruction, can operate on 8, 16, 32, and 64-bit operands on x86, 8, 16, 32, 64, and 128-bit operands on x64), but in practice most APIs stick to pointer and double pointer (32 and 64-bit on x86, 64 and 128-bit on x64) operands, because they're all you really need.
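A minimal sketch of that approach: represent the flag as an int (0 = false, 1 = true) and wrap it. The AtomicBoolean class below is just an illustration (requires System.Threading):
// Sketch of an "atomic bool" built on Interlocked and an int flag (0 = false, 1 = true).
public class AtomicBoolean
{
    private int flag;

    // Atomically sets the flag and returns the previous value.
    public bool Exchange(bool value)
    {
        return Interlocked.Exchange(ref flag, value ? 1 : 0) == 1;
    }

    // Typical use: only the first caller to flip false -> true "wins".
    public bool TrySetTrue()
    {
        return Interlocked.CompareExchange(ref flag, 1, 0) == 0;
    }

    public bool Value
    {
        get { return Interlocked.CompareExchange(ref flag, 0, 0) == 1; }
    }
}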
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19713",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: How do I secure my new web server (Server 2008)? I've just put my new server up on an IP address with a domain pointing to it. I need to be able to remote admin it. I've opened the firewall for Remote Desktop and HTTP traffic. Is this going to be secure enough? I guess I should probably rename the administrator user...
A: Should be sufficient, as long as you use a crazy-complex password for the admin account, and make sure your http server is security-patched and up-to-date.
Also, I hope firewall != Windows Firewall.
Edit: +1 for EHaskin's suggestion of changing RD port, if only to reduce the bruteforce spam that your FW will have to endure, but never think that security == obscurity.
A: The absolute minimum you should do is change the Remote Desktop port, change the Admin username, and have a very strong admin password.
A: Any chance you can set up your server as a VPN endpoint? Then you would only have the VPN ports and the HTTP ports open. When you want to RDP to the server, you would connect to the VPN first and then you're good to go.
Only reason is, if my memory serves me right, RDP traffic is not encrypted.
This is how I run my IIS server at home, works very well.
A: Windows Server 2008 supports VPN capabilities. You can configure your remote access policies by using the Network Policy and Access Services. I believe this needs to be installed as a role before you can use it. Also, simply changing the RDP port on your firewall will not prevent an experienced hacker from still getting to your server. A simple port scan would reveal open ports.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19721",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How do I unregister COM dlls initially added with RegSvr32 when the /u arg doesn't work? Right, initially ran:
c:\regsvr32 Amazing.dll
then, (accidentally - I might add) I must have run it again, and (indeed) again when new versions of 'Amazing.dll' were released. Yes - I know now I should've run:
c:\regsvr32 /u Amazing.dll
beforehand - but hey! I forgot.
To cut to the chase, when I add the COM reference in VS, I can see 3 instances of 'Amazing', all pointing to the same location (c:\Amazing.dll). Running regsvr32 /u removes one of the references; the second time it does nothing...
How do I get rid of these references?
Am I looking at a regedit scenario? - If so - what exactly happens if I delete one of the keys???
Cheers
A: There's a tool by MS called RegClean that has been floating around since the Win95 days; it scans the registry and does things like finding COM keys that aren't pointing at a valid file anymore (I found it here: http://downloads.zdnet.com/abstract.aspx?assetid=881470&node=2094). I've seen some places still using it, particularly when messing with legacy COM stuff in VB that generates new COM GUIDs after every build.
So if you got that, then unreg'd and deleted or moved the file, run the app and it will clean out the "orphaned" entries.
If you do decide to remove the keys using RegEdit, you might need to remove the class ids as well as the guid entries.
A: Your object's GUID's should not be changing. In other words, once you register the COM object, re-registering shouldn't be adding anything additional to the registry.
Unless you added additional COM interfaces or objects to the project.
In any case, if this is a one time deal (and it sounds like it is), open regedit and delete the unneeded keys manually.
A: I've got myself into a horrible mess with COM before. I had to pick my way though the registry deleting each reference, unfortunately.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19725",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: What is the best strategy for retainment of large data sets? I'm leading a project where we'll be recording metrics data. I'd like to retain the data for years. However, I'd also like to keep the primary table from becoming bloated with data that, while necessary for long term trending, isn't required for short term reporting.
What is the best strategy for handling this situation? Simply archive the old data to another table? Or "roll it up" via some consolidation of the data itself (and then store it off to a different table)? Or something else entirely?
Additional info: we are using SQL Server 2005.
A: We use both methods at my work, but slightly differently. We keep all sales data in the primary table for 30 days; then at night (as part of the nightly jobs) the day's sales are rolled up into summaries (n qty of x product sold today, etc.) in a separate table for reporting reasons, and sales over 30 days are archived into a different database. Then once a year (we go on tax years) a new archive database is started. Not exactly perfect, but...
This way we get the summary data fast, keep all current sales data at hand, and have unlimited space for the detailed archive data. We did try keeping it all in one database (in different tables) but the file size of the database (Interbase) would grow so large that it would drag the system down.
The only real problem we have is accessing detailed data that spans several databases, as connecting and disconnecting is slow, and analysis has to be done in code rather than SQL.
A: If you are using SQL server 2005, this may be a good candidate for using partitioned tables.
A: @Jason - I don't see how keeping data in plain old text files will allow you to do long term trending analysis easily on the data.
@Jason - I guess my point is that if any sort of ad-hoc analysis (i.e. trending) needs to be done on the data by business people, rolling up or archiving the data to text files really doesn't solve any problems. Of course writing code to consume a text file is easy in many languages, but that problem has been solved. Also, I would argue that today's RDBMS's are all extremely durable when setup and maintained properly. If they weren't why would you run a business on top of one (let alone archive data to it)? I just don't see the point of archiving to a plain text file because of the claim that durability of text files is superior to that of databases.
A: Depending on constraints like budget, etc, this sound like a perfect candidate for a data warehouse application. This would typically introduce a new server for use as a data warehouse. SQL Server 2005 supports a lot of this activity out of the box, further you might be able to utilize additional SQL Server services (e.g. Analysis Services, Reporting Services) to provide additional value to your users. (see http://www.microsoft.com/technet/prodtechnol/sql/2005/dwsqlsy.mspx)
A: Either of those options are excellent, but it really depends on the problem domain. For things like cash balances or statistical data, I think that rolling up records and consolidating them is the best way, you can then move the rolled up records into a parallel archive table, keying them in such a way that you can "unroll" if necessary. This keeps your primary data table clean and quick, but allows you to retain the extra data for auditing or whatever. The key question is, how do you implement the "roll-up" process. Either automatically, via a trigger or server side process, or by user intervention at the application level?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19728",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Trigger without a transaction? Is it possible to create a trigger that will not be in a transaction?
I want to update data on a linked server with a trigger but due to firewall issues we can't create a distributed transaction between the two servers.
A: What you probably want is a combination of a queue that contains updates for the linked server and a process that reads data from the queue and updates the remote server. The trigger will then insert a message into the queue as part of the normal transaction. This data will be read by the separate process and used to update the remote server. Logic will be needed in the process to handle errors (and possibly retries).
The queue can be implemented with one or more tables.
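To make the idea concrete, here is a rough sketch of the separate forwarding process in C#. The table, column and connection-string names are all invented, and error handling and retries are reduced to the bare minimum:
// Illustrative polling worker: reads queued updates locally, applies them remotely.
using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Threading;

class QueueForwarder
{
    static void Main()
    {
        while (true)
        {
            try { ForwardPendingUpdates(); }
            catch (Exception ex) { Console.Error.WriteLine(ex); }   // log, then retry on the next pass
            Thread.Sleep(5000);
        }
    }

    static void ForwardPendingUpdates()
    {
        var pending = new List<KeyValuePair<int, string>>();        // queue row id -> payload

        using (var local = new SqlConnection("...local connection string..."))
        {
            local.Open();
            var read = new SqlCommand(
                "SELECT TOP 50 Id, Payload FROM dbo.RemoteUpdateQueue ORDER BY Id", local);
            using (SqlDataReader rows = read.ExecuteReader())
            {
                while (rows.Read())
                    pending.Add(new KeyValuePair<int, string>(rows.GetInt32(0), rows.GetString(1)));
            }

            using (var remote = new SqlConnection("...remote connection string..."))
            {
                remote.Open();
                foreach (var item in pending)
                {
                    // Build and execute the remote update from item.Value here,
                    // e.g. with a SqlCommand against 'remote'.

                    // Only after the remote update succeeds, remove the row from the local queue.
                    var done = new SqlCommand(
                        "DELETE FROM dbo.RemoteUpdateQueue WHERE Id = @id", local);
                    done.Parameters.AddWithValue("@id", item.Key);
                    done.ExecuteNonQuery();
                }
            }
        }
    }
}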
A: I know it's not helpful, so I'll probably get downvoted for this, but really, the solution is to fix the firewall problem.
I think if you use remote (not linked) servers (which are not the preferred option these days) then you can use SET REMOTE_PROC_TRANSACTIONS OFF to prevent the use of DTC for remote transactions, which might do the right thing here. But that probably doesn't help you with a linked server anyway.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19744",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Views in separate assemblies in ASP.NET MVC I'm trying to create a webapplication where I want to be able to plug-in separate assemblies. I'm using MVC preview 4 combined with Unity for dependency injection, which I use to create the controllers from my plugin assemblies. I'm using WebForms (default aspx) as my view engine.
If I want to use a view, I'm stuck on the ones that are defined in the core project, because of the dynamic compiling of the ASPX part. I'm looking for a proper way to enclose ASPX files in a different assembly, without having to go through the whole deployment step. Am I missing something obvious? Or should I resort to creating my views programmatically?
Update: I changed the accepted answer. Even though Dale's answer is very thorough, I went for the solution with a different virtual path provider. It works like a charm, and takes only about 20 lines in code altogether I think.
A: It took me way too long to get this working properly from the various partial samples, so here's the full code needed to get views from a Views folder in a shared library structured the same as a regular Views folder but with everything set to build as embedded resources. It will only use the embedded file if the usual file does not exist.
The first line of Application_Start:
HostingEnvironment.RegisterVirtualPathProvider(new EmbeddedViewPathProvider());
The VirtualPathProvider
public class EmbeddedVirtualFile : VirtualFile
{
public EmbeddedVirtualFile(string virtualPath)
: base(virtualPath)
{
}
internal static string GetResourceName(string virtualPath)
{
if (!virtualPath.Contains("/Views/"))
{
return null;
}
var resourcename = virtualPath
.Substring(virtualPath.IndexOf("Views/"))
.Replace("Views/", "OrangeGuava.Common.Views.")
.Replace("/", ".");
return resourcename;
}
public override Stream Open()
{
Assembly assembly = Assembly.GetExecutingAssembly();
var resourcename = GetResourceName(this.VirtualPath);
return assembly.GetManifestResourceStream(resourcename);
}
}
public class EmbeddedViewPathProvider : VirtualPathProvider
{
private bool ResourceFileExists(string virtualPath)
{
Assembly assembly = Assembly.GetExecutingAssembly();
var resourcename = EmbeddedVirtualFile.GetResourceName(virtualPath);
var result = resourcename != null && assembly.GetManifestResourceNames().Contains(resourcename);
return result;
}
public override bool FileExists(string virtualPath)
{
return base.FileExists(virtualPath) || ResourceFileExists(virtualPath);
}
public override VirtualFile GetFile(string virtualPath)
{
if (!base.FileExists(virtualPath))
{
return new EmbeddedVirtualFile(virtualPath);
}
else
{
return base.GetFile(virtualPath);
}
}
}
The final step to get it working is that the root Web.Config must contain the right settings to parse strongly typed MVC views, as the one in the views folder won't be used:
<pages
validateRequest="false"
pageParserFilterType="System.Web.Mvc.ViewTypeParserFilter, System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"
pageBaseType="System.Web.Mvc.ViewPage, System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"
userControlBaseType="System.Web.Mvc.ViewUserControl, System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
<controls>
<add assembly="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" namespace="System.Web.Mvc" tagPrefix="mvc" />
</controls>
</pages>
A couple of additional steps are required to get it working with Mono. First, you need to implement GetDirectory, since all files in the views folder get loaded when the app starts rather than as needed:
public override VirtualDirectory GetDirectory(string virtualDir)
{
Log.LogInfo("GetDirectory - " + virtualDir);
var b = base.GetDirectory(virtualDir);
return new EmbeddedVirtualDirectory(virtualDir, b);
}
public class EmbeddedVirtualDirectory : VirtualDirectory
{
private VirtualDirectory FileDir { get; set; }
public EmbeddedVirtualDirectory(string virtualPath, VirtualDirectory filedir)
: base(virtualPath)
{
FileDir = filedir;
}
public override System.Collections.IEnumerable Children
{
get { return FileDir.Children; }
}
public override System.Collections.IEnumerable Directories
{
get { return FileDir.Directories; }
}
public override System.Collections.IEnumerable Files
{
get {
if (!VirtualPath.Contains("/Views/") || VirtualPath.EndsWith("/Views/"))
{
return FileDir.Files;
}
var fl = new List<VirtualFile>();
foreach (VirtualFile f in FileDir.Files)
{
fl.Add(f);
}
var resourcename = VirtualPath.Substring(VirtualPath.IndexOf("Views/"))
.Replace("Views/", "OrangeGuava.Common.Views.")
.Replace("/", ".");
Assembly assembly = Assembly.GetExecutingAssembly();
var rfl = assembly.GetManifestResourceNames()
.Where(s => s.StartsWith(resourcename))
.Select(s => VirtualPath + s.Replace(resourcename, ""))
.Select(s => new EmbeddedVirtualFile(s));
fl.AddRange(rfl);
return fl;
}
}
}
Finally, strongly typed views will almost but not quite work perfectly. Model will be treated as an untyped object, so to get strong typing back you need to start your shared views with something like
<% var Model2 = Model as IEnumerable<AppModel>; %>
A: An addition for all of you who are still looking for the holy grail: I've come a bit closer to finding it, if you're not too attached to the WebForms view engine.
I've recently tried out the Spark view engine. Other than it being totally awesome (I wouldn't go back to WebForms even if I was threatened), it also provides some very nice hooks for modularity of an application. The example in their docs uses Windsor as an IoC container, but I can't imagine it being a lot harder if you want to take another approach.
A: Essentially this is the same issue as people had with WebForms and trying to compile their UserControl ASCX files into a DLL. I found this http://www.codeproject.com/KB/aspnet/ASP2UserControlLibrary.aspx that might work for you too.
A: protected void Application_Start()
{
WebFormViewEngine engine = new WebFormViewEngine();
engine.ViewLocationFormats = new[] { "~/bin/Views/{1}/{0}.aspx", "~/Views/Shared/{0}.aspx" };
engine.PartialViewLocationFormats = engine.ViewLocationFormats;
ViewEngines.Engines.Clear();
ViewEngines.Engines.Add(engine);
RegisterRoutes(RouteTable.Routes);
}
Set the 'Copy to output' property of your view to 'Copy always'
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19746",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "54"
} |
Q: How do I make a list with checkboxes in Java Swing? What would be the best way to have a list of items with a checkbox each in Java Swing?
I.e. a JList with items that have some text and a checkbox each?
A: Better solution for Java 7 and newer
I stumbled upon this question and realized that some of the answers are pretty old and outdated. Nowadays, JList is generic and thus there are better solutions.
My solution of the generic JCheckBoxList:
import java.awt.Component;
import javax.swing.*;
import javax.swing.border.*;
import java.awt.event.*;
@SuppressWarnings("serial")
public class JCheckBoxList extends JList<JCheckBox> {
protected static Border noFocusBorder = new EmptyBorder(1, 1, 1, 1);
public JCheckBoxList() {
setCellRenderer(new CellRenderer());
addMouseListener(new MouseAdapter() {
public void mousePressed(MouseEvent e) {
int index = locationToIndex(e.getPoint());
if (index != -1) {
JCheckBox checkbox = (JCheckBox) getModel().getElementAt(index);
checkbox.setSelected(!checkbox.isSelected());
repaint();
}
}
});
setSelectionMode(ListSelectionModel.SINGLE_SELECTION);
}
public JCheckBoxList(ListModel<JCheckBox> model){
this();
setModel(model);
}
protected class CellRenderer implements ListCellRenderer<JCheckBox> {
public Component getListCellRendererComponent(
JList<? extends JCheckBox> list, JCheckBox value, int index,
boolean isSelected, boolean cellHasFocus) {
JCheckBox checkbox = value;
//Drawing checkbox, change the appearance here
checkbox.setBackground(isSelected ? getSelectionBackground()
: getBackground());
checkbox.setForeground(isSelected ? getSelectionForeground()
: getForeground());
checkbox.setEnabled(isEnabled());
checkbox.setFont(getFont());
checkbox.setFocusPainted(false);
checkbox.setBorderPainted(true);
checkbox.setBorder(isSelected ? UIManager
.getBorder("List.focusCellHighlightBorder") : noFocusBorder);
return checkbox;
}
}
}
For dynamically adding JCheckBox items you need to create your own ListModel or use the DefaultListModel.
DefaultListModel<JCheckBox> model = new DefaultListModel<JCheckBox>();
JCheckBoxList checkBoxList = new JCheckBoxList(model);
The DefaultListModel is generic, so you can use the methods specified by the Java 7 API like this:
model.addElement(new JCheckBox("Checkbox1"));
model.addElement(new JCheckBox("Checkbox2"));
model.addElement(new JCheckBox("Checkbox3"));
A: I recommend you use a JPanel with a GridLayout of 1 column. Add the checkboxes to the JPanel, and put the JPanel inside a JScrollPane. To get the selected checkboxes, just call getComponents() on the JPanel to get the checkboxes back.
A: A wonderful answer is this CheckBoxList. It implements Telcontar's answer (though 3 years before :)... I'm using it in Java 1.6 with no problems. I've also added an addCheckbox method like this (surely could be shorter, haven't used Java in a while):
public void addCheckbox(JCheckBox checkBox) {
ListModel currentList = this.getModel();
JCheckBox[] newList = new JCheckBox[currentList.getSize() + 1];
for (int i = 0; i < currentList.getSize(); i++) {
newList[i] = (JCheckBox) currentList.getElementAt(i);
}
newList[newList.length - 1] = checkBox;
setListData(newList);
}
I tried out the demo for the Jidesoft stuff, playing with the CheckBoxList I encountered some problems (behaviors that didn't work). I'll modify this answer if I find problems with the CheckBoxList I linked to.
A: Odds are good w/ Java that someone has already implemented the widget or utility you need. Part of the benefits of a large OSS community. No need to reinvent the wheel unless you really want to do it yourself. In this case it would be a good learning exercise in CellRenderers and Editors.
My project has had great success with JIDE. The component you want, a Check Box List, is in the JIDE Common Layer (which is OSS and hosted on java.net). The commercial stuff is good too, but you don't need it.
http://www.jidesoft.com/products/oss.htm
https://jide-oss.dev.java.net/
A: I don't like the solutions that put a Checkbox into the model. The model should only contain data not display elements.
I found this http://www.java2s.com/Tutorials/Java/Swing_How_to/JList/Create_JList_of_CheckBox.htm
which I optimized a bit. The ACTIVE flag represents the Checkbox, the SELECTED flag shows what entry the cursor sits on.
my version requires a renderer
import java.awt.Component;
import javax.swing.JCheckBox;
import javax.swing.JList;
import javax.swing.ListCellRenderer;
class CheckListRenderer extends JCheckBox implements ListCellRenderer<Entity> {
@Override
public Component getListCellRendererComponent(JList<? extends Entity> list,
Entity value, int index, boolean isSelected, boolean cellHasFocus) {
setEnabled(list.isEnabled());
setSelected(value.isActive()); // sets the checkbox
setFont(list.getFont());
if (isSelected) { // highlights the currently selected entry
setBackground(list.getSelectionBackground());
setForeground(list.getSelectionForeground());
} else {
setBackground(list.getBackground());
setForeground(list.getForeground());
}
setText(value.toString()+" - A" + value.isActive()+" - F"+cellHasFocus+" - S"+isSelected );
return this;
}
}
and an entity that got the active field:
public class Entity {
private boolean active = true;
public boolean isActive() {
return active;
}
public void setActive(boolean isActive) {
this.active = isActive;
}
}
Now you only have to add this to your JList:
list = new JList<Entity>();
list.setModel(new DefaultListModel<Entity>());
list.setCellRenderer(new CheckListRenderer());
list.setSelectionMode(ListSelectionModel.SINGLE_SELECTION);
list.addMouseListener(new MouseAdapter() {
@Override
public void mouseClicked(MouseEvent event) {
if (event.getX() < 20) {
// Quick and dirty: only change the tick if clicked into the leftmost pixels
@SuppressWarnings("unchecked")
JList<Entity> list = ((JList<Entity>) event.getSource());
int index = list.locationToIndex(event.getPoint());// Get index of item clicked
if (index >= 0) {
Entity item = (Entity) list.getModel().getElementAt(index);
item.setActive(!item.isActive()); // Toggle selected state
list.repaint(list.getCellBounds(index, index));// Repaint cell
}
}
}
});
A: Create a custom ListCellRenderer and assign it to the JList.
This custom ListCellRenderer must return a JCheckBox in its implementation of the getListCellRendererComponent(...) method.
But this JCheckBox will not be editable; it is simply painted on the screen, so it is up to you to choose when this JCheckBox must be 'ticked' or not.
For example, you could show it ticked when the row is selected (parameter isSelected), but that way the check status will not be maintained if the selection changes. It's better to show it checked by consulting the data behind the ListModel, but then it is up to you to implement the method that changes the check status of the data and notifies the JList of the change so it can be repainted.
I will post sample code later if you need it.
ListCellRenderer
A: Just implement a ListCellRenderer
public class CheckboxListCellRenderer extends JCheckBox implements ListCellRenderer {
public Component getListCellRendererComponent(JList list, Object value, int index,
boolean isSelected, boolean cellHasFocus) {
setComponentOrientation(list.getComponentOrientation());
setFont(list.getFont());
setBackground(list.getBackground());
setForeground(list.getForeground());
setSelected(isSelected);
setEnabled(list.isEnabled());
setText(value == null ? "" : value.toString());
return this;
}
}
and set the renderer
JList list = new JList();
list.setCellRenderer(new CheckboxListCellRenderer());
this will result in
Details at Custom swing component renderers.
PS: If you want radio elements just replace extends JCheckbox with extends JRadioButton.
A: I'd probably be looking to use a JTable rather than a JList and since the default rendering of a checkbox is rather ugly, I'd probably be looking to drop in a custom TableModel, CellRenderer and CellEditor to represent a boolean value. Of course, I would imagine this has been done a bajillion times already. Sun has good examples.
A: All of the aggregate components in Swing--that is, components made up other components, such as JTable, JTree, or JComboBox--can be highly customized. For example, a JTable component normally displays a grid of JLabel components, but it can also display JButtons, JTextFields, or even other JTables. Getting these aggregate components to display non-default objects is the easy part, however. Making them respond properly to keyboard and mouse events is a much harder task, due to Swing's separation of components into "renderers" and "editors." This separation was (in my opinion) a poor design choice and only serves to complicate matters when trying to extend Swing components.
To see what I mean, try enhancing Swing's JList component so that it displays checkboxes instead of labels. According to Swing philosophy, this task requires implementing two interfaces: ListCellRenderer (for drawing the checkboxes) and CellEditor (for handling keyboard and mouse events on the checkboxes). Implementing the ListCellRenderer interface is easy enough, but the CellEditor interface can be rather clumsy and hard to understand. In this particular case, I would suggest forgetting CellEditor entirely and to handle input events directly, as shown in the following code.
import javax.swing.*;
import javax.swing.border.*;
import java.awt.*;
import java.awt.event.*;
public class CheckBoxList extends JList
{
protected static Border noFocusBorder = new EmptyBorder(1, 1, 1, 1);
public CheckBoxList()
{
setCellRenderer(new CellRenderer());
addMouseListener(new MouseAdapter()
{
public void mousePressed(MouseEvent e)
{
int index = locationToIndex(e.getPoint());
if (index != -1) {
JCheckBox checkbox = (JCheckBox)
getModel().getElementAt(index);
checkbox.setSelected(
!checkbox.isSelected());
repaint();
}
}
}
);
setSelectionMode(ListSelectionModel.SINGLE_SELECTION);
}
protected class CellRenderer implements ListCellRenderer
{
public Component getListCellRendererComponent(
JList list, Object value, int index,
boolean isSelected, boolean cellHasFocus)
{
JCheckBox checkbox = (JCheckBox) value;
checkbox.setBackground(isSelected ?
getSelectionBackground() : getBackground());
checkbox.setForeground(isSelected ?
getSelectionForeground() : getForeground());
checkbox.setEnabled(isEnabled());
checkbox.setFont(getFont());
checkbox.setFocusPainted(false);
checkbox.setBorderPainted(true);
checkbox.setBorder(isSelected ?
UIManager.getBorder(
"List.focusCellHighlightBorder") : noFocusBorder);
return checkbox;
}
}
}
Here, I intercept mouse clicks from the listbox and simulate a click on the appropriate checkbox. The result is a "CheckBoxList" component that is both simpler and smaller than an equivalent component using the CellEditor interface. To use the class, simply instantiate it, then pass it an array of JCheckBox objects (or subclasses of JCheckBox objects) by calling setListData. Note that the checkboxes in this component will not respond to keypresses (i.e. the spacebar), but you could always add your own key listener if needed.
Source: DevX.com
A: Here is just a little addition to the JCheckBoxList by Rawa. This will add the ability to select using the space bar. If multiple items are selected, all will be set to the inverted value of the first item.
addKeyListener(new KeyAdapter() {
@Override
public void keyPressed(KeyEvent e) {
int index = getSelectedIndex();
if (index != -1 && e.getKeyCode() == KeyEvent.VK_SPACE) {
boolean newVal = !((JCheckBox) (getModel()
.getElementAt(index))).isSelected();
for (int i : getSelectedIndices()) {
JCheckBox checkbox = (JCheckBox) getModel()
.getElementAt(i);
checkbox.setSelected(newVal);
repaint();
}
}
}
});
A: This is yet another example of making a list with checkboxes:
class JCheckList<T> extends JList<T> {
protected static Border noFocusBorder = new EmptyBorder(1, 1, 1, 1);
public void setSelected(int index) {
if (index != -1) {
JCheckBox checkbox = (JCheckBox) getModel().getElementAt(index);
checkbox.setSelected(
!checkbox.isSelected());
repaint();
}
}
protected static class CellListener
extends DefaultListModel
implements ListDataListener {
ListModel ls;
public CellListener(ListModel ls) {
ls.addListDataListener(this);
int i = ls.getSize();
for (int v = 0; v < i; v++) {
var r = new JCheckBox();
r.setText(ls.getElementAt(v).toString());
this.addElement(r);
}
this.ls = ls;
}
@Override
public void intervalAdded(ListDataEvent e) {
int begin = e.getIndex0();
int end = e.getIndex1();
for (; begin <= end; begin++) {
var r = new JCheckBox();
r.setText(ls.getElementAt(begin).toString());
this.add(begin, r);
}
}
@Override
public void intervalRemoved(ListDataEvent e) {
int begin = e.getIndex0();
int end = e.getIndex1();
for (; begin <= end; end--) {
this.remove(begin);
}
}
@Override
public void contentsChanged(ListDataEvent e) {
}
}
public JCheckList() {
setCellRenderer(new CellRenderer());
addMouseListener(new MouseAdapter() {
public void mousePressed(MouseEvent e) {
int index = locationToIndex(e.getPoint());
setSelected(index);
}
}
);
addKeyListener(new KeyListener(){
@Override
public void keyTyped(KeyEvent e) {
}
@Override
public void keyPressed(KeyEvent e) {
if (e.getKeyCode() == KeyEvent.VK_SPACE){
int index = JCheckList.this.getSelectedIndex();
setSelected(index);
}
}
@Override
public void keyReleased(KeyEvent e) {
}
});
setSelectionMode(ListSelectionModel.SINGLE_SELECTION);
}
@Override
public void setModel(ListModel<T> d) {
var r = new CellListener(d);
d.addListDataListener(r);
super.setModel(r);
}
protected class CellRenderer implements ListCellRenderer {
public Component getListCellRendererComponent(
JList list, Object value, int index,
boolean isSelected, boolean cellHasFocus) {
JCheckBox checkbox = (JCheckBox) value;
checkbox.setBackground(isSelected
? getSelectionBackground() : getBackground());
checkbox.setForeground(isSelected
? getSelectionForeground() : getForeground());
checkbox.setEnabled(isEnabled());
checkbox.setFont(getFont());
checkbox.setFocusPainted(false);
checkbox.setBorderPainted(true);
checkbox.setBorder(isSelected
? UIManager.getBorder(
"List.focusCellHighlightBorder") : noFocusBorder);
return checkbox;
}
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19766",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34"
} |
Q: CMD.exe replacement Does anyone know of a good Command Prompt replacement? I've tried bash/Cygwin, but that does not really meet my needs at work because it's too heavy. I'd like a function-for-function identical wrapper on cmd.exe, but with highlighting, intellisense, and (critically) a tabbed interface. Powershell is okay, but the interface is still lacking.
A: Edited: I've been using ConEmu (http://conemu.github.io/) for quite some time now. This one is a wrapper too, since it is not really possible to replace the Windows console without rewriting the whole command interpreter. Below the line is my original answer for an earlier alternative.
Not exactly a replacement (actually, it's a prettifying wrapper) but you might try Console (http://sourceforge.net/projects/console/)
A: I use Take Command 9.0. I have used JPSoft's products for years. It has a tabbed interface. I have Take Command start with Take Command, Powershell, and CMD.exe each in their own tab. It doesn't do syntax highlighting. Take Command is syntactically compatible with CMD.exe and enhances each command quite a bit and adds many more.
PowerShell isn't a complete replacement for CMD.exe or Take Command. I find myself using both. You might ask why I would still use CMD.exe and it is because I will use Take Command to test a batch file that is limited to commands that work in CMD.exe and I then need to deploy the batch file on a workstation/server that doesn't have Take Command on it. I can create/test in Take Command and then verify it works in CMD.exe before deploying it.
I don't know of any IDE's that provide Intellisense for batch files specifically. If they did it would only be for a few keywords anyway. Most of the time in batch files you are running commands that are external to the batch language and wouldn't be included in the Intellisense.
I use Textpad to edit my batch files. Take Command has a debugger and it has logging capabilities which makes it very easy to test your batch files.
A: PowerCmd is a trial-ware wrapper for cmd.exe and costs $30.
It offers:
*
*tabs
*a "normal" selection mode
*copy'n'paste
*highlighting
*auto complete
*buttons to start Python, Powershell and others
A: If you want a more feature-rich UI for Powershell, try PowerGUI.
http://powergui.org/index.jspa
A: NYAOS
"NYAOS" is the tcsh-like enhanced commandline shell for Windows and OS/2 !
http://www.nyaos.org/
A: For decent completion and command history, try the PyCmd wrapper at https://sourceforge.net/projects/pycmd/
A: I've been using JPSoft's products a long time (starting back with 4OS2 and 4DOS), and currently use Take Command 9. It works with existing batch files, has its own improvements on top, a tabbed interface, and filename coloring options.
Looks like their site is having some problems right now, but you can find them at: http://jpsoft.com/
A: I use 4NT from the above-mentioned JPSoft. It works great and has great added functionality. It is being replaced by Take Command, but I do not need that much extra functionality.
Update:
It's no longer known as 4NT. Now its name is TCC/LE.
A: Nick, I know you asked this a long while ago, but I've just found it while searching for something related. I have been using PromptPal and it's been great. I got it about a year ago, in early 2008, through a discount software site called BitsDujour. I just went there and noticed they had a deal for 51% off that product only a few days ago. Keep your eyes on that site and maybe the discount will come up again soon. It's well worth the $30, but I got 2 licenses for half off, one for each of my PCs...
A: If you want to avoid cmd.exe entirely, go for the ZOC terminal; ZOC is a commercial enterprise application.
Otherwise, just add some features to your Command Prompt by installing GOW (GNU On Windows), which is an open-source application.
You can also go for Git, which provides most of the common commands through Bash. Just add its bin folder to your environment PATH, and your command prompt will work like a Unix terminal.
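For example, assuming a default Git install location (a guess; adjust the path to wherever Git lives on your machine), you could append its bin folder to the user PATH from a Command Prompt. Note that setx truncates values longer than 1024 characters.
rem Hypothetical install path - change it to match your machine.
setx PATH "%PATH%;C:\Program Files\Git\bin"
Open a new Command Prompt afterwards to pick up the change.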
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19772",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "90"
} |
Q: Huge Web App With Memory Leak in IE 6 I have a huge web app that is having memory-leak issues in IE 6.
Fixing a memory leak in a 5-line code sample that demonstrates the problem is easy.
But in a very large application, where should I start?
A: Check out Drip. That usually takes the guesswork out of IE memory leaks.
If for some reason Drip doesn't find it, take a close look at any JavaScript code that works with events. That is almost always the source of any significant memory leak in a browser.
Destroying a DOM element with handlers attached to it, without removing those handlers first, will prevent the memory associated with those handlers from being recovered.
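For illustration, here is a minimal sketch of that cleanup pattern using IE's old attachEvent/detachEvent API (the element id and handler are made-up names, not something from any particular app):
// Attach a handler the IE6 way.
var button = document.getElementById('myButton');
function onButtonClick() { alert('clicked'); }
button.attachEvent('onclick', onButtonClick);
// Later, before the element is destroyed: detach the handler first,
// then remove the node, so IE6 can reclaim the handler's memory.
button.detachEvent('onclick', onButtonClick);
button.parentNode.removeChild(button);
button = null; // drop the remaining JavaScript reference as well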
A: Does the application use a lot of JavaScript?
If it does, then one thing I've found that helps for avoiding memory leaks is to make sure you're using a JavaScript framework such as Prototype or jQuery because they have tried and tested event-handling code that doesn't leak memory.
*
*IE6 can also leak memory if you have circular references between DOM objects and JavaScript objects (see the sketch after this list)
*Also try this JavaScript Memory Leak Detector and see if you can diagnose where the problem is
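To illustrate the circular-reference bullet above, here is a hedged sketch of the classic leak pattern (a DOM node and a closure referencing each other) and one simple way to break it; the function names are made up:
// Leaky: the node references the closure via onclick, and the closure
// references the node via the captured variable, so IE6 frees neither.
function attachLeaky(node) {
    node.onclick = function () {
        node.style.color = 'red';
    };
}
// One common fix: null out the handler (and any other expando properties)
// before the node is removed or the page unloads.
function cleanUp(node) {
    node.onclick = null;
}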
A: Here is how I solved the memory leak problem in IE7. The idea is to dispose of (set to null) all expando properties on all DOM nodes when the page unloads. This worked for me. You may find it useful.
<!--[if lt IE 8]>
<script type="text/javascript">
function disposeAll() {
if (window.document.all) {
for (var index = 0; index < window.document.all.length; index++) {
try { dispose(window.document.all[index], []); } catch (e) { debugger; }
}
}
dispose(window.document.body, []);
dispose(window.document, []);
dispose(window, []);
window.disposeAll = null;
window.dispose = null;
window.onunload = null;
}
function dispose(something, map) {
if (something == null) return;
if (something.dispose && typeof (something.dispose) == 'function') {
try { something.dispose(); } catch (e) { debugger; }
}
map.push(something);
for (var key in something) {
var value = null;
try { value = something[key]; } catch (e) { };
if (value == null || value == dispose || value == disposeAll) continue;
var processed = null;
for (var index = 0; index < map.length; index++) {
if (map[index] === value) {
processed = value;
break;
}
}
if (processed != null) continue;
var constructor = value.constructor;
if (constructor == Object || constructor == Array) {
try { dispose(value, map); } catch (e) { debugger; }
}
if (constructor == Object || constructor == Array || constructor == Function) {
try { something[key] = null; } catch (e) { debugger; }
}
}
map.pop();
}
(function() {
var previousUnloadHandler = window.onunload;
if (previousUnloadHandler == null) {
window.onunload = disposeAll;
} else {
window.onunload = function() {
previousUnloadHandler.apply(this, arguments); // <== HERE YOU MAY WANT TO HAVE AN "IF" TO MAKE SURE THE ORIGINAL UNLOAD EVENT WASN'T CANCELLED
disposeAll();
previousUnloadHandler = null;
};
}
}());
</script>
<![endif]-->
You may want to remove all "debugger;" statements if you don't feel like dealing with some occasional exceptions.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19773",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Local Currency String conversion I am maintaining an app for a client that is used in two locations. One in England and one in Poland.
The database is stored in England and uses the format £1000.00 for currency, but the information is being gathered locally in Poland where 1000,00 is the format.
My question is: in VB6, is there a function that takes a currency string in one locale's format and converts it to another, or will I just have to parse the string and replace the ',' or '.' myself?
BTW I have looked at CCur, but not sure if that will do what I want.
A: The data is not actually stored as the string "£1000.00"; it's stored in some numeric format.
Sidebar: Usually databases are set up to store money amounts using either the decimal data type (also called money in some DBs), or as a floating point number (also called double).
The difference is that when it's stored as decimal certain numbers like 0.01 are represented exactly whereas in double those numbers can only be stored approximately, causing rounding errors.
The database appears to be storing the number as "£1000.00" because something is formatting it for display. In VB6, there's a function FormatCurrency which would take a number like 1000 and return a string like "£1000.00".
You'll notice that the FormatCurrency function does not take an argument specifying what type of currency to use. That's because it, along with all the other locale-specific functions in VB, figures out the currency from the current locale of the system (from the Windows Control Panel).
That means that on my system,
Debug.Print FormatCurrency(1000)
will print $1,000.00, but if I run that same program on a Windows computer set to the UK locale, it will probably print £1,000.00, which, of course, is something completely different.
Similarly, you've got some code somewhere (I can't tell where; in Poland, it seems) that is responsible for parsing the user's string and converting it to a number. And if that code is in Visual Basic, again, it's relying on the Control Panel to decide whether "." or "," is the thousands separator and whether "," or "." is the decimal point.
The function CDbl converts its argument to a number. So for example on my system in the US
Debug.Print CDbl("1.200")
produces the number one point two; on a system with the Control Panel set to European formatting, it would produce the number one thousand, two hundred.
It's possible that the problem is that you have someone sitting a computer with the regional control panel set to use "." as the decimal separator, but they're typing "," as the decimal separator.
A: What database are you using? And what data type is the amount stored in?
As long as you are always converting from one format to another, you do not need to do any parsing; just replace "." with "," or the other way around. You may need to remove the "£" sign as well if that is stored in your string.
A: There's probably a correct answer dealing with culture objects and such, but the easiest way would be to take the input from the Polish users, replace the ',' with a '.', and then store it in your database as type "money" or "decimal". If you know they are always entering numbers in either the Polish or the English format (possibly configurable per user), you could have a function that you run all the input numbers through to convert the string to a proper "decimal"-typed variable. Also, for display purposes you could run it through another, similar function to ensure that the user always sees the number format they are comfortable with. The key here is to switch it to a decimal as soon as you get it from the user, and only switch it back to a string at the last step before sending it out to the user.
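A rough VB6 sketch of that idea (the function name is hypothetical, and it assumes Polish-style input uses ',' as the decimal separator while UK-style input uses '.'):
' Hypothetical helper: normalise a user-entered amount such as "1000,00"
' (Polish style) or "1,000.00" (UK style) into a Currency value.
Public Function ParseLocalAmount(ByVal inputText As String, _
                                 ByVal usesCommaDecimal As Boolean) As Currency
    Dim cleaned As String
    cleaned = Trim$(inputText)
    cleaned = Replace(cleaned, ChrW$(163), "")   ' strip a pound sign if present
    cleaned = Replace(cleaned, " ", "")
    If usesCommaDecimal Then
        cleaned = Replace(cleaned, ".", "")      ' drop "." thousands separators
        cleaned = Replace(cleaned, ",", ".")     ' turn the decimal "," into "."
    Else
        cleaned = Replace(cleaned, ",", "")      ' drop "," thousands separators
    End If
    ' Val always parses with "." as the decimal point regardless of the
    ' machine's regional settings, so the result is locale-independent.
    ParseLocalAmount = CCur(Val(cleaned))
End Function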
A: @KiwiBastard yes, I would think so. Are you storing your amount in an "(n)varchar" field, or are you using a currency/decimal type field? If the latter is the case, the currency symbols and separators are added by your client, and there would be no need to replace anything in the database.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19786",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How Do I Find a File in a Subversion Repository History? Is it possible to look back through the history of a Subversion repository for files of a certain name (even better would be for them to have a wildcard search)?
I want to see if a .bat file has been committed to the repository at some point in the past but has since been removed in later updates. Even a dump of the file history at each revision would work, as I could just grep the output. I have looked through the manual but could not see a good way to do this.
The log messages for each commit are only high-level descriptions, so I cannot just read through them to see which files were modified. I presume Subversion does have a way of retrieving this?
A: I assume you are using the SVN command line client. Give TortoiseSVN a try. Its "Show Log" dialog allows searching for comments, filenames and authors.
http://tortoisesvn.net/downloads
PS: Windows only.
A: TortoiseSVN can search the logs very easily, and on my system I can enter ".plg" in the search box and find all adds, modifies, and deletes for those files.
Without Tortoise, the only way I can think of doing that would be to grep the full logs or parse the logs and do your own searching for 'A' and 'D' indicators on the file you are looking for (use svn log --verbose to get file paths).
svn log --verbose | grep '\.bat'
A: TortoiseSVN is completely sweet. I can't imagine dealing with Subversion without it.
Also, as a long shot, if you're using Eclipse I'd recommend the Subclipse plug-in.
A: Personally I'd use
svnadmin dump -r1:HEAD /path/to/repo/
Pipe it into less and search or grep with some context.
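For example (the repository path and the file name are placeholders), with GNU grep you might run:
# -a forces grep to treat the dump (which embeds file contents) as text;
# the surrounding context lines show the matching Node-action (add/delete).
svnadmin dump -r1:HEAD /path/to/repo | grep -a -C 3 'Node-path: .*\.bat'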
A: svn log -v .bat
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19787",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26"
} |
Q: Display rows in multiple columns in Asp.net Gridview By default, each row of a GridView maps to one row in the DataTable or DataSet attached to its DataSource. But what if I want to display these rows in multiple columns? For example, if it has 10 rows, they should be displayed as 2 columns side by side with 5 rows each. Also, can I do this with the Infragistics grid? Is this possible?
A: You can use a DataList control instead. It has a RepeatColumns property that lets you define the number of columns you want to display.
In .NET Framework 3.5, there is an even better solution, the ListView control. You can find further information about how to use the ListView control here.
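As a quick sketch of the DataList approach (the control ID and the bound field name are made up), RepeatColumns="2" with RepeatDirection="Vertical" fills the first column with the first half of the rows and the second column with the rest:
<!-- Hypothetical markup: bind dlItems to your data source in code-behind. -->
<asp:DataList ID="dlItems" runat="server"
    RepeatColumns="2" RepeatDirection="Vertical" RepeatLayout="Table">
    <ItemTemplate>
        <%# Eval("Name") %>
    </ItemTemplate>
</asp:DataList>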
A: If this is a pure coding exercise, then bind to the RowDataBound event of the GridView. That way, you can do:
e.Row.Cells(2).Text = e.Row.Cells(1).Text
This would place the text from column 1 in column 2 after it has been pulled from the database. You can also dynamically create columns using a similar method.
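For completeness, a hedged sketch of wiring that up in VB.NET (the grid name and column indexes are illustrative, not from the original post):
' Hypothetical grid name; adjust the column indexes to your layout.
Protected Sub GridView1_RowDataBound(ByVal sender As Object, _
        ByVal e As GridViewRowEventArgs) Handles GridView1.RowDataBound
    ' Only touch data rows, not header/footer/pager rows.
    If e.Row.RowType = DataControlRowType.DataRow Then
        e.Row.Cells(2).Text = e.Row.Cells(1).Text
    End If
End Sub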
Re-reading, I think I misunderstand your problem though.
A: Can't you just put two identical bound columns one after the other?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19790",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Profiling/Optimizing (Sharepoint 2007) Web Parts I just wonder what options there are to properly measure/profile/optimize ASP.NET 2.0 Web Parts, especially the ones for SharePoint 2007?
As Web Parts are a layer on top of another layer of technology, getting resource usage, open handles, and similar information for just the web part seems to be a bit difficult.
Does anyone know some good tools or practices for profiling and optimizing web parts?
A: I've had success profiling SharePoint 2010 with EQATEC Profiler. Bonus is that they have a free edition. Since it worked in SharePoint 2010, I expect it will work with SharePoint 2007.
Here's how I got it working with SharePoint 2010: http://blogs.visigo.com/chriscoulson/performance-profiling-a-sharepoint-2010-project-using-eqatec-profiler/
A: I have found that separating out all the business logic into a separate DLL that is easily unit-testable has been the easiest method for me. But to be honest, there is really no good way that I have found besides what I have just mentioned. The same has been true for me with Facebook applications recently. I think this is common for any application that runs inside of another platform, especially when performance and testing were never a goal when the platform developers started to build the system.
A: Back when we started with SP2003, we used to worry about not closing connection in apps or web parts. We used the following query to check if the base number of connections (not counting the initial spike) would increase as the app is used on the development server:
SELECT hostname, sysdatabases.name , sysprocesses.status, last_batch from sysprocesses, sysdatabases where sysprocesses.dbid = sysdatabases.dbid and nt_username = 'SP Service Account' and (hostname='WFE1' or hostname='WFE2') and sysprocesses.dbid = 10 order by last_batch desc
(replace the host names 'WFE1'/'WFE2', the service account name, and the dbid with values appropriate for your environment)
We haven't tried this since the upgrade to MOSS though.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19803",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |