Q: Getting selected text in a browser, cross-platform One of the things I'd like to do in my browser-based application is allow the user to select some text (not in a <textarea>, just plain ol' text!), and have my application pop up a small toolbar that then can interact with the text (in my case, add annotations). I've found a lot of stuff on Google that seems to be focused on writing WYSIWYG editors, but that isn't what I want, and most of it worked in IE but not in FF2 or 3. Ideally, I'd like some function that can return the currently selected text in the browser window that works in IE7 (and 6 if possible), FireFox 2 & 3 and Safari 2. If it works in Opera, that'd be a bonus, but it's not a requirement. Anyone have a function that does this? Or an idea of where to start? A: Have a look at jQuery and the wrapSelection plugin. It may be what you are looking for. A: These days this method should be enough: function getSelectedText() { return window.getSelection ? window.getSelection().toString() : ''; } It will return '' on rare occasions with really old browsers and maybe in the case of Opera Mini (to be tested, though, this may be outdated) + see note for UC Browser for Android. A: That jQuery plugin is cool but it accomplishes a very specific task: wrap the text you highlight with a tag. This may be just what you want. But if you don't want to (or are in a situation where you can't) add any extraneous markup to your page, you might try the following solution instead: function getSelectedText() { var txt = ''; if (window.getSelection) { txt = window.getSelection(); } else if (document.getSelection) { txt = document.getSelection(); } else if (document.selection) { txt = document.selection.createRange().text; } else return; return txt; } This function returns an object representing the text selection. It works across browsers (though I suspect the objects it returns will be slightly different depending on the browser and only dependable for the actual text of the result rather than any of the additional properties). Note: I originally discovered that code fragment here: http://www.codetoad.com/javascript_get_selected_text.asp A: Introduction to Range has some details on how different browsers give you access to the text selection. My experience is that working with these different APIs directly is quite clumsy so if wrapSelection works for you I'd go with that. A: The behaviour of individual browsers with regard to selection is outlined here. A: This code works in Safari, IE and Firefox - hope it's of some help var str = (window.getSelection) ? window.getSelection() : document.selection.createRange(); str = str.text || str; str = str + ''; // the best way to make object a string...
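For what it's worth, here is a minimal sketch of the original goal -- popping a small toolbar up over the selection -- in a modern browser. The mouseup wiring and the annotation-toolbar element are assumptions for illustration, not part of any answer above:

// Sketch: show a toolbar near the current selection on mouseup.
// The #annotation-toolbar element is hypothetical; style and position it as needed.
function getSelectedText() {
  return window.getSelection ? window.getSelection().toString() : '';
}

document.addEventListener('mouseup', function () {
  var toolbar = document.getElementById('annotation-toolbar');
  var text = getSelectedText();
  if (!text) {                                   // nothing selected: hide the toolbar
    toolbar.style.display = 'none';
    return;
  }
  var rect = window.getSelection().getRangeAt(0).getBoundingClientRect();
  toolbar.style.left = (rect.left + window.scrollX) + 'px';   // place the toolbar just below the selection
  toolbar.style.top = (rect.bottom + window.scrollY) + 'px';
  toolbar.style.display = 'block';               // the toolbar can now annotate `text`
});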
{ "language": "en", "url": "https://stackoverflow.com/questions/10478", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Unit test execution speed (how many tests per second?) What kind of execution rate do you aim for with your unit tests (# test per second)? How long is too long for an individual unit test? I'd be interested in knowing if people have any specific thresholds for determining whether their tests are too slow, or is it just when the friction of a long running test suite gets the better of you? Finally, when you do decide the tests need to run faster, what techniques do you use to speed up your tests? Note: integration tests are obviously a different matter again. We are strictly talking unit tests that need to be run as frequently as possible. Response roundup: Thanks for the great responses so far. Most advice seems to be don't worry about the speed -- concentrate on quality and just selectively run them if they are too slow. Answers with specific numbers have included aiming for <10ms up to 0.5 and 1 second per test, or just keeping the entire suite of commonly run tests under 10 seconds. Not sure whether it's right to mark one as an "accepted answer" when they're all helpful :) A: The goal is 100s of tests per second. The way you get there is by following Michael Feather's rules of unit tests. An important point that came up in a past CITCON discussion is that if your tests aren't this fast it is quite likely that you aren't getting the design benefits of unit testing. A: If we're talking strictly unit tests, I'd aim more for completeness than speed. If the run time starts to cause friction, separate the test into different project/classes etc., and only run the tests related to what you're working on. Let the Integration server run all the tests on checkin. A: All unit tests should run in under a second (that is all unit tests combined should run in 1 second). Now I'm sure this has practical limits, but I've had a project with a 1000 tests that run this fast on a laptop. You'll really want this speed so your developers don't dread refactoring some core part of the model (i.e., Lemme go get some coffee while I run these tests...10 minutes later he comes back). This requirement also forces you to design your application correctly. It means that your domain model is pure and contains zero references to any type of persistance (File I/O, Database, etc). Unit tests are all about testing those business relatonships. Now that doesn't mean you ignore testing your database or persistence. But these issues are now isolated behind repositories that can be separately tested with integration tests that is located in a separate project. You run your unit tests constantly when writing domain code and then run your integration tests once on check in. A: I tend to focus more on readability of my tests than speed. However, I still try to make them reasonably fast. I think if they run on the order of milliseconds, you are fine. If they run a second or more per test... then you might be doing something that should be optimized. Slow tests only become a problem as the system matures and causes the build to take hours, at which point you are more likely running into an issue of a lot of kind of slow tests rather than one or 2 tests that you can optimize easily... thus you should probably pay attention RIGHT AWAY if you see lots of tests running hundreds of milliseconds each (or worse, seconds each), rather than wait till it gets to the hundreds of tests taking that long point (at which point it is going to be really hard to solve the problem). 
Even so, it will only reduce the time between when your automated build issues errors... which is ok if it is an hour later (or even a few hours later), I think. The problem is running them before you check in, but this can be avoided by selecting a small subset of tests to run that are related to what you are working on. Just make sure to fix the build if you check in code that breaks tests you didn't run! A: We're currently at 270 tests in around 3.something seconds. There are probably around 8 tests that perform file IO. These are run automatically upon a successful build of our libraries on every engineers machine. We have more extensive (and time consuming) smoke-testing that is done by the build machine every night, or can be started manually on an engineers machine. As you can see we haven't yet reached the problem of tests being too time consuming. 10 seconds for me is the point where it starts to become intrusive, when we start to approach that it'll be something we'll take a look at. We'll likely move the lower level libraries, which are more robust since they change infrequently and have few dependencies, into the nightly builds, or a configuration where they're only executed by the build machine. If you find it's taking more than a few seconds to run a hundred or so tests you may need to examine what you are classifying as a unit test and whether it would be better treated as a smoke test. your mileage will obviously be highly variable depending on your area of development. A: Data Point -- Python Regression Tests Here are the numbers on my laptop for running "make test" for Python 2.5.2: * *number of tests: 3851 (approx) *execution time: 9 min, 6 sec *execution rate: 7 tests / sec A: One of the most important rules about unit tests is they should run fast. How long is too long for an individual unit test? Developers should be able to run the whole suite of unit tests in seconds, and definitely not in minutes and minutes. Developers should be able to quickly run them after changing the code in anyway. If it takes too long, they won't bother running them and you lose one of the main benefits of the tests. What kind of execution rate do you aim for with your unit tests (# test per second)? You should aim for each test to run in an order of milliseconds, anything over 1 second is probably testing too much. We currently have about 800 tests that run in under 30 seconds, about 27 tests per second. This includes the time to launch the mobile emulator needed to run them. Most of them each take 0-5ms (if I remember correctly). We have one or two that take about 3 seconds, which are probably candidates for checking, but the important thing is the whole test suite doesn't take so long that it puts off developers running it, and doesn't significantly slow down our continuous integration build. We also have a configurable timeout limit set to 5 seconds -- anything taking longer will fail. A: I judge my unit tests on a per test basis, not by by # of tests per second. The rate I aim for is 500ms or less. If it is above that, I will look into the test to find out why it is taking so long. When I think a test is to slow, it usually means that it is doing too much. Therefore, just refactoring the test by splitting it up into more tests usually does the trick. The other times that I have noticed my tests running slow is when the test shows a bottleneck in my code, then a refactoring of the code is in order. A: How long is too long for an individual unit test? 
I'd say it depends on the compile speed. One usually executes the tests at every compile. The objective of unit testing is not to slow down, but to bring a message "nothing broken, go on" (or "something broke, STOP"). I do not worry about test execution speed until it starts to get annoying. The danger is to stop running the tests because they're too slow. Finally, when you do decide the tests need to run faster, what techniques do you use to speed up your tests? The first thing to do is to find out why they are too slow, and whether the issue is in the unit tests or in the code under test. I'd try to break the test suite into several logical parts, running only the part that is supposedly affected by the code I changed at every compile. I'd run the other suites less often, perhaps once a day, or when in doubt I could have broken something, and at least before integrating. A: Some frameworks provide automatic execution of specific unit tests based on heuristics such as last-modified time. For Ruby and Rails, AutoTest provides much faster and more responsive execution of the tests -- when I save a Rails model app/models/foo.rb, the corresponding unit tests in test/unit/foo_test.rb get run. I don't know if anything similar exists for other platforms, but it would make sense.
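To make the recurring advice concrete -- keep the every-save suite pure and fast, and push anything that touches a database or the file system out to a separate run -- here is a small sketch using pytest. The marker name, test names and apply_discount function are made up for illustration:

# Sketch (pytest): keep slow, I/O-bound tests out of the every-save run.
import pytest

def test_discount_applies_to_total():
    # pure in-memory unit test: should finish in milliseconds
    assert apply_discount(100, 0.1) == 90        # apply_discount is hypothetical

@pytest.mark.integration                         # register this marker in pytest.ini to silence warnings
def test_order_is_persisted_to_database():
    ...                                          # talks to a real database; left to the CI server

# Fast local loop:       pytest -m "not integration"
# Find the slow tests:   pytest --durations=10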
{ "language": "en", "url": "https://stackoverflow.com/questions/10486", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: Oracle - What TNS Names file am I using? Sometimes I get Oracle connection problems because I can't figure out which tnsnames.ora file my database client is using. What's the best way to figure this out? ++happy for various platform solutions. A: Oracle provides a utility called tnsping: R:\>tnsping someconnection TNS Ping Utility for 32-bit Windows: Version 9.0.1.3.1 - Production on 27-AUG-20 08 10:38:07 Copyright (c) 1997 Oracle Corporation. All rights reserved. Used parameter files: C:\Oracle92\network\ADMIN\sqlnet.ora C:\Oracle92\network\ADMIN\tnsnames.ora TNS-03505: Failed to resolve name R:\> R:\>tnsping entpr01 TNS Ping Utility for 32-bit Windows: Version 9.0.1.3.1 - Production on 27-AUG-20 08 10:39:22 Copyright (c) 1997 Oracle Corporation. All rights reserved. Used parameter files: C:\Oracle92\network\ADMIN\sqlnet.ora C:\Oracle92\network\ADMIN\tnsnames.ora Used TNSNAMES adapter to resolve the alias Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (COMMUNITY = **) (PROTOCOL = TCP) (Host = ****) (Port = 1521))) (CONNECT_DATA = (SID = ENTPR0 1))) OK (40 msec) R:\> This should show what file you're using. The utility sits in the Oracle bin directory. A: There is another place where the TNS location is stored: If you're using Windows, open regedit and navigate to My HKEY Local Machine/Software/ORACLE/KEY_OraClient10_home1 where KEY_OraClient10_home1 is your Oracle home. If there is a string entry called TNS_ADMIN, then the value of that entry will point to the TNS file that Oracle is using on your computer. A: On my development machine I have three different versions of Oracle client software. I manage the tnsnames.ora file in one of them. In the other two, I have entered in the tnsnames.ora file: ifile=path_to_tnsnames.ora_file/tnsnames.ora This way, if for some reason the wrong tnsnames.ora file is used by a client, it will always end up at the up-to-date version. A: For Windows: Filemon from SysInternals will show you what files are being accessed. Remember to set your filters so you are not overwhelmed by the chatty file system traffic. Added: Filemon does not work with newer Windows versions, so you might have to use Process Monitor. A: Codeslave asks "Shouldn't it always be "$ORACLE_ HOME/network/admin/tnsnames.ora"? The answer is no, it isn't. Consider these two invocations of tnsping on the same machine: C:\Documents and Settings\me>D:\Oracle\10.2.0_DB\BIN\tnsping orcl TNS Ping Utility for 32-bit Windows: Version 10.2.0.4.0 - Production on 09-OCT-2 008 14:30:12 Copyright (c) 1997, 2007, Oracle. All rights reserved. Used parameter files: D:\Oracle\10.2.0_DB\network\admin\sqlnet.ora Used TNSNAMES adapter to resolve the alias Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = xxxx )(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = ORCL))) OK (40 msec) C:\Documents and Settings\me>tnsping orcl TNS Ping Utility for 32-bit Windows: Version 10.2.0.1.0 - Production on 09-OCT-2 008 14:30:21 Copyright (c) 1997, 2005, Oracle. All rights reserved. Used parameter files: D:\oracle\10.2.0_Client\network\admin\sqlnet.ora Used TNSNAMES adapter to resolve the alias Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP) (HOST = XXXX)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = ORCL))) OK (20 msec) C:\Documents and Settings\me> Note the two different parameter file locations, that are dependent on which tnsping executable you're running (and perhaps where it's being run from). 
For tnsnames-based oracle networking, using the TNS_ADMIN variable is the only way to ensure you're getting a consistent tnsnames.ora file. (NOTE: Windows-centric answer) A: For linux: $ strace sqlplus -L scott/tiger@orcl 2>&1| grep -i 'open.*tnsnames.ora' shows something like this: open("/opt/oracle/product/10.2.0/db_1/network/admin/tnsnames.ora",O_RDONLY)=7 Changing to $ strace sqlplus -L scott/tiger@orcl 2>&1| grep -i 'tnsnames.ora' will show all the file paths that are failing. A: By default, tnsnames.ora is located in the $ORACLE_HOME/network/admin directory on UNIX operating systems and in the ORACLE_HOME\network\admin directory on Windows operating systems. tnsnames.ora can also be stored in the following locations: The directory specified by the TNS_ADMIN environment variable (or registry value) On UNIX operating systems, the global configuration directory. For example, on the Solaris Operating System, this directory is /var/opt/oracle If you have multiple ORACLE_HOMES, be aware of which one you are using, as the location of the tnsnames.ora file can vary from one ORACLE_HOME to the next. For the person who mentioned the TWO_TASK environment variable, that is used to set a default database service name to connect to (which could be a database on another server). The service name you set TWO_TASK to is then looked up in the tnsnames.ora file when you connect. A: Shouldn't it always be "$ORACLE_HOME/network/admin/tnsnames.ora"? Then you can just do "echo $oracle_home" or the *nix equivalent. @Pete Holberton You are entirely correct. Which reminds me, there's another monkey wrench in the works called TWO_TASK According to http://www.orafaq.com/wiki/TNS_ADMIN TNS_ADMIN is an environment variable that points to the directory where the SQL*Net configuration files (like sqlnet.ora and tnsnames.ora) are located. A: strace sqlplus -L scott/tiger@orcl helps to find the .tnsnames.ora file in /home/oracle that gets used instead of the $ORACLE_HOME/network/admin/tnsnames.ora file. Thanks for the posting. A: Not a direct answer to your question, but I've been quite frustrated myself trying to find and update all of the tnsnames files, as I had several oracle installs: Client, BI tools, OWB, etc, each of which had its own oracle home. I ended up creating a utility called TNSNamesSync that will update all of the tnsnames in all of the oracle homes. It's under the MIT license, free to use here https://github.com/artybug/TNSNamesSync/releases The docs are here: https://github.com/artchik/TNSNamesSync/blob/master/README.md This is for Windows only, though. A: The easiest way is probably to check the PATH environment variable of the process that is connecting to the database. Most likely the tnsnames.ora file is in the first Oracle bin directory on the PATH, under ..\network\admin. The TNS_ADMIN environment variable or value in registry (for the current Oracle home) may override this. Using Filemon as suggested by others will also do the trick.
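Pulling the answers above together, a quick Unix-style way to see where tnsnames.ora is coming from (the scott/tiger@orcl connect string is just a placeholder):

# 1. Explicit override, if any
echo "TNS_ADMIN=$TNS_ADMIN"
ls -l "$TNS_ADMIN/tnsnames.ora" 2>/dev/null

# 2. Per-home default
ls -l "$ORACLE_HOME/network/admin/tnsnames.ora"

# 3. Watch which files the client actually tries to open
strace -f sqlplus -L scott/tiger@orcl 2>&1 | grep -i 'tnsnames.ora'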
{ "language": "en", "url": "https://stackoverflow.com/questions/10499", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "60" }
Q: Weird DB2 issue with DBUnit I am having a strange DB2 issue when I run DBUnit tests. My DBUnit tests are highly customized, but I don't think it is the issue. When I run the tests, I get a failure: SQLCODE: -1084, SQLSTATE: 57019 which translates to SQL1084C Shared memory segments cannot be allocated. It sounds like a weird memory issue, though here's the big strange thing. If I ssh to the test database server, then go in to db2 and do "connect to MY_DB", the tests start succeeding! This seems to have no relation to the supposed memory error that is being reported. I have 2 tests, and the first one actually succeeds, the second one is the one that fails. However, it fails in the DBUnit setup code, when it is obtaining the connection to the DB server to load my xml dataset. Any ideas what might be going on? A: Well, I think I fixed it by doing the following: db2stop force db2start At least, things seem to be working now..... A: In my case it was an expired DB/2 license. You can see your licenses by issuing db2licm -l If you have a license file you can install it by, for example: db2licm -a db2ese.lic See also
{ "language": "en", "url": "https://stackoverflow.com/questions/10506", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What do I need to run PHP applications on IIS? Having been a PHP developer on LAMP servers for quite a while, is there anything that I will need to take into consideration while preparing an application for IIS on Windows? A: Make sure you get the FastCGI extension for IIS 6.0 or IIS 7.0. It is the single most important thing you can have when running PHP under IIS. Also this article should get you set up: http://learn.iis.net/page.aspx/247/using-fastcgi-to-host-php-applications-on-iis-60/ Everything beyond this is simple, MySQL and what not. A: We just rolled out PHP 5.2.6 + FastCGI on our shared hosting platform without any problems. As long as you follow the steps outlined in the article Nick linked to then you should be just fine. My only additional piece of advice would be to forget about using the fcgiconfig.js script to modify the fcgiext.ini file, it's more of a hindrance than a help. Just edit it by hand, you also learn more about how it works. If you're installing PHP onto IIS 7 then this link should be worth a read though: Using FastCGI to Host PHP Applications on IIS 7 A: @pix0r That actually annoyed the hell out of me too and nothing came close to Apache mod_rewrite. Because they all have this overly complex XML structure. So I actually took the time and wrote my own rewriter for IIS 6.0 and IIS 7.0. Non-.NET applications only work in IIS 7.0. http://www.managedfusion.com/products/url-rewriter/ http://www.codeplex.com/urlrewriter A: One of the major sticking points I've had with IIS is the lack of Apache's mod_rewrite. There are other work-arounds and work-alikes depending on what you're doing, but just keep in mind that you'll need to change things up a bit to work with IIS if you're using mod_rewrite extensively. A: Since you're moving from LAMP (a somewhat cool acronym) to WIMP (a less cool one), you may need to mentally affirm yourself. Otherwise, I've had very little trouble with PHP on Windows. ISAPI rewrite (http://www.isapirewrite.com/) is $99 and has worked very well for me for URL rewriting. A: Why not go with Apache on Windows? A: If you're using IIS 7, keep an eye on this project: http://phpmanager.codeplex.com/.
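For IIS 7, the FastCGI setup described above ultimately comes down to a handler mapping like the following (shown here as a web.config sketch; the C:\PHP path is an assumption for wherever php-cgi.exe is installed):

<!-- Sketch: route *.php requests through the FastCGI module to php-cgi.exe (IIS 7+). -->
<configuration>
  <system.webServer>
    <handlers>
      <add name="PHP_via_FastCGI"
           path="*.php"
           verb="*"
           modules="FastCgiModule"
           scriptProcessor="C:\PHP\php-cgi.exe"
           resourceType="Either" />
    </handlers>
  </system.webServer>
</configuration>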
{ "language": "en", "url": "https://stackoverflow.com/questions/10515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Expression.Invoke in Entity Framework? The Entity Framework does not support the Expression.Invoke operator. You receive the following exception when trying to use it: "The LINQ expression node type 'Invoke' is not supported in LINQ to Entities." Has anyone got a workaround for this missing functionality? I would like to use the PredicateBuilder detailed here in an Entity Framework context. Edit 1 @marxidad - I like your suggestion, however it does baffle me somewhat. Can you give some further advice on your proposed solution? Edit 2 @marxidad - Thanks for the clarification. A: PredicateBuilder and LINQKit now support Entity Framework. Sorry, guys, for not doing this earlier! A: The Entity Framework converts LINQ expressions into Entity Command trees and within that only its canonical functions are supported. You'd have to use the command trees with canonical functions to do something like PredicateBuilder.
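For reference, a minimal sketch of how PredicateBuilder is typically used with Entity Framework once LINQKit is in the project -- the Product entity and keyword search are made up; the important part is AsExpandable(), which expands the Invoke nodes PredicateBuilder produces before the query provider ever sees them:

// Sketch assuming the LINQKit package; Product and its properties are hypothetical.
using System;
using System.Linq;
using System.Linq.Expressions;
using LinqKit;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class ProductSearch
{
    public static Expression<Func<Product, bool>> NameContainsAny(string[] keywords)
    {
        var predicate = PredicateBuilder.False<Product>();
        foreach (var keyword in keywords)
        {
            var temp = keyword;                        // capture a fresh variable per iteration
            predicate = predicate.Or(p => p.Name.Contains(temp));
        }
        return predicate;
    }

    public static IQueryable<Product> Search(IQueryable<Product> products, string[] keywords)
    {
        // AsExpandable() rewrites the Invoke nodes, so LINQ to Entities
        // never encounters ExpressionType.Invoke.
        return products.AsExpandable().Where(NameContainsAny(keywords));
    }
}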
{ "language": "en", "url": "https://stackoverflow.com/questions/10524", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29" }
Q: Using C# with OpenOffice through reflection I'm working on some code to paste into the currently active OpenOffice document directly from C#. I can't include any of the OpenOffice libraries, because we don't want to package them, so we're using reflection to get access to the OpenOffice API. My question involves using a dispatcher through reflection. I can't figure out the correct parameters to pass to it, giving me a lovely "TargetInvocationException" due to mismatched types. object objframe = GetProperty<object>(objcontroller, "frame"); if (objframe != null) { object[] paramlist = new object[2] {".uno:Paste", objframe}; InvokeMethod<object>(objdispatcher, "executeDispatch", paramlist); } How can I fix it? A: Is it just me or are your parameters the wrong way around? Also, do you have the right number of parameters? I could be missing something though, so sorry if you've already checked this stuff: The documentation says: dispatcher.executeDispatch(document, ".uno:Paste", "", 0, Array()) Which would indicate to me that you need to have your parameter list defined as object[] paramlist = new object[5] {objframe, ".uno:Paste", "", 0, null};
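Putting that suggested parameter list back into the asker's original code, the call would look roughly as follows. GetProperty and InvokeMethod are the asker's own reflection helpers, and the empty object array at the end stands in for the PropertyValue sequence the dispatcher expects (null, as in the answer above, may work as well):

// Sketch: executeDispatch(frame, url, targetFrameName, searchFlags, args),
// mirroring dispatcher.executeDispatch(document, ".uno:Paste", "", 0, Array()).
object objframe = GetProperty<object>(objcontroller, "frame");
if (objframe != null)
{
    object[] paramlist = new object[5] { objframe, ".uno:Paste", "", 0, new object[0] };
    InvokeMethod<object>(objdispatcher, "executeDispatch", paramlist);
}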
{ "language": "en", "url": "https://stackoverflow.com/questions/10531", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: What's the preferred way to connect to a postgresql database from PHP? I've been using PHP & MySQL for ages and am about to start using PostgreSQL instead. What's the preferred method? Is it via the PDO objects or is there something better? A: PDO objects are the new hotness. I'd recommend that as long as you can ensure that your target platform will always be running PHP 5.2+. There are many other database abstraction layers that support PostgreSQL that are compatible with older versions of PHP; I'd recommend ADODB. You should really be using PDO or a different abstraction layer even for your MySQL work; that way you won't have this problem again! A: Using Zend Db: require_once 'Zend/Db.php'; $DB_ADAPTER = 'Pdo_Pgsql'; $DB_CONFIG = array( 'username' => 'app_db_user', 'password' => 'xxxxxxxxx', 'host' => 'localhost', 'port' => 5432, 'dbname' => 'mydb' ); $db = Zend_Db::factory($DB_ADAPTER, $DB_CONFIG); A: I, personally, use PDO for all my database work when I have the choice. Prepared statements make my life easy, and it is seamless between database systems - handy if you have to work with one you're not used to. If you want to roll your own abstraction, or go with the procedural model, here are the Postgres functions: http://ca.php.net/manual/en/ref.pgsql.php A: There are also the pg_whatever functions, but don't use them. They use older, unmaintained database drivers. PDO is the way to go. A: I would also suggest creating an inherited PDO class or a wrapper class if you decide not to use PDO. This would provide you with a lot more flexibility in the future, i.e., calculating query execution time. A: Depending on the scale of your application, you might wish to consider the number of connections going to the backend. The consensus seems to be that PHP persistent connections and PostgreSQL don't work well together, so something like pgpool-II should be used as an intermediary.
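For completeness, a bare-bones PDO connection to PostgreSQL with a prepared statement -- the host, database, credentials and users table are placeholders:

<?php
// Sketch: connect to PostgreSQL via PDO and run a prepared statement.
$dsn = 'pgsql:host=localhost;port=5432;dbname=mydb';

try {
    $db = new PDO($dsn, 'app_db_user', 'secret', array(
        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,   // throw on SQL errors
    ));

    $stmt = $db->prepare('SELECT id, name FROM users WHERE id = :id');
    $stmt->execute(array(':id' => 42));
    $row = $stmt->fetch(PDO::FETCH_ASSOC);
} catch (PDOException $e) {
    echo 'Connection failed: ' . $e->getMessage();
}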
{ "language": "en", "url": "https://stackoverflow.com/questions/10532", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Parsing attributes with regex in Perl Here's a problem I ran into recently. I have attributes strings of the form "x=1 and y=abc and z=c4g and ..." Some attributes have numeric values, some have alpha values, some have mixed, some have dates, etc. Every string is supposed to have "x=someval and y=anotherval" at the beginning, but some don't. I have three things I need to do. * *Validate the strings to be certain that they have x and y. *Actually parse the values for x and y. *Get the rest of the string. Given the example at the top, this would result in the following variables: $x = 1; $y = "abc"; $remainder = "z=c4g and ..." My question is: Is there a (reasonably) simple way to parse these and validate with a single regular expression? i.e.: if ($str =~ /someexpression/) { $x = $1; $y = $2; $remainder = $3; } Note that the string may consist of only x and y attributes. This is a valid string. I'll post my solution as an answer, but it doesn't meet my single-regex preference. A: Assuming you also want to do something with the other name=value pairs this is how I would do it ( using Perl version 5.10 ): use 5.10.0; use strict; use warnings; my %hash; while( $string =~ m{ (?: ^ | \G ) # start of string or previous match \s* (?<key> \w+ ) # word characters = (?<value> \S+ ) # non spaces \s* # get to the start of the next match (?: and )? }xgi ){ $hash{$+{key}} = $+{value}; } # to make sure that x & y exist die unless exists $hash{x} and exists $hash{y}; On older Perls ( at least Perl 5.6 ); use strict; use warnings; my %hash; while( $string =~ m{ (?: ^ | \G ) # start of string or previous match \s* ( \w+ ) = ( \S+ ) \s* # get to the start of the next match (?: and )? }xgi ){ $hash{$1} = $2; } # to make sure that x & y exist die unless exists $hash{x} and exists $hash{y}; These have the added benefit of continuing to work if you need to work with more data. A: I'm not the best at regular expressions, but this seems pretty close to what you're looking for: /x=(.+) and y=([^ ]+)( and (.*))?/ Except you use $1, $2, and $4. In use: my @strs = ("x=1 and y=abc and z=c4g and w=v4l", "x=yes and y=no", "z=nox and w=noy"); foreach (@strs) { if ($_ =~ /x=(.+) and y=([^ ]+)( and (.*))?/) { $x = $1; $y = $2; $remainder = $4; print "x: $x; y: $y; remainder: $remainder\n"; } else { print "Failed.\n"; } } Output: x: 1; y: abc; remainder: z=c4g and w=v4l x: yes; y: no; remainder: Failed. This of course leaves out plenty of error checking, and I don't know everything about your inputs, but this seems to work. A: As a fairly simple modification to Rudd's version, /^x=(.+) and y=([^ ]+)(?: and (.*))?/ will allow you to use $1, $2 and $3 (the ?: makes it a noncapturing group), and will ensure that the string starts with "x=" rather than allowing a "not_x=" to match If you have better knowledge of what the x and y values will be, this should be used to tighten the regex further: my @strs = ("x=1 and y=abc and z=c4g and w=v4l", "x=yes and y=no", "z=nox and w=noy", "not-x=nox and y=present", "x=yes and w='there is no and y=something arg here'"); foreach (@strs) { if ($_ =~ /^x=(.+) and y=([^ ]+)(?: and (.*))?/) { $x = $1; $y = $2; $remainder = $3; print "x: {$x}; y: {$y}; remainder: {$remainder}\n"; } else { print "$_ Failed.\n"; } } Output: x: {1}; y: {abc}; remainder: {z=c4g and w=v4l} x: {yes}; y: {no}; remainder: {} z=nox and w=noy Failed. not-x=nox and y=present Failed. 
x: {yes and w='there is no}; y: {something}; remainder: {} Note that the missing part of the last test is due to the current version of the y test requiring no spaces, if the x test had the same restriction that string would have failed. A: Rudd and Cebjyre have gotten you most of the way there but they both have certain problems: Rudd suggested: /x=(.+) and y=([^ ]+)( and (.*))?/ Cebjyre modified it to: /^x=(.+) and y=([^ ]+)(?: and (.*))?/ The second version is better because it will not confuse "not_x=foo" with "x=foo" but will accept things such as "x=foo z=bar y=baz" and set $1 = "foo z=bar" which is undesirable. This is probably what you are looking for: /^x=(\w+) and y=(\w+)(?: and (.*))?/ This disallows anything between the x= and y= options, places and allows and optional " and..." which will be in $3 A: Here's basically what I did to solve this: ($x_str, $y_str, $remainder) = split(/ and /, $str, 3); if ($x_str !~ /x=(.*)/) { # error } $x = $1; if ($y_str !~ /y=(.*)/) { # error } $y = $1; I've omitted some additional validation and error handling. This technique works, but it's not as concise or pretty as I would have liked. I'm hoping someone will have a better suggestion for me.
{ "language": "en", "url": "https://stackoverflow.com/questions/10533", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How can I set up an editor to work with Git on Windows? I'm trying out Git on Windows. I got to the point of trying "git commit" and I got this error: Terminal is dumb but no VISUAL nor EDITOR defined. Please supply the message using either -m or -F option. So I figured out I need to have an environment variable called EDITOR. No problem. I set it to point to Notepad. That worked, almost. The default commit message opens in Notepad. But Notepad doesn't support bare line feeds. I went out and got Notepad++, but I can't figure out how to get Notepad++ set up as the %EDITOR% in such a way that it works with Git as expected. I'm not married to Notepad++. At this point I don't mind what editor I use. I just want to be able to type commit messages in an editor rather than the command line (with -m). Those of you using Git on Windows: What tool do you use to edit your commit messages, and what did you have to do to make it work? A: Thanks to the Stack Overflow community ... and a little research I was able to get my favorite editor, EditPad Pro, to work as the core editor with msysgit 1.7.5.GIT and TortoiseGit v1.7.3.0 over Windows XP SP3... Following the advice above, I added the path to a Bash script for the code editor... git config --global core.editor c:/msysgit/cmd/epp.sh However, after several failed attempts at the above mentioned solutions ... I was finally able to get this working. Per EditPad Pro's documentation, adding the '/newinstance' flag would allow the shell to wait for the editor input... The '/newinstance' flag was the key in my case... #!/bin/sh "C:/Program Files/JGsoft/EditPadPro6/EditPadPro.exe" //newinstance "$*" A: Edit .gitconfig file in c:\Users\YourUser folder and add: [core] editor = 'C:\\Program files\\path\\to\\editor.exe' A: This is the one symptom of greater issues. Notably that you have something setting TERM=dumb. Other things that don't work properly are the less command which says you don't have a fully functional terminal. It seems like this is most commonly caused by having TERM set to something in your global Windows environment variables. For me, the issue came up when I installed Strawberry Perl some information about this is on the msysgit bug for this problem as well as several solutions. The first solution is to fix it in your ~/.bashrc by adding: export TERM=msys You can do this from the Git Bash prompt like so: echo "export TERM=msys" >> ~/.bashrc The other solution, which ultimately is what I did because I don't care about Strawberry Perl's reasons for adding TERM=dumb to my environment settings, is to go and remove the TERM=dumb as directed in this comment on the msysgit bug report. Control Panel/System/Advanced/Environment Variables... (or similar, depending on your version of Windows) is where sticky environment variables are set on Windows. By default, TERM is not set. If TERM is set in there, then you (or one of the programs you have installed - e.g. Strawberry Perl) has set it. Delete that setting, and you should be fine. Similarly if you use Strawberry Perl and care about the CPAN client or something like that, you can leave the TERM=dumb alone and use unset TERM in your ~/.bashrc file which will have a similar effect to setting an explicit term as above. Of course, all the other solutions are correct in that you can use git config --global core.editor $MYFAVORITEEDITOR to make sure that Git uses your favorite editor when it needs to launch one for you. 
A: Update September 2015 (6 years later) The last release of git-for-Windows (2.5.3) now includes: By configuring git config core.editor notepad, users can now use notepad.exe as their default editor. Configuring git config format.commitMessageColumns 72 will be picked up by the notepad wrapper and line-wrap the commit message after the user edits it. See commit 69b301b by Johannes Schindelin (dscho). And Git 2.16 (Q1 2018) will show a message to tell the user that it is waiting for the user to finish editing when spawning an editor, in case the editor opens to a hidden window or somewhere obscure and the user gets lost. See commit abfb04d (07 Dec 2017), and commit a64f213 (29 Nov 2017) by Lars Schneider (larsxschneider). Helped-by: Junio C Hamano (gitster). (Merged by Junio C Hamano -- gitster -- in commit 0c69a13, 19 Dec 2017) launch_editor(): indicate that Git waits for user input When a graphical GIT_EDITOR is spawned by a Git command that opens and waits for user input (e.g. "git rebase -i"), then the editor window might be obscured by other windows. The user might be left staring at the original Git terminal window without even realizing that s/he needs to interact with another window before Git can proceed. To this user Git appears hanging. Print a message that Git is waiting for editor input in the original terminal and get rid of it when the editor returns, if the terminal supports erasing the last line Original answer I just tested it with git version 1.6.2.msysgit.0.186.gf7512 and Notepad++5.3.1 I prefer to not have to set an EDITOR variable, so I tried: git config --global core.editor "\"c:\Program Files\Notepad++\notepad++.exe\"" # or git config --global core.editor "\"c:\Program Files\Notepad++\notepad++.exe\" %*" That always gives: C:\prog\git>git config --global --edit "c:\Program Files\Notepad++\notepad++.exe" %*: c:\Program Files\Notepad++\notepad++.exe: command not found error: There was a problem with the editor '"c:\Program Files\Notepad++\notepad++.exe" %*'. If I define a npp.bat including: "c:\Program Files\Notepad++\notepad++.exe" %* and I type: C:\prog\git>git config --global core.editor C:\prog\git\npp.bat It just works from the DOS session, but not from the git shell. (not that with the core.editor configuration mechanism, a script with "start /WAIT..." in it would not work, but only open a new DOS window) Bennett's answer mentions the possibility to avoid adding a script, but to reference directly the program itself between simple quotes. Note the direction of the slashes! Use / NOT \ to separate folders in the path name! git config --global core.editor \ "'C:/Program Files/Notepad++/notepad++.exe' -multiInst -notabbar -nosession -noPlugin" Or if you are in a 64 bit system: git config --global core.editor \ "'C:/Program Files (x86)/Notepad++/notepad++.exe' -multiInst -notabbar -nosession -noPlugin" But I prefer using a script (see below): that way I can play with different paths or different options without having to register again a git config. The actual solution (with a script) was to realize that: what you refer to in the config file is actually a shell (/bin/sh) script, not a DOS script. 
So what does work is: C:\prog\git>git config --global core.editor C:/prog/git/npp.bat with C:/prog/git/npp.bat: #!/bin/sh "c:/Program Files/Notepad++/notepad++.exe" -multiInst "$*" or #!/bin/sh "c:/Program Files/Notepad++/notepad++.exe" -multiInst -notabbar -nosession -noPlugin "$*" With that setting, I can do 'git config --global --edit' from DOS or Git Shell, or I can do 'git rebase -i ...' from DOS or Git Shell. Bot commands will trigger a new instance of notepad++ (hence the -multiInst' option), and wait for that instance to be closed before going on. Note that I use only '/', not \'. And I installed msysgit using option 2. (Add the git\bin directory to the PATH environment variable, but without overriding some built-in windows tools) The fact that the notepad++ wrapper is called .bat is not important. It would be better to name it 'npp.sh' and to put it in the [git]\cmd directory though (or in any directory referenced by your PATH environment variable). See also: * *How do I view ‘git diff’ output with visual diff program? for the general theory *How do I setup DiffMerge with msysgit / gitk? for another example of external tool (DiffMerge, and WinMerge) lightfire228 adds in the comments: For anyone having an issue where N++ just opens a blank file, and git doesn't take your commit message, see "Aborting commit due to empty message": change your .bat or .sh file to say: "<path-to-n++" .git/COMMIT_EDITMSG -<arguments>. That will tell notepad++ to open the temp commit file, rather than a blank new one. A: Vim/gVim works well for me. >echo %EDITOR% c:\Vim\Vim71\vim.exe A: I needed to do both of the following to get Git to launch Notepad++ in Windows: * *Add the following to .gitconfig: editor = 'C:/Program Files/Notepad++/notepad++.exe' -multiInst -notabbar -nosession -noPlugin *Modify the shortcut to launch the Git Bash shell to run as administrator, and then use that to launch the Git Bash shell. I was guessing that the context menu entry "Git Bash here" was not launching Notepad++ with the required permissions. After doing both of the above, it worked. A: Anyway, I've just been playing around with this and found the following to work nicely for me: git config --global core.editor "'C:/Program Files/TextPad 5/TextPad.exe' -m" I don't think CMD likes single-quotes so you must use double quotes "to specify the space embedded string argument". Cygwin (which I believe is the underlying platform for Git's Bash) on the other hand likes both ' and "; you can specify a CMD-like paths, using / instead of \, so long as the string is quoted i.e. in this instance, using single-quotes. The -m overrides/indicates the use of multiple editors and there is no need for a %* tacked on the end. A: I had PortableGit 1.6 working fine, but after upgrading to the PortableGit 1.7 Windows release, I had problems. Some of the Git commands opens up the Notepad++.exe fine, but some don't, especially Git rebase behaves differently. The problem is some commands run the Windows cmd process and some use the Unix cmd process. I want to give startup attributes to Notepad++ editor, so I need to have a customized script. My solution is this. * *Create a script to run an appropriate text editor. The script looks weird, but it handles both the Windows and Unix variation. 
c:/PortableGit/cmd/git-editor.bat #!/bin/sh # Open a new instance function doUnix() { "c:\program files\notepad++\notepad++.exe" -multiInst -nosession -notabbar $* exit } doUnix $* :WINCALL "c:\program files\notepad++\notepad++.exe" -multiInst -nosession -notabbar %* *Set the global core.editor variable The script was saved to git/cmd folder, so it's already in a gitconsole path. This is mandatory as a full path may not work properly. git config --global core.editor "git-editor.bat" Now I can run the git commit -a and git rebase -i master commands. Give it a try if you have problems in the Git Windows tool. A: I use Git on multiple platforms, and I like to use the same Git settings on all of them. (In fact, I have all my configuration files under release control with Git, and put a Git repository clone on each machine.) The solution I came up with is this: I set my editor to giteditor git config --global core.editor giteditor Then I create a symbolic link called giteditor which is in my PATH. (I have a personal bin directory, but anywhere in the PATH works.) That link points to my current editor of choice. On different machines and different platforms, I use different editors, so this means that I don't have to change my universal Git configuration (.gitconfig), just the link that giteditor points to. Symbolic links are handled by every operating system I know of, though they may use different commands. For Linux, you use ln -s. For Windows, you use the cmd built-in mklink. They have different syntaxes (which you should look up), but it all works the same way, really. A: Based on VonC's suggestion, this worked for me (was driving me crazy): git config --global core.editor "'C:/Program Files (x86)/Sublime Text 3/subl.exe' -wait" Omitting -wait can cause problems, especially if you are working with Gerrit and change ids that have to be manually copied to the bottom of your commit message. A: Building on Darren's answer, to use Notepad++ you can simply do this (all on one line): git config --global core.editor "'C:/Program Files/Notepad++/notepad++.exe' -multiInst -notabbar -nosession -noPlugin" Obviously, the C:/Program Files/Notepad++/notepad++.exe part should be the path to the Notepad++ executable on your system. For example, it might be C:/Program Files (x86)/Notepad++/notepad++.exe. It works like a charm for me. Article How to set Notepad++ as the default Git editor for commits instead of Vim explains parameters of the command. A: I use Cygwin on Windows, so I use: export EDITOR="emacs -nw" The -nw is for no-windows, i.e. tell Emacs not to try and use X Window. The Emacs keybindings don't work for me from a Windows shell, so I would only use this from a Cygwin shell... (rxvt is recommended.) A: This is my setup to use Geany as an editor for Git: git config --global core.editor C:/path/to/geany.bat with the following content in geany.bat: #!/bin/sh "C:\Program Files\Geany\bin\Geany.exe" --new-instance "$*" It works in both a DOS console and msysgit. A: Say you want to configure VsCode to be your editor. Do the following: Add the following lines to your .gitconfig file: The default location of .gitconfig file is C:\Users\USER_NAME\.gitconfig [core] editor = code -w -n [diff] tool = vscode [difftool "vscode"] cmd = code -w -n --diff $LOCAL $REMOTE [merge] tool = vscode [mergetool "vscode"] cmd = code -w -n $MERGED NOTE: * *-w is mandatory, and tells git to wait for vscode to load. *-n is optional, and tells git to open vscode in a new-window. 
In case you want to configure a custom path to the editor in Windows: You need to replace the word code with Path to '.exe' of VsCode. For example: [core] editor = "'C:/Users/Tal/AppData/Local/Programs/Microsoft VS Code/Code.exe'" -w -n [diff] tool = vscode [difftool "vscode"] cmd = "'C:/Users/Tal/AppData/Local/Programs/Microsoft VS Code/Code.exe'" -w -n --diff $LOCAL $REMOTE [merge] tool = vscode [mergetool "vscode"] cmd = "'C:/Users/Tal/AppData/Local/Programs/Microsoft VS Code/Code.exe'" -w -n $MERGED Note: * *You need to surround the path with single-quotes ''. *The slashes in the path should be forward-slashes /. Or another example: [core] editor = \"C:\\Users\\Tal\\AppData\\Local\\Programs\\Microsoft VS Code\\Code.exe\" -w -n [diff] tool = vscode [difftool "vscode"] cmd = \"C:\\Users\\Tal\\AppData\\Local\\Programs\\Microsoft VS Code\\Code.exe\" -w -n --diff $LOCAL $REMOTE [merge] tool = vscode [mergetool "vscode"] cmd = \"C:\\Users\\Tal\\AppData\\Local\\Programs\\Microsoft VS Code\\Code.exe\" -w -n $MERGED UPDATE: VsCode supports now "3-way merges"! The update was done in versions 1.69.0 and 1.70.0. So now you can enable the VsCode "mergetool" to view 3-way merges. To do so, you need to update the line: [mergetool "vscode"] cmd = code -w -n $MERGED with the new line: [mergetool "vscode"] cmd = code -w -n --merge $REMOTE $LOCAL $BASE $MERGED A: Edit: After updating to Vim 7.3, I've come to the conclusion that the cleanest and easiest way to do this is: * *Add Vim's main folder to your path (right click on My Computer → Properties → Advanced → Environment Variables) *Run this: git config --global core.editor "gvim --nofork '%*'" If you do it this way, then I am fairly sure it will work with Cygwin as well. Original answer: Even with a couple of Vim-related answers, I was having trouble getting this to work with gVim under Windows (while not using a batch file or %EDITOR% or Cygwin). What I eventually arrived at is nice and clean, and draws from a few of the solutions here: git config --global core.editor \ "'C:/Program Files/Vim/vim72/gvim.exe' --nofork '%*'" One gotcha that took me a while is these are not the Windows-style backslashes. They are normal forward slashes. A: It seems as if Git won't find the editor if there are spaces in the path. So you will have to put the batch file mentioned in Patrick's answer into a non-whitespace path. A: I prefer to use Emacs. Getting it set up can be a little tricky. * *Download Emacs and unpack it somewhere like c:\emacs. *Run c:\emacs\bin\addpm.exe. You need to right-click and "Run as Administrator" if you are using Windows Vista or above. This will put the executables in your path. *Add (server-start) somewhere in your .emacs file. See the Emacs Windows FAQ for advice on where to put your .emacs file. *git config --global core.editor emacsclientw Git will now open files within an existing Emacs process. You will have to run that existing process manually from c:\emacs\bin\runemacs.exe. A: Notepad++ works just fine, although I choose to stick with Notepad, -m, or even sometimes the built-in "edit." The problem you are encountering using Notepad++ is related to how Git is launching the editor executable. 
My solution to this is to set environment variable EDITOR to a batch file, rather than the actual editor executable, that does the following: start /WAIT "E:\PortableApps\Notepad++Portable\Notepad++Portable.exe" %* /WAIT tells the command line session to halt until the application exits, thus you will be able to edit to your heart's content while Git happily waits for you. %* passes all arguments to the batch file through to Notepad++. C:\src> echo %EDITOR% C:\tools\runeditor.bat A: For Atom you can do git config --global core.editor "atom --wait" and similar for Visual Studio Code git config --global core.editor "code --wait" which will open up an Atom or Visual Studio Code window for you to commit through, or for Sublime Text: git config --global core.editor "subl -n -w" A: WordPad! I'm happy using Vim, but since I'm trying to introduce Git to the company I wanted something that we'd all have, and found that WordPad seems to work okay (i.e. Git does wait until you're finished editing and close the window). git config core.editor '"C:\Program Files\Windows NT\Accessories\wordpad.exe"' That's using Git Bash on msysgit; I've not tried from the Windows command prompt (if that makes any difference). A: I also use Cygwin on Windows, but with gVim (as opposed to the terminal-based Vim). To make this work, I have done the following: * *Created a one-line batch file (named git_editor.bat) which contains the following: "C:/Program Files/Vim/vim72/gvim.exe" --nofork "%*" *Placed git_editor.bat on in my PATH. *Set GIT_EDITOR=git_editor.bat With this done, git commit, etc. will correctly invoke the gVim executable. NOTE 1: The --nofork option to gVim ensures that it blocks until the commit message has been written. NOTE 2: The quotes around the path to gVim is required if you have spaces in the path. NOTE 3: The quotes around "%*" are needed just in case Git passes a file path with spaces. A: I've had difficulty getting Git to cooperate with WordPad, Komodo Edit and pretty much every other editor I give it. Most open for editing, but Git clearly doesn't wait for the save/close to happen. As a crutch, I've just been doing i.e. git commit -m "Fixed the LoadAll method" to keep things moving. It tends to keep my commit messages a little shorter than they probably should be, but clearly there's some work to be done on the Windows version of Git. The GitGUI also isn't that bad. It takes a little bit of orientation, but after that, it works fairly well. A: I've just had the same problem and found a different solution. I was getting error: There was a problem with the editor 'ec' I've got VISUAL=ec, and a batch file called ec.bat on my path that contains one line: c:\emacs\emacs-23.1\bin\emacsclient.exe %* This lets me edit files from the command line with ec <filename>, and having VISUAL set means most unixy programs pick it up too. Git seems to search the path differently to my other commands though - when I looked at a git commit in Process Monitor I saw it look in every folder on the path for ec and for ec.exe, but not for ec.bat. I added another environment variable (GIT_EDITOR=ec.bat) and all was fine. A: I managed to get the environment version working by setting the EDITOR variable using quotes and /: EDITOR="c:/Program Files (x86)/Notepad++/notepad++.exe" A: I'm using GitHub for Windows which is a nice visual option. But I also prefer the command line, so to make it work when I open a repository in a Git shell I just set the following: git config --global core.editor vim which works great. 
A: This works for PowerShell and cmder 1.2 (when used with PowerShell). In file ~/.gitconfig: [core] editor = 'c:/program files/sublime text 3/subl.exe' -w How can I make Sublime Text the default editor for Git? A: I found a a beautifully simple solution posted here - although there may be a mistake in the path in which you have to copy over the "subl" file given by the author. I am running Windows 7 x64, and I had to put the "subl" file in my /Git/cmd/ folder to make it work. It works like a charm, though. A: Atom and Windows 10 * *I right clicked the Atom icon at the desktop and clicked on properties. *Copied the "Start in" location path *Looked over there with Windows Explorer and found "atom.exe". *I typed this in Git Bash: git config --global core.editor C:/Users/YOURNAMEUSER/AppData/Local/atom/app-1.7.4/atom.exe" Note: I changed all \ for / . I created a .bashrc at my home directory and used / to set my home directory and it worked, so I assumed / will be the way to go. atom-editor git git-bash windows-10 A: to add sublime git config --global core.editor "'C:\Program Files\Sublime Text 3\sublime_text.exe'" A: I solved a similar issue using GIT_EDITOR variable and notepad2 as editor. Solution 1: Set the environment variable GIT_EDITOR to C:/tools/notepad2.exe. This works nicely, but git complains if the commit message has non-ASCII characters. Solution 2: Set GIT_EDITOR to C:/tools/notepad2.exe //utf8. Notice the double slash in front of the program switch. BTW: -utf8 would have worked as well. A: When using a remotely mounted homedrive (Samba share, NFS, ...) your ~/.git folder is shared across all systems which can lead to several problems. Thus I prefer a script to determine the right editor for the right system: #!/usr/bin/perl # Detect which system I'm on and choose the right editor $unamea = `uname -a`; if($unamea =~ /mingw/i){ if($unamea =~ /devsystem/i){#Check hostname exec('C:\Program Files (x86)\Notepad++\notepad++.exe', '-multiInst', '-nosession', @ARGV); } if($unamea =~ /testsystem/i){ exec('C:\Program Files\Notepad++\notepad++.exe', '-multiInst', '-nosession', @ARGV); } } $MCEDIT=`which mcedit`; if($MCEDIT =~ /mcedit/){ exec($MCEDIT, @ARGV); } $NANO=`which nano`; if($NANO =~ /nano/){ exec($NANO, @ARGV); } die "You don't have a suitable editor!\n"; One might consider a plain shell script, but I used Perl as it is shipped with msysgit and your Unix-like systems will usually provide one as well. Putting the script in /home/username/bin, which should be added to PATH in .bashrc or .profile. Once added with git config --global core.editor giteditor.pl you have the right editor, wherever you are. A: This is working for me using Cygwin and TextPad 6 (EDIT: it is also working with TextPad 5 as long as you make the obvious change to the script), and presumably the model could be used for other editors as well: File ~/.gitconfig: [core] editor = ~/script/textpad.sh File ~/script/textpad.sh: #!/bin/bash APP_PATH=`cygpath "c:/program files (x86)/textpad 6/textpad.exe"` FILE_PATH=`cygpath -w $1` "$APP_PATH" -m "$FILE_PATH" This one-liner works as well: File ~/script/textpad.sh (option 2): "`cygpath "c:/program files (x86)/textpad 6/textpad.exe"`" -m "`cygpath -w $1`" A: This worked for me: * *Add the directory which contains the editor's executable to your PATH variable. (E.g. "C:\Program Files\Sublime Text 3\") *Reboot your computer. *Change the core.editor global Git variable to the name of the editor executable without the extension '.exe' (e.g. 
git config --global core.editor sublime_text) That's it! NOTE: Sublime Text 3 is the editor I used for this example. A: Here is a solution with Cygwin: #!/bin/dash -e if [ "$1" ] then k=$(cygpath -w "$1") elif [ "$#" != 0 ] then k= fi Notepad2 ${k+"$k"} * *If no path, pass no path *If path is empty, pass empty path *If path is not empty, convert to Windows format. Then I set these variables: export EDITOR=notepad2.sh export GIT_EDITOR='dash /usr/local/bin/notepad2.sh' * *EDITOR allows script to work with Git *GIT_EDITOR allows script to work with Hub commands Source A: Those of you using Git on Windows: What tool do you use to edit your commit messages, and what did you have to do to make it work? The tool that I find the most useful as both my git editor and my general-purpose code editor, in both Windows and Linux, is Sublime Text 3. It works really well, but requires a little bit of setup to get it just right, so I've fully documented that fully here: * *How do I make Git use the editor of my choice for commits? - Best settings for Sublime Text 3 as your Git editor (Windows & Linux instructions) Side note about my main editor: for big projects I use Eclipse as my primary editor, and Sublime Text 3 as my git editor and additional file editor when I need to make use of its advanced features such as multi-cursor mode, vertical/column selection mode, etc. For small to medium projects I use just Sublime Text 3 by itself. For setup instructions for Eclipse, see my PDF document here. A: I just use TortoiseGit straight out the box. It integrates beautifully with my PuTTY public keys. It has a perfect editor for commit messages.
{ "language": "en", "url": "https://stackoverflow.com/questions/10564", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "642" }
Q: What is the difference between Early and Late Binding? What is the difference between early and late binding? A: Similar but more detailed answer from Herbert Schildt C++ book:- Early binding refers to events that occur at compile time. In essence, early binding occurs when all information needed to call a function is known at compile time. (Put differently, early binding means that an object and a function call are bound during compilation.) Examples of early binding include normal function calls (including standard library functions), overloaded function calls, and overloaded operators. The main advantage to early binding is efficiency. Because all information necessary to call a function is determined at compile time, these types of function calls are very fast. The opposite of early binding is late binding. Late binding refers to function calls that are not resolved until run time. Virtual functions are used to achieve late binding. As you know, when access is via a base pointer or reference, the virtual function actually called is determined by the type of object pointed to by the pointer. Because in most cases this cannot be determined at compile time, the object and the function are not linked until run time. The main advantage to late binding is flexibility. Unlike early binding, late binding allows you to create programs that can respond to events occurring while the program executes without having to create a large amount of "contingency code." Keep in mind that because a function call is not resolved until run time, late binding can make for somewhat slower execution times. However today, fast computers have significantly reduced the execution times related to late binding. A: The short answer is that early (or static) binding refers to compile time binding and late (or dynamic) binding refers to runtime binding (for example when you use reflection). A: Taken directly from http://word.mvps.org/fAQs/InterDev/EarlyvsLateBinding.htm There are two ways to use Automation (or OLE Automation) to programmatically control another application. Late binding uses CreateObject to create and instance of the application object, which you can then control. For example, to create a new instance of Excel using late binding: Dim oXL As Object Set oXL = CreateObject("Excel.Application") On the other hand, to manipulate an existing instance of Excel (if Excel is already open) you would use GetObject (regardless whether you're using early or late binding): Dim oXL As Object Set oXL = GetObject(, "Excel.Application") To use early binding, you first need to set a reference in your project to the application you want to manipulate. In the VB Editor of any Office application, or in VB itself, you do this by selecting Tools + References, and selecting the application you want from the list (e.g. “Microsoft Excel 8.0 Object Library”). To create a new instance of Excel using early binding: Dim oXL As Excel.Application Set oXL = New Excel.Application In either case, incidentally, you can first try to get an existing instance of Excel, and if that returns an error, you can create a new instance in your error handler. A: In interpreted languages, the difference is a little more subtle. Ruby: # early binding: def create_a_foo(*args) Foo.new(*args) end my_foo = create_a_foo # late binding: def create_something(klass, *args) klass.new(*args) end my_foo = create_something(Foo) Because Ruby is (generally) not compiled, there isn't a compiler to do the nifty up-front stuff. 
The growth of JRuby means that more Ruby is compiled these days, though, making it act more like Java, above. The issue with IDEs still stands: a platform like Eclipse can look up class definitions if you hard-code them, but cannot if you leave them up to the caller. Inversion-of-control is not terribly popular in Ruby, probably because of its extreme runtime flexibility, but Rails makes great use of late binding to reduce the amount of configuration necessary to get your application going. A: In compiled languages, the difference is stark. Java: //early binding: public Foo create_a_foo(Object... args) { return new Foo(args); } Foo my_foo = create_a_foo(); //late binding (via reflection; argument handling omitted for brevity): public Object create_something(Class klass, Object... args) throws Exception { return klass.newInstance(); } Object my_foo = create_something(Foo.class); In the first example, the compiler can do all sorts of neat stuff at compile time. In the second, you just have to hope that whoever uses the method does so responsibly. (Of course, newer JVMs support the Class<? extends Foo> klass structure, which can greatly reduce this risk.) Another benefit is that IDEs can hotlink to the class definition, since it's declared right there in the method. The call to create_something(Foo.class) might be very far from the method definition, and if you're looking at the method definition, it might be nice to see the implementation. The major advantage of late binding is that it makes things like inversion-of-control easier, as well as certain other uses of polymorphism and duck-typing (if your language supports such things). A: public class child { public void method1() { System.out.println("child1"); } public void method2() { System.out.println("child2"); } } public class teenager extends child { public void method3() { System.out.println("teenager3"); } } public class adult extends teenager { public void method1() { System.out.println("adult1"); super.method1(); } } //In java public static void main(String[] args) { child var = new adult(); ((teenager) var).method1(); } This will print out adult1 child1 In early binding the compiler only knows about the methods declared on child and teenager, but in late binding (at runtime) the JVM checks which of those methods the actual object overrides. Hence method1 (resolved against the reference type at compile time -- early binding) is dispatched to adult's method1 at runtime (late binding), which in turn calls child's method1 via super, since teenager does not override method1. Note that if child did not declare a method1, the call in main would not compile. A: Compile-time polymorphism -- also called overloading, early binding or static binding -- is when the same method name has different behaviors, implemented by providing multiple prototypes (signatures) of that method. Early binding is resolved when the program is compiled. With late binding, the object's type is only resolved while the program runs; this is also called dynamic binding, overriding or runtime polymorphism.
A: The easiest example in java: Early (static or overloading) binding: public class Duck { public static void quack(){ System.out.println("Quack"); } } public class RubberDuck extends Duck { public static void quack(){ System.out.println("Piiiiiiiiii"); } } public class EarlyTest { public static void main(String[] args) { Duck duck = new Duck(); Duck rubberduck = new RubberDuck(); duck.quack(); rubberduck.quack(); //early binding - compile time } } Result is: Quack Quack while for Late (dynamic or overriding) binding: public class Duck { public void quack(){ System.out.println("Quack"); } } public class RubberDuck extends Duck { public void quack(){ System.out.println("Piiiiiiiiii"); } } public class LateTest { public static void main(String[] args){ Duck duck = new Duck(); Duck rubberduck = new RubberDuck(); duck.quack(); rubberduck.quack(); //late binding - runtime } } result is: Quack Piiiiiiiiii Early binding happens in compile time, while late binding during runtime.
{ "language": "en", "url": "https://stackoverflow.com/questions/10580", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "101" }
Q: Small modification to an XML document using StAX I'm currently trying to read in an XML file, make some minor changes (alter the value of some attributes), and write it back out again. I had intended to use a StAX parser (javax.xml.stream.XMLStreamReader) to read in each event, see if it was one I wanted to change, and then pass it straight on to the StAX writer (javax.xml.stream.XMLStreamWriter) if no changes were required. Unfortunately, that doesn't look to be so simple - The writer has no way to take an event type and a parser object, only methods like writeAttribute and writeStartElement. Obviously I could write a big switch statement with a case for every possible type of element which can occur in an XML document, and just write it back out again, but it seems like a lot of trouble for something which seems like it should be simple. Is there something I'm missing that makes it easy to write out a very similar XML document to the one you read in with StAX? A: After a bit of mucking around, the answer seems to be to use the Event reader/writer versions rather than the Stream versions. (i.e. javax.xml.stream.XMLEventReader and javax.xml.stream.XMLEventWriter) See also http://www.devx.com/tips/Tip/37795, which is what finally got me moving. A: StAX works pretty well and is very fast. I used it in a project to parse XML files which are up to 20MB. I don't have a thorough analysis, but it was definitely faster than SAX. As for your question: The difference between streaming and event-handling, AFAIK, is control. With the streaming API you can walk through your document step by step and get the contents you want. Whereas with the event-based API you only handle the events you are interested in. A: I know this is a rather old question, but if anyone else is looking for something like this, there is another alternative: Woodstox Stax2 extension API has method: XMLStreamWriter2.copyEventFromReader(XMLStreamReader2 r, boolean preserveEventData) which copies the currently pointed-to event from stream reader using stream writer. This is not only simple but very efficient. I have used it for similar modifications with success. (how to get XMLStreamWriter2 etc? All Woodstox-provided instances implement these extended versions -- plus there are wrappers in case someone wants to use "basic" Stax variants, as well)
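For anyone wanting to see the event reader/writer approach in code, here is a rough sketch using javax.xml.stream.XMLEventReader and XMLEventWriter; the element name ("item"), attribute name ("price"), new value and file names are made up for illustration, and namespace handling is kept minimal:

import java.io.FileReader;
import java.io.FileWriter;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import javax.xml.stream.XMLEventFactory;
import javax.xml.stream.XMLEventReader;
import javax.xml.stream.XMLEventWriter;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.events.Attribute;
import javax.xml.stream.events.StartElement;
import javax.xml.stream.events.XMLEvent;

public class TweakAttributes {
    public static void main(String[] args) throws Exception {
        XMLEventFactory ef = XMLEventFactory.newInstance();
        XMLEventReader reader =
                XMLInputFactory.newInstance().createXMLEventReader(new FileReader("in.xml"));
        XMLEventWriter writer =
                XMLOutputFactory.newInstance().createXMLEventWriter(new FileWriter("out.xml"));
        while (reader.hasNext()) {
            XMLEvent event = reader.nextEvent();
            // Only rebuild the start elements we care about; everything else passes through untouched
            if (event.isStartElement()
                    && "item".equals(event.asStartElement().getName().getLocalPart())) {
                StartElement start = event.asStartElement();
                List<Attribute> attrs = new ArrayList<Attribute>();
                for (Iterator<?> it = start.getAttributes(); it.hasNext();) {
                    Attribute a = (Attribute) it.next();
                    if ("price".equals(a.getName().getLocalPart())) {
                        attrs.add(ef.createAttribute(a.getName(), "0.00")); // the altered value
                    } else {
                        attrs.add(a);
                    }
                }
                event = ef.createStartElement(start.getName(), attrs.iterator(), start.getNamespaces());
            }
            writer.add(event);
        }
        writer.close();
        reader.close();
    }
}

Everything that is not the element of interest is passed straight through with writer.add(event), which is exactly the "no big switch statement" behaviour the question was after.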
{ "language": "en", "url": "https://stackoverflow.com/questions/10586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How do I write those cool command line code generators (rails, sproutcore, webgen) I've grokked the code for all the aforementioned apps and I still can't find a straightforward way to create a static directory structure from a single command. A: Check out rubigen. You can also view a presentation by dr nic. A: So what you want is to able to issue a command that will generate an entire directory tree? cp -r <template> <destination> Or am I misunderstanding? If you want to generate a consistent directory structure, your best bet is to simply copy it from a template. Fast, easy, done.
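If you would rather roll your own than adopt rubigen, the heart of such a generator is little more than a list of directories plus FileUtils; a rough Ruby sketch (the skeleton layout and file names are only examples):

#!/usr/bin/env ruby
require 'fileutils'

# Usage: ruby generate.rb my_new_project
project = ARGV.fetch(0) { abort "usage: generate.rb PROJECT_NAME" }

# The static skeleton every project should start with (example layout only)
%w[app/models app/views app/controllers config lib test].each do |dir|
  FileUtils.mkdir_p(File.join(project, dir))
end

# Drop a stub file in so empty directories survive source control
File.open(File.join(project, 'config', 'settings.yml'), 'w') do |f|
  f.puts '# project settings go here'
end

puts "Created skeleton for #{project}"

From there you can layer ERB templates over the stub files, which is roughly what the bigger generators do.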
{ "language": "en", "url": "https://stackoverflow.com/questions/10595", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Web page field validation I need to validate a date/time field on a webpage but want it to do it without reloading the page and would like 'instant' feedback for the users. What's the best/easiest solution. BTW: easiest scores 65% of total points Edit: What if best was 65% of total points? A: If you would like to use JavaScript then it has built in date validation functions. However, if you do not want to go the JavaScript route, you could change the UI to dropdown controls which would limit the users ability to enter invalid data. You would still need to check server side to ensure nobody submits Feb 30th. A: Check out this javascript date validation function. It uses javascript, regular expressions and the 'onblur' event of a text input. A: @David H. Aust Using onblur for validation is problematic, because some folks use the enter key, not the mouse, to submit a form. Using onblur and the form's onsubmit event in conjunction could be a better solution. Back when I did JS validation for forms a lot more, I would run against keyup events. This gave the user instant feedback on whether or not their entry was correct. You can (and I did) also put checks in place so that the user doesn't receive an "incorrect" message until they've left the field (since you shouldn't tell them they're incorrect if they aren't done yet). A: I would recommend using drop-downs for dates, as indicated above. I can't really think of any reason not to--you want the user to choose from pre-defined data, not give you something unique that you can't anticipate. You can avoid February 30 with a little bit of Javascript (make the days field populate dynamically based on the month). A: @Brian Warshaw That is a really good point you make about not forgetting the users who navigate via the keyboard (uh, me). Thanks for bringing our attention to that. A: A simple javascript method that reads what's in the input field on submit and validates it. If it's not valid, return false so that the form is not submitted to the server. ... onSubmit="return validateForm();" ... Make sure you validate on the server side too, it's easy to bypass javascript validation. A: If you're using ASP.NET, it has validator controls that you can point to textboxes which you can then use to validate proper date/time formats. A: There are a couple of date widgets available out in the aether. Then you can allow only valid input. A: Looks like there's a great video about the ASP.NET AJAX Control Toolkit provides the MaskedEdit control and the MaskedEditValidator control that works great. Not easy for beginners but VERY good and instant feedback. Thanks for all the answers though! asp.net Unfortunately I can't accept this answer. A: I've used this small bit of js code in a few projects, it'll do date quickly and easily along with a few others. Link
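To make the onblur/onsubmit suggestions above concrete, here is a bare-bones sketch; the field id and the assumed MM/DD/YYYY format are just examples, and you should still repeat the check server-side:

<input type="text" id="eventDate" onblur="return checkDate(this);" />

<script type="text/javascript">
function checkDate(field) {
    // Accept MM/DD/YYYY, then let Date() reject impossible days like 02/30/2008
    var m = field.value.match(/^(\d{1,2})\/(\d{1,2})\/(\d{4})$/);
    var ok = false;
    if (m) {
        var d = new Date(+m[3], +m[1] - 1, +m[2]);
        ok = (d.getMonth() === +m[1] - 1) && (d.getDate() === +m[2]);
    }
    // 'Instant' feedback without a page reload
    field.style.backgroundColor = ok ? '' : '#fdd';
    return ok;
}
</script>

Wiring the same function into the form's onsubmit covers users who submit with the Enter key rather than the mouse.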
{ "language": "en", "url": "https://stackoverflow.com/questions/10599", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Automatically measure all SQL queries In Maybe Normalizing Isn't Normal Jeff Atwood says, "You're automatically measuring all the queries that flow through your software, right?" I'm not but I'd like to. Some features of the application in question: * *ASP.NET *a data access layer which depends on the MS Enterprise Library Data Access Application Block *MS SQL Server A: In addition to Brad's mention of SQL Profiler, if you want to do this in code, then all your database calls need to be funnelled through a common library. You insert the timing code there, and voila, you know how long every query in your system takes. A single point of entry to the database is a fairly standard feature of any ORM or database layer -- or at least it has been in any project I've worked on so far! A: SQL Profiler is the tool I use to monitor traffic flowing to my SQL Server. It allows you to gather detailed data about your SQL Server. SQL Profiler has been distributed with SQL Server since at least SQL Server 2000 (but probably before that also). Highly recommended. A: Take a look at this chapter Jeff Atwood and I wrote about performance optimizations for websites. We cover a lot of stuff, but there's a lot of stuff about database tracing and optimization: Speed Up Your Site: 8 ASP.NET Performance Tips A: The Dropthings project on CodePlex has a class for timing blocks of code. The class is named TimedLog. It implements IDisposable. You wrap the block of code you wish to time in a using statement. A: If you use rails it automatically logs all the SQL queries, and the time they took to execute, in your development log file. I find this very useful because if you do see one that's taking a while, it's one step to just copy and paste it straight off the screen/logfile, and put 'explain' in front of it in mysql. You don't have to go digging through your code and reconstruct what's happening. Needless to say this doesn't happen in production as it'd run you out of disk space in about an hour. A: If you define a factory that creates SqlCommands for you and always call it when you need a new command, you can return a RealProxy to an SqlCommand. This proxy can then measure how long ExecuteReader / ExecuteScalar etc. take using a Stopwatch and log it somewhere. The advantage to using this kind of method over Sql Server Profiler is that you can get full stack traces for each executed piece of SQL.
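A rough sketch of the timing idea described in the last answer - not the Enterprise Library or RealProxy specifics, just the general shape of funnelling every command through one measured call (the logging target is an assumption, and Func<> implies .NET 3.5):

using System;
using System.Data;
using System.Diagnostics;

public static class MeasuredDb
{
    // Every query the data access layer issues goes through here, so each one gets timed in one place
    public static T Execute<T>(IDbCommand command, Func<IDbCommand, T> run)
    {
        Stopwatch watch = Stopwatch.StartNew();
        try
        {
            return run(command);
        }
        finally
        {
            watch.Stop();
            Trace.WriteLine(watch.ElapsedMilliseconds + " ms: " + command.CommandText);
        }
    }
}

// Usage from the data access layer:
//   var reader = MeasuredDb.Execute(cmd, c => c.ExecuteReader());
//   var count  = MeasuredDb.Execute(cmd, c => (int)c.ExecuteScalar());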
{ "language": "en", "url": "https://stackoverflow.com/questions/10604", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Pre-built regular expression patterns or Regex Libraries? Does anyone have a good regex library that they like to use? Most of the regexes that you find online either contain bugs or are so focused on the edge cases that it turns into a competition to validate whatever spec 100%. Of course you can write your own, but when you are billing by the hour it's handy to have a library around. A: Boost, for C++ A: You can search for regular expressions in regexlib. A: Besides being pretty much the best Regex tool on the market (seriously), RegexBuddy is about the only tool I know of that lets you switch amongst different Regex rendering engines. http://www.regexbuddy.com/ See info here: http://en.wikipedia.org/wiki/RegexBuddy RegexBuddy's proprietary regular expression engine allows the software to emulate the rules and limitations of numerous popular regular expression flavors. A: Lately, I do all my text parsing in Perl. If I needed regexes in another language, I'd go with PCRE. The PCRE library is a set of functions that implement regular expression pattern matching using the same syntax and semantics as Perl 5. PCRE has its own native API, as well as a set of wrapper functions that correspond to the POSIX regular expression API. The PCRE library is free, even for building commercial software. PCRE was originally written for the Exim MTA, but is now used by many high-profile open source projects, including Apache, PHP, KDE, Postfix, Analog, and Nmap. PCRE has also found its way into some well known commercial products, like Apple Safari. Some other interesting projects using PCRE include Chicken, Ferite, Onyx, Hypermail, Leafnode, Askemos, and Wenlin. PCRE is mature, and has the support of numerous projects. Apache and Apple both have a vested interest in making it high-quality. I doubt that any other RE library is likely to surpass it in both functionality and quality (or possibly either) anytime soon. A: One nice source that provides commonly requested regular expressions is Perl's Regexp::Common. Currently provides patterns for the following (from the home page): Regexp::Common::balanced Provides regexes for strings with balanced parenthesized delimiters. Regexp::Common::comment Provides regexes for comments of various languages (43 languages currently). Regexp::Common::delimited Provides regexes for delimited strings. Regexp::Common::lingua Provides regexes for palindromes. Regexp::Common::list Provides regexes for lists. Regexp::Common::net Provides regexes for IPv4 addresses and MAC addresses. Regexp::Common::number Provides regexes for numbers (integers and reals). Regexp::Common::profanity Provides regexes for profanity. Regexp::Common::whitespace Provides regexes for leading and trailing whitespace. Regexp::Common::zip Provides regexes for zip codes. A: e-texteditor highlights what you're searching for as you type it. This is incredibly useful, as you can paste your 'sample text' into a file, and just type your regex into the search field, and see what it's matching right in front of you. None of these 'visual regex builder' things are substitutes for actually LEARNING regular expressions.
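As a quick illustration of the Regexp::Common answer above (the sample string and chosen patterns are arbitrary):

use strict;
use warnings;
use Regexp::Common;

my $line = "host 192.168.0.12 answered in 3.5 seconds";

# $RE{net}{IPv4} and $RE{num}{real} come straight from the library
print "IPv4 address: $1\n" if $line =~ /($RE{net}{IPv4})/;
print "real number: $1\n"  if $line =~ /($RE{num}{real})/;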
{ "language": "en", "url": "https://stackoverflow.com/questions/10610", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Differences between MySQL and SQL Server I'm an ASP.NET developer who has used Microsoft SQL Server for all my database needs (both at work and for personal projects). I am considering trying out the LAMP stack for some of my personal projects. What are some of the main differences between MySQL and SQL Server? Is using stored procedures a common practice in MySQL? Any advice or resources you'd recommend to help me with the switch? To those who have experience with both, are there any missing features from MySQL? A: Everything in MySQL seems to be done closer to the metal than in MSSQL, and the documentation treats it that way. Especially for optimization, you'll need to understand how indexes, system configuration, and the optimizer interact under various circumstances. The "optimizer" is more of a parser. In MSSQL your query plan is often a surprise (usually good, sometimes not). In MySQL, it pretty much does what you asked it to do, the way you expected it to. Which means you yourself need to have a deep understanding of the various ways it might be done. Not built around a good TRANSACTION model (default MyISAM engine). File-system setup is your problem. All the database configuration is your problem - especially various cache sizes. Sometimes it seems best to think of it as an ad-hoc, glorified isam. Codd and Date don't carry much weight here. They would say it with no embarrassment. A: Frankly, I can't find a single reason to use MySQL rather than MSSQL. The issue before used to be cost but SQL Server 2005 Express is free and there are lots of web hosting companies which offer full hosting with sql server for less than $5.00 a month. MSSQL is easier to use and has many features which do not exist in MySQL. A: I think one of the major things to watch out for is that versions prior to MySQL 5.0 did not have views, triggers, and stored procedures. More of this is explained in the MySQL 5.0 Download page. A: Both are DBMS products. SQL Server is a commercial application while MySQL is an open-source application. Both products include similar features; however, SQL Server is aimed at enterprise solutions, while MySQL might suit a smaller implementation. If you need features like recovery, replication and granular security, you need SQL Server. MySQL takes up less space on disk, and uses less memory and CPU, than SQL Server does. A: Lots of comments here sound more like religious arguments than real life statements. I've worked for years with both MySQL and MSSQL and both are good products. I would choose MySQL mainly based on the environment that you are working on. Most open source projects use MySQL, so if you go in that direction MySQL is your choice. If you develop something with .Net I would choose MSSQL, not because it's much better, but just because that is what most people use. I'm actually currently on a Project that uses ASP.NET with MySQL and C#. It works perfectly fine. A: @abdu The main thing I've found that MySQL has over MSSQL is timezone support - the ability to nicely change between timezones, respecting daylight savings is fantastic.
Compare this: mysql> SELECT CONVERT_TZ('2008-04-01 12:00:00', 'UTC', 'America/Los_Angeles'); +-----------------------------------------------------------------+ | CONVERT_TZ('2008-04-01 12:00:00', 'UTC', 'America/Los_Angeles') | +-----------------------------------------------------------------+ | 2008-04-01 05:00:00 | +-----------------------------------------------------------------+ to the contortions involved at this answer. As for the 'easier to use' comment, I would say that the point is that they are different, and if you know one, there will be an overhead in learning the other. A: Anyone have any good experience with a "port" of a database from SQL Server to MySQL? This should be fairly painful! I switched versions of MySQL from 4.x to 5.x and various statements wouldn't work anymore as they used to. The query analyzer was "improved" so statements which previously were tuned for performance would not work anymore as expected. The lesson learned from working with a 500GB MySQL database: It's a subtle topic and anything else but trivial! A: @Cebjyre. The IDE whether Enterprise Manager or Management Studio is better than anything I have seen so far for MySQL. I say 'easier to use' because I can do many things in MSSQL where MySQL has no counterparts. In MySQL I have no idea how to tune the queries by simply looking at the query plan or looking at the statistics. The index tuning wizard in MSSQL takes most of the guess work on what indexes are missing or misplaced. One shortcoming of MySQL is there's no max size for a database. The database would just increase in size till it fills up the disk. Imagine if this disk is sharing databases with other users and suddenly all of their queries are failing because their databases can't grow. I have reported this issue to MySQL long time ago. I don't think it's fixed yet. A: I can't believe that no one mentioned that MySQL doesn't support Common Table Expressions (CTE) / "with" statements. It's a pretty annoying difference. A: MySQL is more likely to have database corruption issues, and it doesn't fix them automatically when they happen. I've worked with MSSQL since version 6.5 and don't remember a database corruption issue taking the database offline. The few times I've worked with MySQL in a production environment, a database corruption issue took the entire database offline until we ran the magic "please fix my corrupted index" thing from the commandline. MSSQL's transaction and journaling system, in my experience, handles just about anything - including a power cycle or hardware failure - without database corruption, and if something gets messed up it fixes it automatically. This has been my experience, and I'd be happy to hear that this has been fixed or we were doing something wrong. http://dev.mysql.com/doc/refman/6.0/en/corrupted-myisam-tables.html http://www.google.com/search?q=site%3Abugs.mysql.com+index+corruption A: One thing you have to watch out for is the fairly severe differences in the way SQL Server and MySQL implement the SQL syntax. Here's a nice Comparison of Different SQL Implementations. For example, take a look at the top-n section. In MySQL: SELECT age FROM person ORDER BY age ASC LIMIT 1 OFFSET 2 In SQL Server (T-SQL): SELECT TOP 3 WITH TIES * FROM person ORDER BY age ASC A: Spending some time working with MySQL from the MSSQL to MySQL syntax POV I kept finding myself limited in what I could do. There are bizzare limits on updating a table while refrencing the same table during an update. 
Additionally UPDATE FROM does not work and last time I checked they don't support the Oracle MERGE INTO syntax either. This was a show stopper for me and I stopped thinking I would get anywhere with MySQL after that.
{ "language": "en", "url": "https://stackoverflow.com/questions/10616", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "149" }
Q: Should I switch from nant to msbuild? I currently use nant, ccnet (cruise control), svn, mbunit. I use msbuild to do my sln build just because it was simpler to shell out. Are there any merits to switching my whole build script to MSBuild? I need to be able to run tests, watir style tests, xcopy deploy. Is this easier? Update: Any compelling features that would cause me to shift from nant to msbuild? A: I feel that MSBuild and Nant are fairly comparable. If you are using one of these, I generally wouldn't switch between them unless there was a compelling feature that was missing in the product you had selected. I personally use MSBuild for any new project, but your mileage may vary. Hope that helps! Edit: @ChanChan - @Jon mentions that Nant doesn't build .NET 3.5 applications. This may be enough of a reason to either change, or at least use them in parallel. As I've moved more towards MSBuild, I am probably not the most informed person to highlight any other showstoppers with either technology. Edit: It appears Nant now builds .NET 3.5 Applications. A: I like MSBuild. One reason is that .csproj files are msbuild files, and building in VS is just like building at the command line. Another reason is the good support from TeamCity which is the CI server I've been using. If you start using MSBuild, and you want to do more custom things in your build process, get the MSBuild Community Tasks. They give you a bunch of nice extra tasks. I haven't used NAnt for several years now, and I haven't regretted it. Also, as Ruben mentions, there are the SDC Tasks tasks on CodePlex. For even more fun, there is the MSBuild Extension Pack on CodePlex, which includes a twitter task. A: NAnt has been around longer, and is a considerably more mature product, and also IMO easier to use. There is a lot of community know-how out there to tap into, and it is also cross-platform, should you be interested in building apps that can run under Mono as well as .NET and Silverlight. Out of the box, it does a whole lot more than MSBuild does. Oh yes, and you can call MSBuild from NAnt (OK, from NAntContrib) :-) On the negative side, NAnt and its sister project NAntContrib do seem to have stagnated, with the most recent update being late 2007. The main advantages that I see of MSBuild is that it ships with the .NET Framework, so it's one less product to install; and there is more active development going on (albeit in places to catch up with the older NAnt). Personally, I find its syntax a little more difficult to pick up, but then I'm sure continued exposure to ti would make things easier. Conclusion? If you're working with existing NAnt scripts, stick with them, it's not worth the hassle of porting. If you're starting a new project, and you're feeling adventurous, then give MSBuild a go. A: My advice is just the opposite - Avoid MSBuild like the plague. NANT is far far easier to set up your build to do automatic testing, deploy to multiple production environments, integrate with cruisecontrol for an entry environment, integrate with source control. We've gone through so much pain with TFS/MSBuild (Using TFSDeployer, custom powershell scripts, etc) to get it to do what we were able to do with NANT out of the box. Don't waste your time. A: We also switched from nant to msbuild. If Your build is pretty standard, then You won't have much problems setting it up, but if You have a lot of specific build tasks, You will have to write custom ms build tasks, as there are way less custom tasks for msbuild. 
If you want to display reasonable build results, you will have to mess with custom loggers etc. The whole team build is not as mature as nant is. But the real benefit is integration with TFS source control and reporting services. If you are not using TFS as your source control system, it's not worth it. A: * *Don't switch unless you have a very convincing reason (at least). *NAnt is open source and if it weren't I wouldn't be able to customize our build system, MSBuild is not. *NAnt can easily run MSBuild, I'm not sure about the other way around. *MSBuild scripts are already written for you if you use VS2005 or newer (the project files are MSBuild files.) *If you use NAnt, and you use VS to edit project files, settings and configurations, you'll have to write a converter/sync tool to update your NAnt files from the VS Project files. A: The most compelling reason to use MSBuild (at least in .NET 3.5 and beyond) - the build engine can build concurrently. This means a huge speed up in your builds if you have multiple cores/processors. Previous to 3.5, MSBuild didn't do parallel builds. A: @Brad Leach "I generally wouldn't switch between them unless there was a compelling feature that was missing" - what are the compelling reasons to use msbuild? Are there cons? So far I'm getting a pretty good, "no don't bother" from your answer. A: I think they're relatively comparable both in features and ease of use. Just from being C# based I find msbuild easier to work with than nant, though that's hardly a compelling reason to switch. What exactly is nant not doing for you? Or are you just hoping there's some cool feature you may be missing out on? :) One super-nice thing about C# is that if you have the .net framework, you have everything you need to run msbuild. This is fantastic when you are working on large teams / projects and have people/hardware turnover. Personally I prefer SCons over both of them :) A: The main reason I still use nAnt over msbuild for my automated builds is that I have more granular control on my builds. Due to msbuild using the csproj as its build file, all the source in that project is compiled into one assembly. This causes me to have a lot of projects in my solution for large projects where I am separating logic. Well with nant, I can arrange my build where I can compile what I want into multiple assemblies from one project. I like this route, because it keeps me from having too many project files in my solution. I can have one project with folders splitting out the layers and then use nant to build each layer into its own assembly. However, I do use both nant and msbuild in conjunction for some build tasks, like building WPF applications. It is just a lot easier to compile a WPF application with the msbuild target within nant. To end this, the point of my answer is that I like to use them side by side, but when I use msbuild in this configuration, it is usually for straight compiling, not performing any build automation tasks like copying files to a directory, generating the help documentation, or running my unit tests for example.
I've hacked around this in the past by exec'ing NAnt from MSBuild; I've seen others recommend this practice as well as the reverse, but it just seems kludgey to me, so I try not to do it if I can avoid it. So, when you're using "the full Microsoft stack" (VSTS and TFS), I'd suggest just sticking with MSBuild scripts. A: Nant has more features out of the box, but MSBuild has a much better fundamental structure (item metadata rocks) which makes it much easier to build reusable MSBuild scripts. MSBuild takes a while to understand, but once you do it's very nice. Learning materials: * *Inside the Microsoft Build Engine: Using MSBuild and Team Foundation Build by Sayed Ibrahim Hashimi (Jan, 2009) *Deploying .NET Applications: Learning MSBuild and ClickOnce by Sayed by Y. Hashimi (Sep, 2008) A: I don't see any reason to switch. MsBuild itself locks you into the framework you are using. If you use NAnt, you can use it across many frameworks and shell out to msbuild to actually do the building task for you. I am a fan of NAnt in this respect, because it decouples you from the framework a little bit. I have created a framework that puts conventions into automated builds and I built it on NAnt. It's called UppercuT and it is the insanely easy to use Build Framework. Automated Builds as easy as (1) solution name, (2) source control path, (3) company name for most projects! http://code.google.com/p/uppercut/ Some good explanations here: UppercuT A: MSBuild being integrated with Visual Studio gives programmers less friction to use the build system. It mainly comes down to them only having to go "Build Solution" and it all works, versus having to use Custom Build Steps and other such things, or, worse, forcing developers to build by launching some kind of external script. Now, I mostly tend to prefer MSBuild over NAnt because it's simpler. Sure, NAnt has a lot more features, is more powerful, etc., but it can quickly get out of hand. If you and your build engineers have the discipline to keep the NAnt scripts simple, then it's all good. However, I've seen too many NAnt-based systems go south to a point where nobody understands what it's doing anymore, and there's no real way to debug it besides doing the equivalent of a good ol' printf. The moment you start using some if/else statement or for loop, that's where, IMHO, it starts smelling. On the other hand, MSBuild has a solid foundation based on metadata and a less verbose syntax. Its simplicity (or lack of features... depending on how you see it) forces you to write logic in .NET code via new tasks, instead of writing logic in XML markup. This encourages re-usability and, above all things, lets you actually debug your build system in a real debugger. The only problem with MSBuild is the not-so-occasional bug (especially in the first version) or obscure (although documented) behaviour. And, if that's the kind of thing that really bothers you, being tied to Microsoft. A: I switched from NANT to MSBuild. The project is running in .Net 4.0. My experience in Nant was good. The project kind of died. And when .Net 4.0 came along, it was time to re evaluate the build process. Since Nant was last released MSBuild has come along ways. At this point, MSBuild is the way to go. It's easy to use, has many extensions. I rewrote my Nant scripts in a day and a half. The MSBuild script is 1/3 the size of the Nant scripts. Much of the work in the Nant script was setting up the different environments. In MsBuild/.Net 4.0 it's built-in. 
A: I use MSBuild alongside Nant, because the current version of Nant can't as yet compile .NET 3.5 applications (same was true when .NET 2.0 first came out). A: The only reason I can see for using msbuild is if you would like to use a automated build server like cruise control. If you are not going to switch, then I would leave it alone. A: I use Nant and I love it. I used MSBuild and hated it because of these: * *Microsoft forces you to follow their own build procedure that is so intrinsic to their doings that I at least was not able to make it work (I had to compile NET1.1 so I had to mix Nant and MSbuild). I know you can create your own MSBuild file, but I thought it was complex to understand and maintain. *ItemTypes to do file operations are just too hard to follow. You can have Nant do the exact same things and much easier and direct (I had to create an ItemType list and then pass to the file operations). *In MsBuild you have to create your own task dll, in Nant you can do this or you can embed C# code within your script, so its much easier to advance and just build the whole project. *Nant works with Net1.1, MsBuild doesn't. *To install nant, I can even unzip and locate inside my own repository to run it. To install MsBuild is much harder since it depends on many things from Visual Studio, etc. (maybe I'm wrong here, but that seems to be the truth). Well these are my opinions...
{ "language": "en", "url": "https://stackoverflow.com/questions/10634", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: Why are my PowerShell scripts not running? I wrote a simple batch file as a PowerShell script, and I am getting errors when it runs. It's in a scripts directory in my path. This is the error I get: Cannot be loaded because the execution of scripts is disabled on this system. Please see "get-help about-signing". I looked in the help, but it's less than helpful. A: You need to run Set-ExecutionPolicy: Set-ExecutionPolicy Restricted <-- Will not allow any powershell scripts to run. Only individual commands may be run. Set-ExecutionPolicy AllSigned <-- Will allow signed powershell scripts to run. Set-ExecutionPolicy RemoteSigned <-- Allows unsigned local scripts and signed remote powershell scripts to run. Set-ExecutionPolicy Unrestricted <-- Will allow unsigned powershell scripts to run. Warns before running downloaded scripts. Set-ExecutionPolicy Bypass <-- Nothing is blocked and there are no warnings or prompts. A: Set-ExecutionPolicy -ExecutionPolicy Bypass -Scope Process The above command worked for me even when the following error happens: Access to the registry key 'HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell' is denied. A: Also it's worth knowing that you may need to include .\ in front of the script name. For example: .\scriptname.ps1 A: Use: Set-ExecutionPolicy -ExecutionPolicy Bypass -Scope Process Always use the above command to enable executing PowerShell scripts in the current session. A: I was able to bypass this error by invoking PowerShell like this: powershell -executionpolicy bypass -File .\MYSCRIPT.ps1 That is, I added the -executionpolicy bypass to the way I invoked the script. This worked on Windows 7 Service Pack 1. I am new to PowerShell, so there could be caveats to doing that that I am not aware of. [Edit 2017-06-26] I have continued to use this technique on other systems including Windows 10 and Windows 2012 R2 without issue. Here is what I am using now. This keeps me from accidentally running the script by clicking on it. When I run it in the scheduler I add one argument: "scheduler" and that bypasses the prompt. This also pauses the window at the end so I can see the output of PowerShell. if NOT "%1" == "scheduler" ( @echo looks like you started the script by clicking on it. @echo press space to continue or control C to exit. pause ) C: cd \Scripts powershell -executionpolicy bypass -File .\rundps.ps1 set psexitcode=%errorlevel% if NOT "%1" == "scheduler" ( @echo Powershell finished. Press space to exit. pause ) exit /b %psexitcode% A: It could be PowerShell's default security level, which (IIRC) will only run signed scripts. Try typing this: set-executionpolicy remotesigned That will tell PowerShell to allow local (that is, on a local drive) unsigned scripts to run. Then try executing your script again. A: The command set-executionpolicy unrestricted will allow any script you create to run as the logged in user. Just be sure to set the execution policy back to AllSigned using the set-executionpolicy allsigned command prior to logging out. A: We can bypass execution policy in a nice way (inside command prompt): type file.ps1 | powershell -command - Or inside powershell: gc file.ps1|powershell -c - A: On Windows 10: you can also change the security properties of myfile.ps1 itself - right-click the file, choose Properties, and allow access (unblock) the file there.
As a workaround, you can encode your script as Base64 by running this in PowerShell: [Convert]::ToBase64String([Text.Encoding]::Unicode.GetBytes((Get-Content -Raw .\MYSCRIPT.ps1))) (On PowerShell 2.0, which has no -Raw switch, use (Get-Content .\MYSCRIPT.ps1 | Out-String) instead; without one of these, Get-Content returns an array of lines and the line breaks get lost in the encoding.) Then execute the result like this: powershell.exe -EncodedCommand "put-your-base64-string-here" Caveat: This won't work with scripts that require parameters.
{ "language": "en", "url": "https://stackoverflow.com/questions/10635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "114" }
Q: Any decent C# profilers out there? I need a C# profiler. Although I'm not averse to paying for one, something which is free or at least with a trial version would be ideal since it takes time to raise a purchase order. Any recommendations? A: We use Ants profiler where I work. It gives very detailed information in a simple manner. A: It's interesting that no-one mentions that there's one in the higher-end versions of Visual Studio - I've always found that to be good enough for execution profiling. For memory profiling I use Memory Profiler which has already been mentioned, but isn't what I would generally describe as 'a profiler'. What kind of profiling were you trying to do? A: We use .NET Memory Profiler. Its kinda ugly but very useful for finding dangling references. I originally tried Red Gate's ANTS profiler which is very sexy, but from a memory leak point of view it sucks for the following reasons: 1) Its ridiculously slow. It was taking half an hour to get the application into a state to start recording (takes 20 seconds without red-gate). 2) Red Gate needs to run its own tool on its own tool. It was using 900MB of memory by the time I finished two snapshots! It then crashed :( However the timing component of Red Gate ANTS was impressive. Just don't bother with the memory profiler, unless you are dealing with a trivial (small footprint) application. A: I used Ants profiler on a large c# project a year and a half ago. It really performed very nicely for what it cost, and even outperformed a few of the more expensive competitors. It calculates cost on with almost a line by line resolution. A: I have used AQtime and it has never let me down. I am sure there is a trial version. A: You can try the following: * *nprof (free but kinda old) *ProfileSharp (open source) *.Net Memory Profiler (really good for memory leaks, there's a trial version) Edit: Nprof has been replaced with SlimTune and works with .Net 4.0 applications A: The EQATEC profiler is very good and is completely free. It's easy to setup and use, and doesn't seem to add too much of an overhead to the application. I've just started using it today and have already found a couple of bottlenecks I wouldn't have spotted otherwise. A: I'll second red gate's ANTS profiler. I've used it to track down some really troubling performance issues and it was dead simple to use (low learning curve) and presented nice, detailed data in a way that was easy to understand. The price tag is worth it, but it isn't free ... A: dotTrace from JetBrains is widely used. Patrick Smacchia's awesome NDepend is excellent for providing static analysis. A: Patrick Smacchia's awesome NDepend is excellent for providing static analysis. I would thoroughly recommend NDepend for static analysis, but just be warned that you'll probably need to put aside a day or two to actually analyse the truckload of information that it provides as well as work out what all the stats actually mean in terms of your code. A: I have had good luck with the .NET memory profiler A: EQATEC profiler did the job here. A: The current release of SharpDevelop (3.1.1) has a nice integrated profiler. It's quite fast, and integrates very well into the SharpDevelop IDE and its NUnit runner. Results are displayed in a flexible Tree/List style (use LINQ to create your own selection). Doublecliking the displayed method jumps directly into the source code. A: I maintain a comprehensive list of profilers for .NET on SharpToolbox.com. 
There you'll find the tools suggested here and more, each with a short description of what it offers. A: I don't currently use one myself, but a buddy of mine raves about Ants profiler. I know it's a for-pay product, though I'm not sure how expensive. If you happen to have an MVP on staff you might be able to leverage that to get a license for free. A: AQTime (both perf and memory) or ANTS (v4 performance profiler or v5 beta memory profiler) here. A: I found the .NET Memory Profiler yesterday, and I must say that I'm very impressed by it. I'm going to order my license today. A: Although not very good for profiling memory usage, the profiler included in some versions of Visual Studio does a very good job of profiling execution speed. A: What's your objective? Is it your objective to locate specific statements and get a rough idea of what they are contributing to your total execution time, so you can find ways to do them differently? For that, I swear by this method.
{ "language": "en", "url": "https://stackoverflow.com/questions/10644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "83" }
Q: Creating iCal Files in c# I'm looking for a good method of generating an iCalendar file (*.ics) in c# (asp.net). I've found a couple resources, but one thing that has been lacking is their support for quoted-printable fields - fields that have carriage returns and line feeds. For example, if the description field isn't encoded properly, only the first line will display and possibly corrupting the rest of the information in the *.ics file. I'm looking for existing classes that can generate *.ics files and/or a class that can generate quoted-printable fields. A: I use DDay.Ical, its good stuff. Has the ability to open up an ical file and get its data in a nice object model. It says beta, but it works great for us. Edit Nov 2016 This library has been deprecated, but was picked up and re-released as iCal.NET by another dev. Notes about the release: rianjs.net/2016/07/dday-ical-is-now-ical-net Source on GitHub: github.com/rianjs/ical.net A: I wrote a shim function to handle this. It's mostly compliant--the only hangup is that the first line is 74 characters instead of 75 (the 74 is to handle the space on subsequent lines)... Private Function RFC2445TextField(ByVal LongText As String) As String LongText = LongText.Replace("\", "\\") LongText = LongText.Replace(";", "\;") LongText = LongText.Replace(",", "\,") Dim sBuilder As New StringBuilder Dim charArray() As Char = LongText.ToCharArray For i = 1 To charArray.Length sBuilder.Append(charArray(i - 1)) If i Mod 74 = 0 Then sBuilder.Append(vbCrLf & " ") Next Return sBuilder.ToString End Function I use this for the summary and description on our ICS feed. Just feed the line with the field already prepended (e.g. LongText = "SUMMARY:Event Title"). As long as you set caching decently long, it's not too expensive of an operation. A: iCal (ical 2.0) and quoted-printable don't go together. Quoted-printable is used a lot in vCal (vCal 1.0) to represent non-printable characters, e.g. line-breaks (=0D=0A). The default vCal encoding is 7-bit, so sometimes you need to use quoted-printable to represent non-ASCII characters (you can override the default encoding, but the other vCal-compliant communicating party is not required to understand it.) In iCal, special characters are represented using escapes, e.g. '\n'. The default encoding is UTF-8, all iCal-compliant parties must support it and that makes quoted-printable completely unnecessary in iCal 2.0 (and vCard 3.0, for that matter). You may need to back your customer/stakeholder to clarify the requirements. There seems to be confusion between vCal and iCal. A: I'm missing an example with custom time zones. So here a snippet that show how you can set a time zone in the ics (and send it to the browser in asp.net). 
//set a couple of variables for demo purposes DateTime IcsDateStart = DateTime.Now.AddDays(2); DateTime IcsDateEnd = IcsDateStart.AddMinutes(90); string IcsSummary = "ASP.Net demo snippet"; string IcsLocation = "Amsterdam (Netherlands)"; string IcsDescription = @"This snippes show you how to create a calendar item file (.ics) in ASP.NET.\nMay it be useful for you."; string IcsFileName = "MyCalendarFile"; //create a new stringbuilder instance StringBuilder sb = new StringBuilder(); //begin the calendar item sb.AppendLine("BEGIN:VCALENDAR"); sb.AppendLine("VERSION:2.0"); sb.AppendLine("PRODID:stackoverflow.com"); sb.AppendLine("CALSCALE:GREGORIAN"); sb.AppendLine("METHOD:PUBLISH"); //create a custom time zone if needed, TZID to be used in the event itself sb.AppendLine("BEGIN:VTIMEZONE"); sb.AppendLine("TZID:Europe/Amsterdam"); sb.AppendLine("BEGIN:STANDARD"); sb.AppendLine("TZOFFSETTO:+0100"); sb.AppendLine("TZOFFSETFROM:+0100"); sb.AppendLine("END:STANDARD"); sb.AppendLine("END:VTIMEZONE"); //add the event sb.AppendLine("BEGIN:VEVENT"); //with a time zone specified sb.AppendLine("DTSTART;TZID=Europe/Amsterdam:" + IcsDateStart.ToString("yyyyMMddTHHmm00")); sb.AppendLine("DTEND;TZID=Europe/Amsterdam:" + IcsDateEnd.ToString("yyyyMMddTHHmm00")); //or without a time zone //sb.AppendLine("DTSTART:" + IcsDateStart.ToString("yyyyMMddTHHmm00")); //sb.AppendLine("DTEND:" + IcsDateEnd.ToString("yyyyMMddTHHmm00")); //contents of the calendar item sb.AppendLine("SUMMARY:" + IcsSummary + ""); sb.AppendLine("LOCATION:" + IcsLocation + ""); sb.AppendLine("DESCRIPTION:" + IcsDescription + ""); sb.AppendLine("PRIORITY:3"); sb.AppendLine("END:VEVENT"); //close calendar item sb.AppendLine("END:VCALENDAR"); //create a string from the stringbuilder string CalendarItemAsString = sb.ToString(); //send the ics file to the browser Response.ClearHeaders(); Response.Clear(); Response.Buffer = true; Response.ContentType = "text/calendar"; Response.AddHeader("content-length", CalendarItemAsString.Length.ToString()); Response.AddHeader("content-disposition", "attachment; filename=\"" + IcsFileName + ".ics\""); Response.Write(CalendarItemAsString); Response.Flush(); HttpContext.Current.ApplicationInstance.CompleteRequest(); A: The easiest way I've found of doing this is to markup your HTML using microformats. If you're looking to generate iCalendar files then you could use the hCalendar microformat then include a link such as 'Add to Calendar' that points to: http://feeds.technorati.com/events/[ your page's full URL including the http:// ] The Technorati page then parses your page, extracts the hCalendar info and sends the iCalendar file to the client. A: Check out http://www.codeproject.com/KB/vb/vcalendar.aspx It doesn't handle the quoted-printable fields like you asked, but the rest of the code is there and can be modified. A: According to RFC-2445, the comment and description fields are TEXT. The rules for a test field are: [1] A single line in a TEXT field is not to exceed 75 octets. [2] Wrapping is achieved by inserting a CRLF followed by whitespace. [3] There are several characters that must be encoded including \ (reverse slash) ; (semicolon) , (comma) and newline. Using a \ (reverse slash) as a delimiter gives \ \; \, \n Example: The following is an example of the property with formatted line breaks in the property value: DESCRIPTION:Meeting to provide technical review for "Phoenix" design.\n Happy Face Conference Room. Phoenix design team MUST attend this meeting.\n RSVP to team leader. 
A: iCal can be complicated, so I recommend using a library. DDay is a good free solution. Last I checked it didn't have full support for recurring events, but other than that it looks really nice. Definitely test the calendars with several clients. A: I know it is too late, but it may help others. In my case I wrote the following text file with a .ics extension BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Calendly//EN CALSCALE:GREGORIAN METHOD:PUBLISH BEGIN:VEVENT DTSTAMP:20170509T164109Z UID:your id-11273661 DTSTART:20170509T190000Z DTEND:20170509T191500Z CLASS:PRIVATE DESCRIPTION:Event Name: 15 Minute Meeting\nDate & Time: 03:00pm - 03:15pm ( Eastern Time - US & Canada) on Tuesday\, May 9\, 2017\n\nBest Phone Number To Reach You :: xxxxxxxxx\n\nany "link": https://wwww.yahoo.com\n\n SUMMARY:15 Minute Meeting TRANSP:OPAQUE END:VEVENT END:VCALENDAR It worked for me.
{ "language": "en", "url": "https://stackoverflow.com/questions/10658", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "76" }
Q: Does the OutputCacheFilter in the Microsoft MVC Preview 4 actually save on action invocations? We deployed a live, fresh, swanky site using preview 3 including rigorous stress testing. Page Output caching was our saviour and afforded us the ability to meet our performance contractual requirements. My question is, is there a difference between action OutputCacheFilter and page output caching? Is the action output caching faster than page output caching? A: Internally, the OutputCacheAttribute (aka output cache filter) uses the same internal mechanism as page output caching (aka the @OutputCache directive). Therefore, it's not any faster than page output caching. However, with MVC, you really can't use page output caching via the @OutputCache directive in MVC because we render the view (aka page) after the action runs. So you would gain very little benefit. With the output cache filter, it does the correct thing and does not execute the action code if the result is in the output cache. Hope that helps. :) A: Just be aware that there currently is a bug if you call Html.RenderAction(..) on an Action that is marked to be cached. Instead of the specific action being cached, the entire page gets cached. I reported this on codeplex already and it seems to be a known issue: Calling <% HTML.RenderAction<...>(...); %> to an Action with [OutputCache(..)] causes entire page to cache.
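For reference, applying the output cache filter to an action looks like this in the released versions of ASP.NET MVC (the controller here is invented, and Preview 4 details may differ slightly):

using System;
using System.Web.Mvc;

public class ClockController : Controller
{
    // While a cached copy exists, the action body below is not executed at all
    [OutputCache(Duration = 60, VaryByParam = "none")]
    public ActionResult Now()
    {
        ViewData["now"] = DateTime.Now.ToString("T");
        return View();
    }
}

If the result is already in the output cache the action body is skipped entirely, which is the saving on action invocations discussed above.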
{ "language": "en", "url": "https://stackoverflow.com/questions/10661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Reading Other Process' Memory in OS X? I've been trying to understand how to read the memory of other processes on Mac OS X, but I'm not having much luck. I've seen many examples online using ptrace with PEEKDATA and such, however it doesn't have that option on BSD [man ptrace]. int pid = fork(); if (pid > 0) { // mess around with child-process's memory } How is it possible to read from and write to the memory of another process on Mac OS X? A: I know this thread is 100 years old, but for people coming here from a search engine: xnumem does exactly what you are looking for, manipulate and read inter-process memory. // Create new xnu_proc instance xnu_proc *Process = new xnu_proc(); // Attach to pid (or process name) Process->Attach(getpid()); // Manipulate memory int i = 1337, i2 = 0; i2 = Process->memory().Read<int>((uintptr_t)&i); // Detach from process Process->Detach(); A: If you're looking to be able to share chunks of memory between processes, you should check out shm_open(2) and mmap(2). It's pretty easy to allocate a chunk of memory in one process and pass the path (for shm_open) to another and both can then go crazy together. This is a lot safer than poking around in another process's address space as Chris Hanson mentions. Of course, if you don't have control over both processes, this won't do you much good. (Be aware that the max path length for shm_open appears to be 26 bytes, although this doesn't seem to be documented anywhere.) // Create shared memory block void* sharedMemory = NULL; size_t shmemSize = 123456; const char* shmName = "mySharedMemPath"; int shFD = shm_open(shmName, (O_CREAT | O_EXCL | O_RDWR), 0600); if (shFD >= 0) { if (ftruncate(shFD, shmemSize) == 0) { sharedMemory = mmap(NULL, shmemSize, (PROT_READ | PROT_WRITE), MAP_SHARED, shFD, 0); if (sharedMemory != MAP_FAILED) { // Initialize shared memory if needed // Send 'shmName' & 'shmemSize' to other process(es) } else handle error } else handle error close(shFD); // Note: sharedMemory still valid until munmap() called } else handle error ... Do stuff with shared memory ... // Tear down shared memory if (sharedMemory != NULL) munmap(sharedMemory, shmemSize); if (shFD >= 0) shm_unlink(shmName); // Get the shared memory block from another process void* sharedMemory = NULL; size_t shmemSize = 123456; // Or fetched via some other form of IPC const char* shmName = "mySharedMemPath"; // Or fetched via some other form of IPC int shFD = shm_open(shmName, (O_RDONLY), 0600); // Can be R/W if you want if (shFD >= 0) { sharedMemory = mmap(NULL, shmemSize, PROT_READ, MAP_SHARED, shFD, 0); if (sharedMemory != MAP_FAILED) { // Check shared memory for validity } else handle error close(shFD); // Note: sharedMemory still valid until munmap() called } else handle error ... Do stuff with shared memory ... // Tear down shared memory if (sharedMemory != NULL) munmap(sharedMemory, shmemSize); // Only the creator should shm_unlink() A: You want to do Inter-Process-Communication with the shared memory method. For a summary of other common methods, see here It didn't take me long to find what you need in this book which contains all the APIs which are common to all UNIXes today (many more than I thought). You should buy it in the future. This book is a set of (several hundred) printed man pages which are rarely installed on modern machines. Each man page details a C function. It didn't take me long to find shmat(), shmctl(), shmdt() and shmget() in it. I didn't search extensively, maybe there's more.
It looked a bit outdated, but: YES, the base user-space API of modern UNIX OSes goes back to the old 80's. Update: most functions described in the book are part of the POSIX C headers, you don't need to install anything. There are a few exceptions, like with "curses", the original library. A: I have definitely found a short implementation of what you need (only one source file (main.c)). It is specially designed for XNU. It is in the top ten results of a Google search with the following keywords « dump process memory os x » The source code is here but from a strict virtual address space point of view, you should be more interested in this question: OS X: Generate core dump without bringing down the process? (also look at this) When you look at the gcore source code, it is quite complex to do this since you need to deal with threads and their state... On most Linux distributions, the gcore program is now part of the GDB package. I think the OSX version is installed with xcode/the development tools. UPDATE: wxHexEditor is an editor which can edit devices. It can also edit process memory the same way it does for regular files. It works on all UNIX machines. A: Use task_for_pid() or other methods to obtain the target process’s task port. Thereafter, you can directly manipulate the process’s address space using vm_read(), vm_write(), and others. A: Matasano Chargen had a good post a while back on porting some debugging code to OS X, which included learning how to read and write memory in another process (among other things). It has to work, otherwise GDB wouldn't: It turns out Apple, in their infinite wisdom, had gutted ptrace(). The OS X man page lists the following request codes: * *PT_ATTACH — to pick a process to debug *PT_DENY_ATTACH — so processes can stop themselves from being debugged [...] No mention of reading or writing memory or registers. Which would have been discouraging if the man page had not also mentioned PT_GETREGS, PT_SETREGS, PT_GETFPREGS, and PT_SETFPREGS in the error codes section. So, I checked ptrace.h. There I found: * *PT_READ_I — to read instruction words *PT_READ_D — to read data words *PT_READ_U — to read U area data if you’re old enough to remember what the U area is [...] There’s one problem solved. I can read and write memory for breakpoints. But I still can’t get access to registers, and I need to be able to mess with EIP. A: Manipulating a process's memory behind its back is a Bad Thing and is fraught with peril. That's why Mac OS X (like any Unix system) has protected memory, and keeps processes isolated from one another. Of course it can be done: There are facilities for shared memory between processes that explicitly cooperate. There are also ways to manipulate other processes' address spaces as long as the process doing so has the explicit right to do so (as granted by the security framework). But that's there for people who are writing debugging tools to use. It's not something that should be a normal — or even rare — occurrence for the vast majority of development on Mac OS X. A: In general, I would recommend that you use regular open() to open a temporary file. Once it's open in both processes, you can unlink() it from the filesystem and you'll be set up much like you would be if you'd used shm_open. The procedure is extremely similar to the one specified by Scott Marcy for shm_open. The disadvantage to this approach is that if the process that will be doing the unlink() crashes, you end up with an unused file and no process has the responsibility of cleaning it up.
This disadvantage is shared with shm_open, because if nothing shm_unlinks a given name, the name remains in the shared memory space, available to be shm_opened by future processes.
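For what it's worth, here is a minimal sketch of the open()/unlink()/mmap() sequence described above. The path, region size and error handling are illustrative only, and in real code every cooperating process must open the file before it is unlinked (or be handed the descriptor some other way):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Create and size the temporary backing file. */
    int fd = open("/tmp/example-shared-region", O_RDWR | O_CREAT | O_EXCL, 0600);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, 4096) < 0) { perror("ftruncate"); return 1; }

    /* Once every cooperating process has the file open, the name can be
       removed from the filesystem.  The mapping below stays valid until
       the last unmap/close. */
    unlink("/tmp/example-shared-region");

    void *region = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy((char *)region, "hello from the shared region");
    printf("%s\n", (char *)region);

    munmap(region, 4096);
    close(fd);
    return 0;
}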
{ "language": "en", "url": "https://stackoverflow.com/questions/10668", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: Asynchronous Remoting calls We have a remoting singleton server running in a separate windows service (let's call her RemotingService). The clients of the RemotingService are ASP.NET instances (many many). Currently, the clients make remoting calls to RemotingService and block while the call is serviced. However, the remoting service is getting complicated enough (with more RPC calls and complex algorithms) that the ASP.NET worker threads are blocked for a significantly long time (4-5 seconds). According to this MSDN article, doing this will not scale well because an ASP.NET worker thread is blocked for each remoting RPC. It advises switching to async handlers to free up ASP.NET worker threads. The purpose of an asynchronous handler is to free up an ASP.NET thread pool thread to service additional requests while the handler is processing the original request. This seems fine, except the remoting call still takes up a thread from the thread pool. Is this the same thread pool as the ASP.NET worker threads? How should I go about turning my remoting singleton server into an async system such that I free up my ASP.NET worker threads? I've probably missed out some important information, please let me know if there is anything else you need to know to answer the question. A: The idea behind using the ThreadPool is that through it you can control the number of concurrent threads, and if there are too many, the thread pool automatically queues the newer work until threads free up. The ASP.NET worker thread (AFAIK) doesn't come from the ThreadPool and shouldn't be affected by your call to the remoting service (unless this is a very slow processor, and your remoting function is very CPU intensive - in which case, everything on your computer will be affected). You could always host the remoting service on a different physical server. In that case, your ASP.NET worker thread will be totally independent of your remoting call (if the remoting call is made on a separate thread, that is).
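For what it's worth, here is a rough sketch of the ASP.NET 2.0 async page pattern the MSDN article is talking about. The proxy type, method and control names are invented for illustration, and the caveat raised in the question still applies: a plain delegate BeginInvoke borrows a ThreadPool thread for the duration of the remoting call, so this only buys you something if that pool is not your bottleneck or the call is dispatched elsewhere.

// In the .aspx page directive: <%@ Page Async="true" ... %>
public partial class OrderStatusPage : System.Web.UI.Page
{
    // Hypothetical delegate matching the blocking remoting call.
    private delegate string GetStatusDelegate(int orderId);
    private GetStatusDelegate _getStatus;

    protected void Page_Load(object sender, EventArgs e)
    {
        _getStatus = new GetStatusDelegate(RemotingServiceProxy.GetStatus); // assumed proxy
        RegisterAsyncTask(new PageAsyncTask(BeginGetStatus, EndGetStatus, GetStatusTimeout, null));
    }

    private IAsyncResult BeginGetStatus(object sender, EventArgs e, AsyncCallback cb, object state)
    {
        // The ASP.NET worker thread is handed back to the pool here...
        return _getStatus.BeginInvoke(42, cb, state);
    }

    private void EndGetStatus(IAsyncResult ar)
    {
        // ...and another one picks the request up again here.
        // statusLabel is an assumed control on the page.
        statusLabel.Text = _getStatus.EndInvoke(ar);
    }

    private void GetStatusTimeout(IAsyncResult ar)
    {
        statusLabel.Text = "The remoting call timed out.";
    }
}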
{ "language": "en", "url": "https://stackoverflow.com/questions/10670", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Is there a difference between the on_exit() and atexit() functions? Is there any difference between int on_exit(void (*function)(int , void *), void *arg); and int atexit(void (*function)(void)); other than the fact that the function used by on_exit gets the exit status? That is, if I don't care about the exit status, is there any reason to use one or the other? Edit: Many of the answers warned against on_exit because it's non-standard. If I'm developing an app that is for internal corporate use and guaranteed to run on specific configurations, should I worry about this? A: The difference is that atexit is C and on_exit is some weird extension available on GNU and who-knows-what-other Unixy systems (but NOT part of POSIX). A: You should use atexit() if possible. on_exit() is nonstandard and less common. For example, it's not available on OS X. Kernel.org - on_exit(): This function comes from SunOS 4, but is also present in libc4, libc5 and glibc. It no longer occurs in Solaris (SunOS 5). Avoid this function, and use the standard atexit(3) instead. A: According to this link I found, it seems there are a few differences. on_exit will let you pass in an argument that is passed in to the on_exit function when it is called... which might let you set up some pointers to do some cleanup work on when it is time to exit. Furthermore, it appears that on_exit was a SunOS specific function that may not be compatible on all platforms... so you may want to stick with atexit, despite it being more restrictive. A: @Nathan, I can't find any function that will return the exit code for the current running process. I expect that it isn't set yet at the point when atexit() is called, anyway. By this I mean that the runtime knows what it is, but probably hasn't reported it to the OS. This is pretty much just conjecture, though. It looks like you will either need to use on_exit() or structure your program so that the exit code doesn't matter. It would not be unreasonable to have the last statement in your main function flip a global exited_cleanly variable to true. In the function you register with atexit(), you could check this variable to determine how the program exited. This will only give you two states, but I expect that would be sufficient for most needs. You could also expand this type of scheme to support more exit states if necessary. A: @Nathan First, see if there is another API call to determine exit status... a quick glance and I don't see one, but I am not well versed in the standard C API. An easy alternative is to have a global variable that stores the exit status... the default being an unknown error cause (for if the program terminates abnormally). Then, when you call exit, you can store the exit status in the global and retrieve it from any atexit functions. This requires storing the exit status diligently before every exit call, and clearly is not ideal, but if there is no API and you don't want to risk on_exit not being on the platform... it might be the only option.
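If it helps, here is a minimal sketch of the work-around described above: keep a global that records the intended exit status before exit() is called, since atexit() handlers receive no arguments. This assumes you control every exit path in the program.

#include <stdio.h>
#include <stdlib.h>

/* Default to "abnormal" so the handler can tell whether exit() was reached cleanly. */
static int g_exit_status = EXIT_FAILURE;

static void report_exit(void)
{
    /* atexit() handlers get no arguments, so read the global instead. */
    fprintf(stderr, "exiting with status %d\n", g_exit_status);
}

int main(void)
{
    if (atexit(report_exit) != 0) {
        fprintf(stderr, "atexit registration failed\n");
        return EXIT_FAILURE;
    }

    /* ... do the real work here ... */

    g_exit_status = EXIT_SUCCESS;   /* record the status before exiting */
    exit(g_exit_status);
}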
{ "language": "en", "url": "https://stackoverflow.com/questions/10680", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: What is a "reasonable" length of time to keep a SQL cursor open? In your applications, what's a "long time" to keep a transaction open before committing or rolling back? Minutes? Seconds? Hours? and on which database? A: I'm probably going to get flamed for this, but you really should try and avoid using cursors as they incur a serious performance hit. If you must use it, you should keep it open the absolute minimum amount of time possible so that you free up the resources being blocked by the cursor ASAP. A: transactions: minutes. Cursors: 0seconds maximum, if you use a cursor we fire you. This is not ridiculous when you consider we are in a high availability web environment, that has to run sql server, and we don't even allow stored procs because of inability to accurately version and maintain them. If we were using oracle maybe. A: @lomaxx, @ChanChan: to the best of my knowledge cursors are only a problem on SQL Server and Sybase (T-SQL variants). If your database of choice is Oracle, then cursors are your friend. I've seen a number of cases where the use of cursors has actually improved performance. Cursors are an incredibly useful mechanism and tbh, saying things like "if you use a cursor we fire you" is a little ridiculous. Having said that, you only want to keep a cursor open for the absolute minimum that is required. Specifying a maximum time would be arbitrary and pointless without understanding the problem domain. A: Generally I agree with the other answers: Avoid cursors when possible (in most cases) and close them as fast as possible. However: It all depends on the environment you're working in. * *If it is a production website environment with lots of users, make sure that the cursor goes away before someone gets a timeout. *If you're - for example - writing a "log analyzing stored procedure" (or whatever) on a proprietary machine that does nothing else: feel free to do whatever you want to do. You'll be the only person who has to wait. It's not as if the database server is going to die because you use cursors. You should consider, though, that maybe usage behaviour will change over time and at some point there might be 10 people using that application. So try to find another way ;) A: @ninesided: performance issues aside, it's also about using the right tool for the job. Given the choice to move the cursor out of your query into code, I would think 99 times out of 100 it would be better to put that looping logic into some sort of managed code. Doing so allows you to get the advantages of using a debugger, compile time error checking, type saftey etc. My answer to the question is still the same, if you're using a cursor, close it ASAP, in oracle I'd also be trying to use explicit cursors.
{ "language": "en", "url": "https://stackoverflow.com/questions/10727", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Best way to store a database password in a startup script / config file? So our web server apps need to connect to the database, and some other apps have startup scripts that execute at boot time. What's the best way to store the name/password for these applications, in terms of * *security, e.g. perhaps we don't want sysadmins to know the database password *maintainability, e.g. making the configuration easy to change when the password changes, etc. both windows and linux solutions appreciated! A: PostgreSQL offers a nice solution for this kind of situation in their documentation. Essentially, you use ssh to bridge a port on your machine to the PostgreSQL server port on the remote machine. This has three stages of authentication: * *Restrict access to the local port, such as only letting a particular user connect to it. *Set up password-less connection to the PostgreSQL host with ssh as a particular user. *Allow the user ssh connects as to have local access to PostgreSQL without a password. This reduces the security to whether your user accounts are secured and your ssh configuration is sound, and you have no need of a password stored anywhere. Edit: I should add that this will work with any database that listens to a TCP/IP port. It just happens to be described in PostgreSQL. And you will want iptables (or the equivalent off Linux) to do the port restrictions. See this. A: I agree with lomaxx: if somebody is already on the server or has wide ranging access to it (like a sysadmin), the game is pretty much over. So the idea would be to use a server you trust that it is secure to the degree you want it to be. Specifically: * *You need to trust the sysadmins *You need to trust anybody else who is running code on the same server (this is why shared hosting is a big no-no for me) Beyond that, environment variables seem to be a popular choice for storing these types of credentials, because this means that access to the source only (for example by compromising the dev box) doesn't reveal it directly and also it can be nicely localized for each server (dev, test, etc). A: The best way to secure your password is to stop using one. Use a trusted connection: How To: Connect to SQL Server Using Windows Authentication in ASP.NET 2.0. Then you have nothing to hide - publish your web.config and source to the world, they still can't hit your database. If that won't work for you, use the built in configuration encryption system in ASP.NET. A: plain text? If they're on your server, I would hope the server is secure enough not to allow unauthorised access. If people can access your config files on the server, something has gone wrong much earlier. A: clarification: in terms of security, maintainability (e.g. if the login needs to change, can I find it later, etc) @lomax: perhaps I might not want everyone with access to the physical server (e.g. sysadmins) to see the password. Thanks! A: In most cases, I believe it is sufficient to obfuscate the password in a plain text file (eg. with base64). You cannot completely protect a stored password against a determined sysadmin with root access, so there's not really any need to try. Simple obfuscation, however, protects against accidentally revealing the password to a shoulder surfer. 
A more complex alternative is to set up a dedicated secure password server that either: * *provides a password decryption service *actually stores the passwords for use by other less secure servers Depending on the network protocols used, this may not protect against a rogue sysadmin with tcpdump. And it probably won't protect against a determined sysadmin with a debugger, either. At that point, it might be time to look at something like Kerberos tickets. A: You can bake a symmetric encryption key into your binary, and have that binary read an encrypted username/password from a file on disk when it starts up. However, this is not really much more than obfuscation, since your code is likely to be stored in some source repository somewhere. I would suggest that you would be better served to control access to your servers both physically and over the network using a firewall and a private network bubble, and store the passwords in the clear (or base-64 encoded) on disk with permissions locked down to the run user for your web app. You can also lock down the database server to only accept connections from your web app machines by IP. Ultimately, your problem is that the key (your DB username/password pair) needs to be available for programmatic, unattended use by your web apps.
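Picking up the two ASP.NET-specific suggestions above (Windows authentication so there is no password at all, and the built-in configuration encryption), they look roughly like this in practice; the application path and connection details are placeholders:

<!-- web.config: trusted connection, no password stored anywhere -->
<connectionStrings>
  <add name="MainDb"
       connectionString="Server=dbserver;Database=AppDb;Integrated Security=SSPI;" />
</connectionStrings>

rem Encrypt the connectionStrings section in place (run on the web server)
aspnet_regiis.exe -pe "connectionStrings" -app "/MyApp"

rem Decrypt it again if it ever needs editing
aspnet_regiis.exe -pd "connectionStrings" -app "/MyApp"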
{ "language": "en", "url": "https://stackoverflow.com/questions/10731", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: What is the difference between integration and unit tests? I know the so-called textbook definition of unit tests and integration tests. What I am curious about is when it is time to write unit tests... I will write them to cover as many sets of classes as possible. For example, if I have a Word class, I will write some unit tests for the Word class. Then, I begin writing my Sentence class, and when it needs to interact with the Word class, I will often write my unit tests such that they test both Sentence and Word... at least in the places where they interact. Have these tests essentially become integration tests because they now test the integration of these 2 classes, or is it just a unit test that spans 2 classes? In general, because of this uncertain line, I will rarely actually write integration tests... or is my using the finished product to see if all the pieces work properly the actual integration tests, even though they are manual and rarely repeated beyond the scope of each individual feature? Am I misunderstanding integration tests, or is there really just very little difference between integration and unit tests? A: When I write unit tests I limit the scope of the code being tested to the class I am currently writing by mocking dependencies. If I am writing a Sentence class, and Sentence has a dependency on Word, I will use a mock Word. By mocking Word I can focus only on its interface and test the various behaviors of my Sentence class as it interacts with Word's interface. This way I am only testing the behavior and implementation of Sentence and not at the same time testing the implementation of Word. Once I've written the unit tests to ensure Sentence behaves correctly when it interacts with Word based on Word's interface, then I write the integration test to make sure that my assumptions about the interactions were correct. For this I supply the actual objects and write a test that exercises a feature that will end up using both Sentence and Word. A: Unit Testing is a method of testing that verifies the individual units of source code are working properly. Integration Testing is the phase of software testing in which individual software modules are combined and tested as a group. Wikipedia defines a unit as the smallest testable part of an application, which in Java/C# is a method. But in your example of Word and Sentence class I would probably just write the tests for sentence since I would likely find it overkill to use a mock word class in order to test the sentence class. So sentence would be my unit and word is an implementation detail of that unit. A: My 10 bits :D I was always told that Unit Tests is the testing of an individual component - which should be exercised to its fullest. Now, this tends to have many levels, since most components are made of smaller parts. For me, a unit is a functional part of the system. So it has to provide something of value (i.e. not a method for string parsing, but a HtmlSanitizer perhaps). Integration Tests is the next step up, its taking one or more components and making sure they work together as they should.. You are then above the intricacies of worry about how the components work individually, but when you enter html into your HtmlEditControl , it somehow magically knows wether its valid or not.. Its a real movable line though.. 
I'd rather focus more on getting the damn code to work full stop ^_^ A: In unit test you test every part isolated: in integration test you test many modules of your system: and this what happens when you only use unit tests (generally both windows are working, unfortunately not together): Sources: source1 source2 A: I think I would still call a couple of interacting classes a unit test provided that the unit tests for class1 are testing class1's features, and the unit tests for class2 are testing its features, and also that they are not hitting the database. I call a test an integration test when it runs through most of my stack and even hits the database. I really like this question, because TDD discussion sometimes feels a bit too purist to me, and it's good for me to see some concrete examples. A: I do the same - I call them all unit tests, but at some point I have a "unit test" that covers so much I often rename it to "..IntegrationTest" - just a name change only, nothing else changes. I think there is a continuation from "atomic tests" (testing one tiny class, or a method) to unit tests (class level) and integration tests - and then functional test (which are normally covering a lot more stuff from the top down) - there doesn't seem to be a clean cut off. If your test sets up data, and perhaps loads a database/file etc, then perhaps its more of an integration test (integration tests I find use less mocks and more real classes, but that doesn't mean you can't mock out some of the system). A: Integration tests: Database persistence is tested. Unit tests: Database access is mocked. Code methods are tested. A: The key difference, to me, is that integration tests reveal if a feature is working or is broken, since they stress the code in a scenario close to reality. They invoke one or more software methods or features and test if they act as expected. On the opposite, a Unit test testing a single method relies on the (often wrong) assumption that the rest of the software is correctly working, because it explicitly mocks every dependency. Hence, when a unit test for a method implementing some feature is green, it does not mean the feature is working. Say you have a method somewhere like this: public SomeResults DoSomething(someInput) { var someResult = [Do your job with someInput]; Log.TrackTheFactYouDidYourJob(); return someResults; } DoSomething is very important to your customer: it's a feature, the only thing that matters. That's why you usually write a Cucumber specification asserting it: you wish to verify and communicate the feature is working or not. Feature: To be able to do something In order to do something As someone I want the system to do this thing Scenario: A sample one Given this situation When I do something Then what I get is what I was expecting for No doubt: if the test passes, you can assert you are delivering a working feature. This is what you can call Business Value. If you want to write a unit test for DoSomething you should pretend (using some mocks) that the rest of the classes and methods are working (that is: that, all dependencies the method is using are correctly working) and assert your method is working. In practice, you do something like: public SomeResults DoSomething(someInput) { var someResult = [Do your job with someInput]; FakeAlwaysWorkingLog.TrackTheFactYouDidYourJob(); // Using a mock Log return someResults; } You can do this with Dependency Injection, or some Factory Method or any Mock Framework or just extending the class under test. 
Suppose there's a bug in Log.DoSomething(). Fortunately, the Gherkin spec will find it and your end-to-end tests will fail. The feature won't work, because Log is broken, not because [Do your job with someInput] is not doing its job. And, by the way, [Do your job with someInput] is the sole responsibility of that method. Also, suppose Log is used in 100 other features, in 100 other methods of 100 other classes. Yep, 100 features will fail. But, fortunately, 100 end-to-end tests are failing as well and revealing the problem. And, yes: they are telling the truth. It's very useful information: I know I have a broken product. It's also very confusing information: it tells me nothing about where the problem is. It communicates the symptom, not the root cause. Yet, DoSomething's unit test is green, because it's using a fake Log, built to never break. And, yes: it's clearly lying. It's communicating that a broken feature is working. How can it be useful? (If DoSomething()'s unit test fails, be sure: [Do your job with someInput] has some bugs.) Now suppose a system with a broken class: a single bug will break several features, and several integration tests will fail. On the other hand, the same bug will break just one unit test. Now, compare the two scenarios: * *All your features using the broken Log are red *All your unit tests are green; only the unit test for Log is red Actually, unit tests for all modules using a broken feature are green because, by using mocks, they removed dependencies. In other words, they run in an ideal, completely fictional world. And this is the only way to isolate bugs and track them down. Unit testing means mocking. If you aren't mocking, you aren't unit testing. The difference: integration tests tell you what's not working. But they are of no use in guessing where the problem could be. Unit tests are the sole tests that tell you where exactly the bug is. To get this information, they must run the method in a mocked environment, where all other dependencies are supposed to correctly work. That's why I think that your sentence "Or is it just a unit test that spans 2 classes" is somewhat misplaced. A unit test should never span 2 classes. This reply is basically a summary of what I wrote here: Unit tests lie, that's why I love them. A: Unit testing is testing against a unit of work or a block of code if you like. Usually performed by a single developer. Integration testing refers to the test that is performed, preferably on an integration server, when a developer commits their code to a source control repository. Integration testing might be performed by utilities such as Cruise Control. So you do your unit testing to validate that the unit of work you have built is working and then the integration test validates that whatever you have added to the repository didn't break something else. A: Simple Explanation with Analogies This answer will focus purely on examples. Integration Tests Integration tests check if everything is working together. Unit Tests They tell you whether one specific thing is working. Examples Consider a car: * *Integration test for a car: e.g. does the car drive to Pondicherry and back? If so, the car as a whole is working. If it fails, you won't really know where the problem is: the radiator, transmission, engine, or carburettor? *Unit test for a car: Is the engine working? This tests just the engine; nothing else.
If this test fails, then you can be confident that there is a bug in the engine....This ties in closely with the concept of "fakes". You might need some keys in order to start the engine - except, you don't want to go to the hassle of actually an ignition (with a lock)...instead, you would hotwire the car to start it....in other words you would use a "fake" key. Similarly, in unit testing, you would use "fakes" in order to make the engine work a particular way. And then you could simply test: "is it running". A: Unit tests use mocks The thing you're talking about are integration tests that actually test the whole integration of your system. But when you do unit testing you should actually test each unit separately. Everything else should be mocked. So in your case of Sentence class, if it uses Word class, then your Word class should be mocked. This way, you'll only test your Sentence class functionality. A: I call unit tests those tests that white box test a class. Any dependencies that class requires is replaced with fake ones (mocks). Integration tests are those tests where multiple classes and their interactions are tested at the same time. Only some dependencies in these cases are faked/mocked. I wouldn't call Controller's integration tests unless one of their dependencies is a real one (i.e. not faked) (e.g. IFormsAuthentication). Separating the two types of tests is useful for testing the system at different levels. Also, integration tests tend to be long lived, and unit tests are supposed to be quick. The execution speed distinction means they're executed differently. In our dev processes, unit tests are run at check-in (which is fine cos they're super quick), and integration tests are run once/twice per day. I try and run integration tests as often as possible, but usually hitting the database/writing to files/making rpc's/etc slows. That raises another important point, unit tests should avoid hitting IO (e.g. disk, network, db). Otherwise they slow down alot. It takes a bit of effort to design these IO dependencies out - i can't admit I've been faithful to the "unit tests must be fast" rule, but if you are, the benefits on a much larger system become apparent very quickly. A: I think when you start thinking about integration tests, you are speaking more of a cross between physical layers rather than logical layers. For example, if your tests concern itself with generating content, it's a unit test: if your test concerns itself with just writing to disk, it's still a unit test, but once you test for both I/O AND the content of the file, then you have yourself an integration test. When you test the output of a function within a service, it's a unit-test, but once you make a service call and see if the function result is the same, then that's an integration test. Technically you cannot unit test just-one-class anyway. What if your class is composed with several other classes? Does that automatically make it an integration test? I don't think so. A: using Single responsibility design, its black and white. More than 1 responsibility, its an integration test. By the duck test (looks, quacks, waddles, its a duck), its just a unit test with more than 1 newed object in it. When you get into mvc and testing it, controller tests are always integration, because the controller contains both a model unit and a view unit. Testing logic in that model, I would call a unit test. A: In my opinion the answer is "Why does it matter?" 
Is it because unit tests are something you do and integration tests are something you don't? Or vice versa? Of course not, you should try to do both. Is it because unit tests need to be Fast, Isolated, Repeatable, Self-Validating and Timely and integration tests should not? Of course not, all tests should be these. It is because you use mocks in unit tests but you don't use them in integration tests? Of course not. This would imply that if I have a useful integration test I am not allowed to add a mock for some part, fear I would have to rename my test to "unit test" or hand it over to another programmer to work on. Is it because unit tests test one unit and integration tests test a number of units? Of course not. Of what practical importance is that? The theoretical discussion on the scope of tests breaks down in practice anyway because the term "unit" is entirely context dependent. At the class level, a unit might be a method. At an assembly level, a unit might be a class, and at the service level, a unit might be a component. And even classes use other classes, so which is the unit? It is of no importance. Testing is important, F.I.R.S.T is important, splitting hairs about definitions is a waste of time which only confuses newcomers to testing. A: The nature of your tests A unit test of module X is a test that expects (and checks for) problems only in module X. An integration test of many modules is a test that expects problems that arise from the cooperation between the modules so that these problems would be difficult to find using unit tests alone. Think of the nature of your tests in the following terms: * *Risk reduction: That's what tests are for. Only a combination of unit tests and integration tests can give you full risk reduction, because on the one hand unit tests can inherently not test the proper interaction between modules and on the other hand integration tests can exercise the functionality of a non-trivial module only to a small degree. *Test writing effort: Integration tests can save effort because you may then not need to write stubs/fakes/mocks. But unit tests can save effort, too, when implementing (and maintaining!) those stubs/fakes/mocks happens to be easier than configuring the test setup without them. *Test execution delay: Integration tests involving heavyweight operations (such as access to external systems like DBs or remote servers) tend to be slow(er). This means unit tests can be executed far more frequently, which reduces debugging effort if anything fails, because you have a better idea what you have changed in the meantime. This becomes particularly important if you use test-driven development (TDD). *Debugging effort: If an integration test fails, but none of the unit tests does, this can be very inconvenient, because there is so much code involved that may contain the problem. This is not a big problem if you have previously changed only a few lines -- but as integration tests run slowly, you perhaps did not run them in such short intervals... Remember that an integration test may still stub/fake/mock away some of its dependencies. This provides plenty of middle ground between unit tests and system tests (the most comprehensive integration tests, testing all of the system). Pragmatic approach to using both So a pragmatic approach would be: Flexibly rely on integration tests as much as you sensibly can and use unit tests where this would be too risky or inconvenient. 
This manner of thinking may be more useful than some dogmatic discrimination of unit tests and integration tests. A: A little bit academic this question, isn't it? ;-) My point of view: For me an integration test is the test of the whole part, not if two parts out of ten are going together. Our integration test shows, if the master build (containing 40 projects) will succeed. For the projects we have tons of unit tests. The most important thing concerning unit tests for me is, that one unit test must not be dependent on another unit test. So for me both test you describe above are unit tests, if they are independent. For integration tests this need not to be important. A: Have these tests essentially become integration tests because they now test the integration of these 2 classes? Or is it just a unit test that spans 2 classes? I think Yes and Yes. Your unit test that spans 2 classes became an integration test. You could avoid it by testing Sentence class with mock implementation - MockWord class, which is important when those parts of system are large enough to be implemented by different developers. In that case Word is unit tested alone, Sentence is unit tested with help of MockWord, and then Sentence is integration-tested with Word. Exaple of real difference can be following 1) Array of 1,000,000 elements is easily unit tested and works fine. 2) BubbleSort is easily unit tested on mock array of 10 elements and also works fine 3) Integration testing shows that something is not so fine. If these parts are developed by single person, most likely problem will be found while unit testing BubbleSoft just because developer already has real array and he does not need mock implementation. A: In addition, it's important to remember that both unit tests and integration tests can be automated and written using, for example, JUnit. In JUnit integration tests, one can use the org.junit.Assume class to test the availability of environment elements (e.g., database connection) or other conditions. A: I get asked this a lot in interviews. Until now I'd ramble on pretentiously about my expertise and pontificate about component and acceptance testing. For years I'd understood only integration and unit tests. I could, but didn't always bother to, write unit tests as a solo developer honing my skills. Unit tests That is a crucial difference. Unit tests are easy to implement and execute, requiring, ideally, no dependencies. That is what mocks are for. It is often easier to not mock everything, particularly where you gain coverage of other functions you wrote. Easier, maybe, but that isn't the idea of unit testing. I'll reiterate, unit tests are meant to be easy to run and small. Their failure provides immediate insight into where a bug has been introduced. Here is the hierarchy of tests, from cheap and plentiful at the bottom to slow, expensive, and few, at the top: Several more layers can be conceptualised, but were omitted for clarity. Integration tests With integration tests you would consider bringing in serious external dependencies, such as VMs, virtual networks and appliances. Possibly you could use actual modems, routers, and firewalls where the expense was justified. These wouldn't be run locally but on a build server. A mixture of local Jenkins and cloud based CI providers fulfil this need. Other test terminology That is my understanding that has served me for several years in industry. 
We could talk about component tests, and get a definition, but if the definition isn't in common circulation then it loses value. Acceptance tests were what we would call business unit or customer requirements. These would lead the direction of everything and sit at the top of the pyramid (picture a dollar sign). E2E, or end-to-end testing, was used synonymously with integration tests, but I noticed online it is placed above. I guess it could have more relevance to acceptance tests than integration tests, which would tend to be more detailed with less interest from stakeholders (though immense interest internally in the department). A: If you're a TDD purist, you write the tests before you write production code. Of course, the tests won't compile, so you first make the tests compile, then make the tests pass. You can do this with unit tests, but you can't with integration or acceptance tests. If you tried with an integration test, nothing would ever compile until you've finished!
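To tie the thread back to the original Word/Sentence example, here is a minimal sketch of the distinction, using a hand-written fake rather than any particular mocking framework; NUnit-style attributes are assumed and all names are illustrative:

using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

public interface IWord
{
    string Text { get; }
}

public class Word : IWord
{
    public Word(string text) { Text = text; }
    public string Text { get; private set; }
}

// Trivial hand-written fake used only by the unit test.
public class FakeWord : IWord
{
    public FakeWord(string text) { Text = text; }
    public string Text { get; private set; }
}

public class Sentence
{
    private readonly List<IWord> words = new List<IWord>();
    public void Add(IWord word) { words.Add(word); }
    public string Render() { return string.Join(" ", words.Select(w => w.Text).ToArray()); }
}

[TestFixture]
public class SentenceTests
{
    // Unit test: Sentence is exercised against fakes, so a bug in Word cannot fail it.
    [Test]
    public void Render_JoinsWordsWithSpaces_Unit()
    {
        var sentence = new Sentence();
        sentence.Add(new FakeWord("Hello"));
        sentence.Add(new FakeWord("world"));
        Assert.AreEqual("Hello world", sentence.Render());
    }

    // Integration test: the same behaviour, but exercised with the real Word class.
    [Test]
    public void Render_JoinsRealWords_Integration()
    {
        var sentence = new Sentence();
        sentence.Add(new Word("Hello"));
        sentence.Add(new Word("world"));
        Assert.AreEqual("Hello world", sentence.Render());
    }
}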
{ "language": "en", "url": "https://stackoverflow.com/questions/10752", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "347" }
Q: Convert an asp.net application to IIS7 integrated mode What steps do I need to perform in order to convert an ASP.NET 2.0 application from IIS7 classic to integrated mode? A: Here is a process: Rick Strahl's blog A: Nothing really. ASP.NET 2.0 applications will run just as they have in IIS 6.0. If you want to take advantage of any of the new features then you just need to update your code. But unless you are changing the structure of the response headers or intercepting requests for other applications, you probably will not need to do anything.
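If you do run into differences, the most common one is where custom modules and handlers are registered in web.config: integrated mode reads them from system.webServer rather than system.web. A hedged example, with the module name and type as placeholders:

<!-- Classic mode (and IIS6) registration -->
<system.web>
  <httpModules>
    <add name="MyModule" type="MyApp.MyModule, MyApp" />
  </httpModules>
</system.web>

<!-- Integrated mode registration -->
<system.webServer>
  <validation validateIntegratedModeConfiguration="false" />
  <modules>
    <add name="MyModule" type="MyApp.MyModule, MyApp" />
  </modules>
</system.webServer>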
{ "language": "en", "url": "https://stackoverflow.com/questions/10782", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Catching unhandled exceptions in ASP.NET UserControls I'm dynamically loading user controls adding them to the Controls collection of the web form. I'd like to hide user controls if they cause a unhandled exception while rendering. So, I tried hooking to the Error event of each UserControl but it seems that this event never fires for the UserControls as it does for Page class. Did some googling around and it doesn't seem promising. Any ideas here? A: This is an interesting problem.. I am still pretty fresh when it comes to custom controls etc, but here are my thoughts (feel free to comment/correct people!).. (I am kinda thinking/writing out loud here!) * *If an error occurs during rendering, in some cases, would it not be too late? (since some of the controls HTML may have already been sent to the Writer and output). *Therefore, would it not be best to wrap the user control's Render method, but rather than passing it the reference to the "Live" HtmlTextWriter, you pass your own, trap any Exceptions raised in this little safety "bubble", if all goes well, you then pass your resultant HTML to the actual HtmlTextWriter? *This logic could probably be slung to a generic wrapper class which you would use to dynamically load/render the controls at run time.. *If any errors do occur, you have all the information you need at your disposal! (i.e control references etc). Just my thoughts, flame away! :D ;) A: mmilic, following on from your response to my previous idea.. No additional logic required! That's the point, your doing nothing to the classes in question, just wrapping them in some instantiation bubble-wrap! :) OK, I was going to just bullet point but I wanted to see this work for myself, so I cobbled together some very rough code but the concept is there and it seems to work. APOLOGIES FOR THE LONG POST The SafeLoader This will basically be the "bubble" I mentioned.. It will get the controls HTML, catching any errors that occur during Rendering. public class SafeLoader { public static string LoadControl(Control ctl) { // In terms of what we could do here, its down // to you, I will just return some basic HTML saying // I screwed up. try { // Get the Controls HTML (which may throw) // And store it in our own writer away from the // actual Live page. StringWriter writer = new StringWriter(); HtmlTextWriter htmlWriter = new HtmlTextWriter(writer); ctl.RenderControl(htmlWriter); return writer.GetStringBuilder().ToString(); } catch (Exception) { string ctlType = ctl.GetType().Name; return "<span style=\"color: red; font-weight:bold; font-size: smaller;\">" + "Rob + Controls = FAIL (" + ctlType + " rendering failed) Sad face :(</span>"; } } } And Some Controls.. Ok I just mocked together two controls here, one will throw the other will render junk. Point here, I don't give a crap. These will be replaced with your custom controls.. BadControl public class BadControl : WebControl { protected override void Render(HtmlTextWriter writer) { throw new ApplicationException("Rob can't program controls"); } } GoodControl public class GoodControl : WebControl { protected override void Render(HtmlTextWriter writer) { writer.Write("<b>Holy crap this control works</b>"); } } The Page OK, so lets look at the "test" page.. Here I simply instantiate the controls, grab their html and output it, I will follow with thoughts on designer support etc.. 
Page Code-Behind protected void Page_Load(object sender, EventArgs e) { // Create some controls (BadControl will throw) string badHtml = SafeLoader.LoadControl(new BadControl()); Response.Write(badHtml); string goodHtml = SafeLoader.LoadControl(new GoodControl()); Response.Write(goodHtml); } Thoughts OK, I know what you are thinking, "these controls are instantiated programmatically, what about designer support? I spent freaking hours getting these controls nice for the designer, now you're messing with my mojo". OK, so I haven't really tested this yet (probably will do in a min!) but the idea here is to override the CreateChildControls method for the page, and take the instance of each control added on the form and run it through the SafeLoader. If the code passes, you can add it to the Controls collection as normal; if not, then you can create erroneous literals or something, up to you my friend. Finally.. Again, sorry for the long post, but I wanted to get the code here so we can discuss this :) I hope this helps demonstrate my idea :) Update Tested by chucking a control in on the designer and overriding the CreateChildControls method with this, works fine, may need some clean up to make things better looking, but I'll leave that to you ;) protected override void CreateChildControls() { // Pass each control through the Loader to check // it's not lame foreach (Control ctl in Controls) { string s = SafeLoader.LoadControl(ctl); // If it's bad, smack it downnnn! if (s == string.Empty) { ctl.Visible = false; // Prevent Rendering string ctlType = ctl.GetType().Name; Response.Write("<b>Problem Occurred Rendering " + ctlType + " '" + ctl.ID + "'.</b>"); } } } Enjoy!
The goal of the SafeLoader is that you dont care what the html is, you simply try and "output" the control (within the "bubble") and determine if it loads OK in its current state. If it does (i.e. the html is returned) then you can do what you like with it, output the html, add the control to the controls collection, whatever! If not, then again, you can do what you like, render an error message, throw a custom exception.. The choice is yours! I hope this helps clarify things for you, if not, then please shout :) A: I used @Keith's approach, but the problem is that the control is rendered up until the Exception is thrown, potentially resulting in open HTML tags. I'm also rendering the exception information in the Control if in Debug mode. protected override void Render(System.Web.UI.HtmlTextWriter writer) { try { // Render the module to a local a temporary writer so that if an Exception occurs // the control is not halfway rendered - "it is all or nothing" proposition System.IO.StringWriter sw = new System.IO.StringWriter(); System.Web.UI.HtmlTextWriter htw = new System.Web.UI.HtmlTextWriter(sw); base.Render(htw); // We made it! Copy the Control Render over writer.Write(sw.GetStringBuilder().ToString()); } catch (System.Exception ex) { string message = string.Format("Error Rendering Control {0}\n", ID); Log.Error(message, ex); if (Page.IsDebug) writer.Write(string.Format("{0}<br>Exception:<br><pre>{1}\n{2}</pre>", message, ex.Message, ex.StackTrace)); } }
{ "language": "en", "url": "https://stackoverflow.com/questions/10793", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: ASP.NET Web Site or Web Project When creating a new ASP.NET project in Visual Studio should I chose create: website or project? I understand that web application project was the way to do it back in the day with VS 2003 but is it still applicable today? What are some of the caveats using one over the other? A: There's a pretty good comparison chart on MSDN. Website projects are simple, in that all files added to the project folders are automatically compiled and included, which was supposedly added to make it more palatable to classic ASP and PHP developers. Once benefit is that it includes build providers, which allow for certain actions to be associated with a filetype - that's how the first release of SubSonic would rebuild the data access layer when you added a .abp file to the site. Web Application Projects are a lot more flexible, though. For instance, all class libraries in a Website Project need to be in the App_Code folder, which is frustrating in a complex application. There are a lot of scenarios which just don't work for a Website Project. You can convert from one to another, although if you're unsure I'd recommend just starting with a Web Application. A: I strongly disagree with some of what the Websites and Web Projects article says. First, it wasn't any "small" group of developers who rebelled - I'd suggest it was most of us, who had not been asked if we wanted to totally change the way we developed. They certainly didn't ask me if I wanted to lose six weeks of development time figuring out what they did to break a perfectly good web service. It wasn't some "download" MS released - it was VS2005 SP1, and they released it pretty damned fast. In their plusses for projectless development, the "Copy Project" command works very well, and we don't have to avoid debug or project files; you can move pages around - if you don't use source control; where do they get that you have to lock the project files in order to collaborate? What are they using for source control? I'd also add one question to the debate: what's so special about web sites that they should be the only "project" type (as far as I know) that doesn't use a "project" file? I can't think of anything, unless it's that Microsoft thought that web developers were too simpleminded to understand projects. Of course, if anyone knows of any other Visual Studio "project" type that does not use a project file, I'd be grateful to be informed of it.
{ "language": "en", "url": "https://stackoverflow.com/questions/10798", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Ruby mixins and calling super methods Ok, so I've been refactoring my code in my little Rails app in an effort to remove duplication, and in general make my life easier (as I like an easy life). Part of this refactoring, has been to move code that's common to two of my models to a module that I can include where I need it. So far, so good. Looks like it's going to work out, but I've just hit a problem that I'm not sure how to get around. The module (which I've called sendable), is just going to be the code that handles faxing, e-mailing, or printing a PDF of the document. So, for example, I have a purchase order, and I have Internal Sales Orders (imaginatively abbreviated to ISO). The problem I've struck, is that I want some variables initialised (initialized for people who don't spell correctly :P ) after the object is loaded, so I've been using the after_initialize hook. No problem... until I start adding some more mixins. The problem I have, is that I can have an after_initialize in any one of my mixins, so I need to include a super call at the start to make sure the other mixin after_initialize calls get called. Which is great, until I end up calling super and I get an error because there is no super to call. Here's a little example, in case I haven't been confusing enough: class Iso < ActiveRecord::Base include Shared::TracksSerialNumberExtension include Shared::OrderLines extend Shared::Filtered include Sendable::Model validates_presence_of :customer validates_associated :lines owned_by :customer order_lines :despatched # Mixin tracks_serial_numbers :items # Mixin sendable :customer # Mixin attr_accessor :address def initialize( params = nil ) super self.created_at ||= Time.now.to_date end end So, if each one of the mixins have an after_initialize call, with a super call, how can I stop that last super call from raising the error? How can I test that the super method exists before I call it? A: You can use this: super if defined?(super) Here is an example: class A end class B < A def t super if defined?(super) puts "Hi from B" end end B.new.t A: Have you tried alias_method_chain? You can basically chained up all your after_initialize calls. It acts like a decorator: each new method adds a new layer of functionality and passes the control onto the "overridden" method to do the rest. A: The including class (the thing that inherits from ActiveRecord::Base, which, in this case is Iso) could define its own after_initialize, so any solution other than alias_method_chain (or other aliasing that saves the original) risks overwriting code. @Orion Edwards' solution is the best I can come up with. There are others, but they are far more hackish. alias_method_chain also has the benefit of creating named versions of the after_initialize method, meaning you can customize the call order in those rare cases that it matters. Otherwise, you're at the mercy of whatever order the including class includes the mixins. later: I've posted a question to the ruby-on-rails-core mailing list about creating default empty implementations of all callbacks. The saving process checks for them all anyway, so I don't see why they shouldn't be there. The only downside is creating extra empty stack frames, but that's pretty cheap on every known implementation. A: You can just throw a quick conditional in there: super if respond_to?('super') and you should be fine - no adding useless methods; nice and clean. 
A: Rather than checking if the super method exists, you can just define it class ActiveRecord::Base def after_initialize end end This works in my testing, and shouldn't break any of your existing code, because all your other classes which define it will just be silently overriding this method anyway
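Putting the two ideas together (guarding super in each mixin, or giving the base class a no-op), here is a sketch of how the chained after_initialize calls might end up looking; the module names and instance variables are invented for illustration:

module Sendable
  def after_initialize
    super if defined?(super)   # hand control up the chain only if something is there
    @address ||= ''
  end
end

module TracksSerialNumbers
  def after_initialize
    super if defined?(super)
    @serial_numbers ||= []
  end
end

class Iso < ActiveRecord::Base
  include Sendable
  include TracksSerialNumbers

  def after_initialize
    super   # safe: the mixins (or a no-op on the base class) respond to it now
    self.created_at ||= Time.now.to_date
  end
end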
{ "language": "en", "url": "https://stackoverflow.com/questions/10808", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35" }
Q: Why do I get the error "Unable to update the password" when calling AzMan? I'm doing an authorization check from a WinForms application with the help of the AzMan authorization provider from Enterprise Library and am receiving the following error: Unable to update the password. The value provided as the current password is incorrect. (Exception from HRESULT: 0x8007052B) (Microsoft.Practices.EnterpriseLibrary.Security.AzMan) Unable to update the password. The value provided as the current password is incorrect. (Exception from HRESULT: 0x8007052B) (Microsoft.Interop.Security.AzRoles) The AzMan store is hosted in ADAM on another computer in the same domain. Other computers and users do not have this problem. The user making the call has read access to both ADAM and the AzMan store. The computer running the WinForms app and the computer running ADAM are both on Windows XP SP2. I've had access problems with AzMan before that I've resolved, but this is a new one... What am I missing? A: For AzMan with ASP.NET, turn on impersonation in web.config (<identity impersonate="true" username="xx" password="xx" />), and make sure with an AD administrator that the impersonation account has "reader" permissions on the AzMan store; plus, give write permissions to this account on the Temporary ASP.NET Files folder (under C:\Windows\Microsoft.NET\<framework>). A: I found out from the event log that there was a security issue with the user making the call to AzMan from a remote computer. The user did not belong to the local Users group on the computer running ADAM/AzMan. When I corrected that, everything worked again.
{ "language": "en", "url": "https://stackoverflow.com/questions/10810", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: SQL, Auxiliary table of numbers For certain types of sql queries, an auxiliary table of numbers can be very useful. It may be created as a table with as many rows as you need for a particular task or as a user defined function that returns the number of rows required in each query. What is the optimal way to create such a function? A: This article gives 14 different possible solutions with discussion of each. The important point is that: suggestions regarding efficiency and performance are often subjective. Regardless of how a query is being used, the physical implementation determines the efficiency of a query. Therefore, rather than relying on biased guidelines, it is imperative that you test the query and determine which one performs better. I personally liked: WITH Nbrs ( n ) AS ( SELECT 1 UNION ALL SELECT 1 + n FROM Nbrs WHERE n < 500 ) SELECT n FROM Nbrs OPTION ( MAXRECURSION 500 ) A: This view is super fast and contains all positive int values. CREATE VIEW dbo.Numbers WITH SCHEMABINDING AS WITH Int1(z) AS (SELECT 0 UNION ALL SELECT 0) , Int2(z) AS (SELECT 0 FROM Int1 a CROSS JOIN Int1 b) , Int4(z) AS (SELECT 0 FROM Int2 a CROSS JOIN Int2 b) , Int8(z) AS (SELECT 0 FROM Int4 a CROSS JOIN Int4 b) , Int16(z) AS (SELECT 0 FROM Int8 a CROSS JOIN Int8 b) , Int32(z) AS (SELECT TOP 2147483647 0 FROM Int16 a CROSS JOIN Int16 b) SELECT ROW_NUMBER() OVER (ORDER BY z) AS n FROM Int32 GO A: From SQL Server 2022 you will be able to do SELECT Value FROM GENERATE_SERIES(START = 1, STOP = 100, STEP=1) In the public preview of SQL Server 2022 (CTP2.0) there are some very promising elements and other less so. Hopefully the negative aspects can be addressed before the actual release. ✅ Execution time for number generation The below generates 10,000,000 numbers in 700 ms in my test VM (the assigning to a variable removes any overhead from sending results to the client) DECLARE @Value INT SELECT @Value =[value] FROM GENERATE_SERIES(START=1, STOP=10000000) ✅ Cardinality estimates It is simple to calculate how many numbers will be returned from the operator and SQL Server takes advantage of this as shown below. ❌ Unnecessary Halloween Protection The plan for the below insert has a completely unnecessary spool - presumably as SQL Server does not currently have logic to determine the source of the rows is not potentially the destination. CREATE TABLE dbo.NumberHeap(Number INT); INSERT INTO dbo.Numbers SELECT [value] FROM GENERATE_SERIES(START=1, STOP=10); When inserting into a table with a clustered index on Number the spool may be replaced by a sort instead (that also provides the phase separation) ❌ Unnecessary sorts The below will return the rows in order anyway but SQL Server apparently does not yet have the properties set to guarantee this and take advantage of it in the execution plan. SELECT [value] FROM GENERATE_SERIES(START=1, STOP=10) ORDER BY [value] RE: This last point Aaron Bertrand indicates that this is not a box currently ticked but that this may be forthcoming. A: The most optimal function would be to use a table instead of a function. Using a function causes extra CPU load to create the values for the data being returned, especially if the values being returned cover a very large range. A: Heh... sorry I'm so late responding to an old post. And, yeah, I had to respond because the most popular answer (at the time, the Recursive CTE answer with the link to 14 different methods) on this thread is, ummm... performance challenged at best. 
First, the article with the 14 different solutions is fine for seeing the different methods of creating a Numbers/Tally table on the fly but as pointed out in the article and in the cited thread, there's a very important quote... "suggestions regarding efficiency and performance are often subjective. Regardless of how a query is being used, the physical implementation determines the efficiency of a query. Therefore, rather than relying on biased guidelines, it is imperative that you test the query and determine which one performs better." Ironically, the article itself contains many subjective statements and "biased guidelines" such as "a recursive CTE can generate a number listing pretty efficiently" and "This is an efficient method of using WHILE loop from a newsgroup posting by Itzik Ben-Gen" (which I'm sure he posted just for comparison purposes). C'mon folks... Just mentioning Itzik's good name may lead some poor slob into actually using that horrible method. The author should practice what (s)he preaches and should do a little performance testing before making such ridiculously incorrect statements especially in the face of any scalablility. With the thought of actually doing some testing before making any subjective claims about what any code does or what someone "likes", here's some code you can do your own testing with. Setup profiler for the SPID you're running the test from and check it out for yourself... just do a "Search'n'Replace" of the number 1000000 for your "favorite" number and see... --===== Test for 1000000 rows ================================== GO --===== Traditional RECURSIVE CTE method WITH Tally (N) AS ( SELECT 1 UNION ALL SELECT 1 + N FROM Tally WHERE N < 1000000 ) SELECT N INTO #Tally1 FROM Tally OPTION (MAXRECURSION 0); GO --===== Traditional WHILE LOOP method CREATE TABLE #Tally2 (N INT); SET NOCOUNT ON; DECLARE @Index INT; SET @Index = 1; WHILE @Index <= 1000000 BEGIN INSERT #Tally2 (N) VALUES (@Index); SET @Index = @Index + 1; END; GO --===== Traditional CROSS JOIN table method SELECT TOP (1000000) ROW_NUMBER() OVER (ORDER BY (SELECT 1)) AS N INTO #Tally3 FROM Master.sys.All_Columns ac1 CROSS JOIN Master.sys.ALL_Columns ac2; GO --===== Itzik's CROSS JOINED CTE method WITH E00(N) AS (SELECT 1 UNION ALL SELECT 1), E02(N) AS (SELECT 1 FROM E00 a, E00 b), E04(N) AS (SELECT 1 FROM E02 a, E02 b), E08(N) AS (SELECT 1 FROM E04 a, E04 b), E16(N) AS (SELECT 1 FROM E08 a, E08 b), E32(N) AS (SELECT 1 FROM E16 a, E16 b), cteTally(N) AS (SELECT ROW_NUMBER() OVER (ORDER BY N) FROM E32) SELECT N INTO #Tally4 FROM cteTally WHERE N <= 1000000; GO --===== Housekeeping DROP TABLE #Tally1, #Tally2, #Tally3, #Tally4; GO While we're at it, here's the numbers I get from SQL Profiler for the values of 100, 1000, 10000, 100000, and 1000000... 
SPID TextData Dur(ms) CPU Reads Writes ---- ---------------------------------------- ------- ----- ------- ------ 51 --===== Test for 100 rows ============== 8 0 0 0 51 --===== Traditional RECURSIVE CTE method 16 0 868 0 51 --===== Traditional WHILE LOOP method CR 73 16 175 2 51 --===== Traditional CROSS JOIN table met 11 0 80 0 51 --===== Itzik's CROSS JOINED CTE method 6 0 63 0 51 --===== Housekeeping DROP TABLE #Tally 35 31 401 0 51 --===== Test for 1000 rows ============= 0 0 0 0 51 --===== Traditional RECURSIVE CTE method 47 47 8074 0 51 --===== Traditional WHILE LOOP method CR 80 78 1085 0 51 --===== Traditional CROSS JOIN table met 5 0 98 0 51 --===== Itzik's CROSS JOINED CTE method 2 0 83 0 51 --===== Housekeeping DROP TABLE #Tally 6 15 426 0 51 --===== Test for 10000 rows ============ 0 0 0 0 51 --===== Traditional RECURSIVE CTE method 434 344 80230 10 51 --===== Traditional WHILE LOOP method CR 671 563 10240 9 51 --===== Traditional CROSS JOIN table met 25 31 302 15 51 --===== Itzik's CROSS JOINED CTE method 24 0 192 15 51 --===== Housekeeping DROP TABLE #Tally 7 15 531 0 51 --===== Test for 100000 rows =========== 0 0 0 0 51 --===== Traditional RECURSIVE CTE method 4143 3813 800260 154 51 --===== Traditional WHILE LOOP method CR 5820 5547 101380 161 51 --===== Traditional CROSS JOIN table met 160 140 479 211 51 --===== Itzik's CROSS JOINED CTE method 153 141 276 204 51 --===== Housekeeping DROP TABLE #Tally 10 15 761 0 51 --===== Test for 1000000 rows ========== 0 0 0 0 51 --===== Traditional RECURSIVE CTE method 41349 37437 8001048 1601 51 --===== Traditional WHILE LOOP method CR 59138 56141 1012785 1682 51 --===== Traditional CROSS JOIN table met 1224 1219 2429 2101 51 --===== Itzik's CROSS JOINED CTE method 1448 1328 1217 2095 51 --===== Housekeeping DROP TABLE #Tally 8 0 415 0 As you can see, the Recursive CTE method is the second worst only to the While Loop for Duration and CPU and has 8 times the memory pressure in the form of logical reads than the While Loop. It's RBAR on steroids and should be avoided, at all cost, for any single row calculations just as a While Loop should be avoided. There are places where recursion is quite valuable but this ISN'T one of them. As a side bar, Mr. Denny is absolutely spot on... a correctly sized permanent Numbers or Tally table is the way to go for most things. What does correctly sized mean? Well, most people use a Tally table to generate dates or to do splits on VARCHAR(8000). If you create an 11,000 row Tally table with the correct clustered index on "N", you'll have enough rows to create more than 30 years worth of dates (I work with mortgages a fair bit so 30 years is a key number for me) and certainly enough to handle a VARCHAR(8000) split. Why is "right sizing" so important? If the Tally table is used a lot, it easily fits in cache which makes it blazingly fast without much pressure on memory at all. Last but not least, every one knows that if you create a permanent Tally table, it doesn't much matter which method you use to build it because 1) it's only going to be made once and 2) if it's something like an 11,000 row table, all of the methods are going to run "good enough". So why all the indigination on my part about which method to use??? 
The answer is that some poor guy/gal who doesn't know any better and just needs to get his or her job done might see something like the Recursive CTE method and decide to use it for something much larger and much more frequently used than building a permanent Tally table and I'm trying to protect those people, the servers their code runs on, and the company that owns the data on those servers. Yeah... it's that big a deal. It should be for everyone else, as well. Teach the right way to do things instead of "good enough". Do some testing before posting or using something from a post or book... the life you save may, in fact, be your own especially if you think a recursive CTE is the way to go for something like this. ;-) Thanks for listening... A: Using SQL Server 2016+ to generate numbers table you could use OPENJSON : -- range from 0 to @max - 1 DECLARE @max INT = 40000; SELECT rn = CAST([key] AS INT) FROM OPENJSON(CONCAT('[1', REPLICATE(CAST(',1' AS VARCHAR(MAX)),@max-1),']')); LiveDemo Idea taken from How can we use OPENJSON to generate series of numbers? A: edit: see Conrad's comment below. Jeff Moden's answer is great ... but I find on Postgres that the Itzik method fails unless you remove the E32 row. Slightly faster on postgres (40ms vs 100ms) is another method I found on here adapted for postgres: WITH E00 (N) AS ( SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 ), E01 (N) AS (SELECT a.N FROM E00 a CROSS JOIN E00 b), E02 (N) AS (SELECT a.N FROM E01 a CROSS JOIN E01 b ), E03 (N) AS (SELECT a.N FROM E02 a CROSS JOIN E02 b LIMIT 11000 -- end record 11,000 good for 30 yrs dates ), -- max is 100,000,000, starts slowing e.g. 1 million 1.5 secs, 2 mil 2.5 secs, 3 mill 4 secs Tally (N) as (SELECT row_number() OVER (ORDER BY a.N) FROM E03 a) SELECT N FROM Tally As I am moving from SQL Server to Postgres world, may have missed a better way to do tally tables on that platform ... INTEGER()? SEQUENCE()? A: Still much later, I'd like to contribute a slightly different 'traditional' CTE (does not touch base tables to get the volume of rows): --===== Hans CROSS JOINED CTE method WITH Numbers_CTE (Digit) AS (SELECT 0 UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) SELECT HundredThousand.Digit * 100000 + TenThousand.Digit * 10000 + Thousand.Digit * 1000 + Hundred.Digit * 100 + Ten.Digit * 10 + One.Digit AS Number INTO #Tally5 FROM Numbers_CTE AS One CROSS JOIN Numbers_CTE AS Ten CROSS JOIN Numbers_CTE AS Hundred CROSS JOIN Numbers_CTE AS Thousand CROSS JOIN Numbers_CTE AS TenThousand CROSS JOIN Numbers_CTE AS HundredThousand This CTE performs more READs then Itzik's CTE but less then the Traditional CTE. However, it consistently performs less WRITES then the other queries. As you know, Writes are consistently quite much more expensive then Reads. The duration depends heavily on the number of cores (MAXDOP) but, on my 8core, performs consistently quicker (less duration in ms) then the other queries. I am using: Microsoft SQL Server 2012 - 11.0.5058.0 (X64) May 14 2014 18:34:29 Copyright (c) Microsoft Corporation Enterprise Edition (64-bit) on Windows NT 6.3 <X64> (Build 9600: ) on Windows Server 2012 R2, 32 GB, Xeon X3450 @2.67Ghz, 4 cores HT enabled.
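As a footnote to the permanent-table advice above: since a permanent Tally table is built only once, any of the benchmarked generation methods will do, and what matters afterwards is the clustered index on N. A minimal sketch, assuming SQL Server 2005 or later (the table name, the 11,000 row count, and the index name are just placeholders):
--===== One-time build of a permanent Tally table (sketch only)
SELECT TOP (11000)
       IDENTITY(INT, 1, 1) AS N
INTO   dbo.Tally
  FROM master.sys.all_columns ac1
 CROSS JOIN master.sys.all_columns ac2;
--===== The unique clustered index is what keeps later lookups cheap
CREATE UNIQUE CLUSTERED INDEX IX_Tally_N ON dbo.Tally (N);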
{ "language": "en", "url": "https://stackoverflow.com/questions/10819", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "64" }
Q: Fast database access test from .NET What would be a very fast way to determine if your connection string lets you connect to a database? Normally a connection attempt keeps the user waiting a long time before notifying them that the attempt was futile anyway. A: Shorten the timeout on the connection string and execute something trivial. The wait should be about the same as the timeout. You would still need a second or two though. A: The details depend on which database you are connecting to. In SQL Server 2005, from .NET, you can specify a connection timeout in your connection string like so: server=<server>;database=<database>;uid=<user>;password=<password>;Connect Timeout=3 This will try to connect to the server and, if it doesn't succeed within three seconds, it will throw a timeout error.
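Putting the two answers above together, a rough C# sketch might look like the following; this assumes SQL Server with the classic System.Data.SqlClient provider, and the method name and 3-second timeout are only illustrative:
using System.Data.SqlClient;

public static bool CanConnect(string baseConnectionString)
{
    // Force a short timeout so an unreachable server fails fast
    // instead of waiting for the default 15 seconds.
    var builder = new SqlConnectionStringBuilder(baseConnectionString)
    {
        ConnectTimeout = 3
    };
    try
    {
        using (var connection = new SqlConnection(builder.ConnectionString))
        {
            connection.Open();   // throws within roughly the timeout if it cannot connect
            return true;
        }
    }
    catch (SqlException)
    {
        return false;            // unreachable server, bad credentials, wrong database, etc.
    }
}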
{ "language": "en", "url": "https://stackoverflow.com/questions/10822", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Compare a date string to datetime in SQL Server? In SQL Server I have a DATETIME column which includes a time element. Example: '14 AUG 2008 14:23:019' What is the best method to only select the records for a particular day, ignoring the time part? Example: (Not safe, as it does not match the time part and returns no rows) DECLARE @p_date DATETIME SET @p_date = CONVERT( DATETIME, '14 AUG 2008', 106 ) SELECT * FROM table1 WHERE column_datetime = @p_date Note: Given this site is also about jotting down notes and techniques you pick up and then forget, I'm going to post my own answer to this question as DATETIME stuff in MSSQL is probably the topic I lookup most in SQLBOL. Update Clarified example to be more specific. Edit Sorry, But I've had to down-mod WRONG answers (answers that return wrong results). @Jorrit: WHERE (date>'20080813' AND date<'20080815') will return the 13th and the 14th. @wearejimbo: Close, but no cigar! badge awarded to you. You missed out records written at 14/08/2008 23:59:001 to 23:59:999 (i.e. Less than 1 second before midnight.) A: Just compare the year, month and day values. Declare @DateToSearch DateTime Set @DateToSearch = '14 AUG 2008' SELECT * FROM table1 WHERE Year(column_datetime) = Year(@DateToSearch) AND Month(column_datetime) = Month(@DateToSearch) AND Day(column_datetime) = Day(@DateToSearch) A: Technique 2: DECLARE @p_date DATETIME SET @p_date = CONVERT( DATETIME, '14 AUG 2008', 106 ) SELECT * FROM table1 WHERE DATEDIFF( d, column_datetime, @p_date ) = 0 If the column_datetime field is not indexed, and is unlikely to be (or the index is unlikely to be used) then using DATEDIFF() is shorter. A: Something like this? SELECT * FROM table1 WHERE convert(varchar, column_datetime, 111) = '2008/08/14' A: Good point about the index in the answer you accepted. Still, if you really search only on specific DATE or DATE ranges often, then the best solution I found is to add another persisted computed column to your table which would only contain the DATE, and add index on this column: ALTER TABLE "table1" ADD "column_date" AS CONVERT(DATE, "column_datetime") PERSISTED Add index on that column: CREATE NONCLUSTERED INDEX "table1_column_date_nu_nci" ON "table1" ( "column_date" ASC ) GO Then your search will be even faster: DECLARE @p_date DATE SET @p_date = CONVERT( DATE, '14 AUG 2008', 106 ) SELECT * FROM table1 WHERE column_date = @p_date A: I normally convert date-time to date and compare them, like these: SELECT 'Same Date' WHERE CAST(getDate() as date) = cast('2/24/2012 2:23 PM' as date) or SELECT 'Same Date' WHERE DATEDIFF(dd, cast(getDate() as date), cast('2/24/2012 2:23 PM' as date)) = 0 A: Technique 1: DECLARE @p_date DATETIME SET @p_date = CONVERT( DATETIME, '14 AUG 2008', 106 ) SELECT * FROM table1 WHERE column_datetime >= @p_date AND column_datetime < DATEADD(d, 1, @p_date) The advantage of this is that it will use any index on 'column_datetime' if it exists. A: This function Cast(Floor(Cast(GetDate() As Float)) As DateTime) returns a datetime datatype with the time portion removed and could be used as so. 
Select * Table1 Where Cast(Floor(Cast(Column_DateTime As Float)) As DateTime) = '14-AUG-2008' or DECLARE @p_date DATETIME SET @p_date = Cast('14 AUG 2008' as DateTime) SELECT * FROM table1 WHERE Cast(Floor(Cast(column_datetime As Float)) As DateTime) = @p_date A: How to get the DATE portion of a DATETIME field in MS SQL Server: One of the quickest and neatest ways to do this is using DATEADD(dd, DATEDIFF( dd, 0, @DAY ), 0) It avoids the CPU busting "convert the date into a string without the time and then converting it back again" logic. It also does not expose the internal implementation that the "time portion is expressed as a fraction" of the date. Get the date of the first day of the month DATEADD(dd, DATEDIFF( dd, -1, GetDate() - DAY(GetDate()) ), 0) Get the date rfom 1 year ago DATEADD(m,-12,DATEADD(dd, DATEDIFF( dd, -1, GetDate() - DAY(GetDate()) ), 0)) A: In SQL Server 2008, you could use the new DATE datatype DECLARE @pDate DATE='2008-08-14' SELECT colA, colB FROM table1 WHERE convert(date, colDateTime) = @pDate @Guy. I think you will find that this solution scales just fine. Have a look at the query execution plan of your original query. And for mine: A: SELECT * FROM table1 WHERE CONVERT(varchar(10),columnDatetime,121) = CONVERT(varchar(10),CONVERT('14 AUG 2008' ,smalldatetime),121) This will convert the datatime and the string into varchars of the format "YYYY-MM-DD". This is very ugly, but should work A: I know this isn't exactly how you want to do this, but it could be a start: SELECT * FROM (SELECT *, DATEPART(yy, column_dateTime) as Year, DATEPART(mm, column_dateTime) as Month, DATEPART(dd, column_dateTime) as Day FROM table1) WHERE Year = '2008' AND Month = '8' AND Day = '14' A: DECLARE @Dat SELECT * FROM Jai WHERE CONVERT(VARCHAR(2),DATEPART("dd",Date)) +'/'+ CONVERT(VARCHAR(2),DATEPART("mm",Date)) +'/'+ CONVERT(VARCHAR(4), DATEPART("yy",Date)) = @Dat A: Date can be compared in sqlserver using string comparision: e.g. DECLARE @strDate VARCHAR(15) SET @strDate ='07-12-2010' SELECT * FROM table WHERE CONVERT(VARCHAR(15),dtInvoice, 112)>= CONVERT(VARCHAR(15),@strDate , 112) A: SELECT CONVERT(VARCHAR(2),DATEPART("dd",doj)) + '/' + CONVERT(VARCHAR(2),DATEPART("mm",doj)) + '/' + CONVERT(VARCHAR(4),DATEPART("yy",doj)) FROM emp A: SELECT * FROM tablename WHERE CAST(FLOOR(CAST(column_datetime AS FLOAT))AS DATETIME) = '30 jan 2012' A: The best way is to simply extract the date part using the SQL DATE() Function: SELECT * FROM table1 WHERE DATE(column_datetime) = @p_date; A: There are many formats for date in SQL which are being specified. Refer https://msdn.microsoft.com/en-in/library/ms187928.aspx Converting and comparing varchar column with selected dates. Syntax: SELECT * FROM tablename where CONVERT(datetime,columnname,103) between '2016-03-01' and '2016-03-03' In CONVERT(DATETIME,COLUMNNAME,103) "103" SPECIFIES THE DATE FORMAT as dd/mm/yyyy A: In sqlserver DECLARE @p_date DATE SELECT * FROM table1 WHERE column_dateTime=@p_date In C# Pass the short string of date value using ToShortDateString() function. sample: DateVariable.ToShortDateString();
{ "language": "en", "url": "https://stackoverflow.com/questions/10825", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "52" }
Q: How to programmatically send SMS on the iPhone? Does anybody know if it's possible, and how, to programmatically send a SMS from the iPhone, with the official SDK / Cocoa Touch? A: Here is the Swift version of code to send SMS in iOS. Please noted that it only works in real devices. Code tested in iOS 7+. You can read more here. 1) Create a new Class which inherits MFMessageComposeViewControllerDelegate and NSObject: import Foundation import MessageUI class MessageComposer: NSObject, MFMessageComposeViewControllerDelegate { // A wrapper function to indicate whether or not a text message can be sent from the user's device func canSendText() -> Bool { return MFMessageComposeViewController.canSendText() } // Configures and returns a MFMessageComposeViewController instance func configuredMessageComposeViewController(textMessageRecipients:[String] ,textBody body:String) -> MFMessageComposeViewController { let messageComposeVC = MFMessageComposeViewController() messageComposeVC.messageComposeDelegate = self // Make sure to set this property to self, so that the controller can be dismissed! messageComposeVC.recipients = textMessageRecipients messageComposeVC.body = body return messageComposeVC } // MFMessageComposeViewControllerDelegate callback - dismisses the view controller when the user is finished with it func messageComposeViewController(controller: MFMessageComposeViewController!, didFinishWithResult result: MessageComposeResult) { controller.dismissViewControllerAnimated(true, completion: nil) } } 2) How to use this class: func openMessageComposerHelper(sender:AnyObject ,withIndexPath indexPath: NSIndexPath) { var recipients = [String]() //modify your recipients here if (messageComposer.canSendText()) { println("can send text") // Obtain a configured MFMessageComposeViewController let body = Utility.createInvitationMessageText() let messageComposeVC = messageComposer.configuredMessageComposeViewController(recipients, textBody: body) // Present the configured MFMessageComposeViewController instance // Note that the dismissal of the VC will be handled by the messageComposer instance, // since it implements the appropriate delegate call-back presentViewController(messageComposeVC, animated: true, completion: nil) } else { // Let the user know if his/her device isn't able to send text messages self.displayAlerViewWithTitle("Cannot Send Text Message", andMessage: "Your device is not able to send text messages.") } } A: You can use a sms:[target phone number] URL to open the SMS application, but there are no indications on how to prefill a SMS body with text. A: Restrictions If you could send an SMS within a program on the iPhone, you'll be able to write games that spam people in the background. I'm sure you really want to have spams from your friends, "Try out this new game! It roxxers my boxxers, and yours will be too! roxxersboxxers.com!!!! If you sign up now you'll get 3,200 RB points!!" Apple has restrictions for automated (or even partially automated) SMS and dialing operations. (Imagine if the game instead dialed 911 at a particular time of day) Your best bet is to set up an intermediate server on the internet that uses an online SMS sending service and send the SMS via that route if you need complete automation. (ie, your program on the iPhone sends a UDP packet to your server, which sends the real SMS) iOS 4 Update iOS 4, however, now provides a viewController you can import into your application. 
You prepopulate the SMS fields, then the user can initiate the SMS send within the controller. Unlike using the "SMS:..." url format, this allows your application to stay open, and allows you to populate both the to and the body fields. You can even specify multiple recipients. This prevents applications from sending automated SMS without the user explicitly aware of it. You still cannot send fully automated SMS from the iPhone itself, it requires some user interaction. But this at least allows you to populate everything, and avoids closing the application. The MFMessageComposeViewController class is well documented, and tutorials show how easy it is to implement. iOS 5 Update iOS 5 includes messaging for iPod touch and iPad devices, so while I've not yet tested this myself, it may be that all iOS devices will be able to send SMS via MFMessageComposeViewController. If this is the case, then Apple is running an SMS server that sends messages on behalf of devices that don't have a cellular modem. iOS 6 Update No changes to this class. iOS 7 Update You can now check to see if the message medium you are using will accept a subject or attachments, and what kind of attachments it will accept. You can edit the subject and add attachments to the message, where the medium allows it. iOS 8 Update No changes to this class. iOS 9 Update No changes to this class. iOS 10 Update No changes to this class. iOS 11 Update No significant changes to this class Limitations to this class Keep in mind that this won't work on phones without iOS 4, and it won't work on the iPod touch or the iPad, except, perhaps, under iOS 5. You must either detect the device and iOS limitations prior to using this controller, or risk restricting your app to recently upgraded 3G, 3GS, and 4 iPhones. However, an intermediate server that sends SMS will allow any and all of these iOS devices to send SMS as long as they have internet access, so it may still be a better solution for many applications. Alternately, use both, and only fall back to an online SMS service when the device doesn't support it. A: There is a class in iOS 4 which supports sending messages with body and recipents from your application. It works the same as sending mail. You can find the documentation here: link text A: - (void)sendSMS:(NSString *)bodyOfMessage recipientList:(NSArray *)recipients { UIPasteboard *pasteboard = [UIPasteboard generalPasteboard]; UIImage *ui =resultimg.image; pasteboard.image = ui; [[UIApplication sharedApplication] openURL:[NSURL URLWithString:@"sms:"]]; } A: //call method with name and number. 
-(void)openMessageViewWithName:(NSString*)contactName withPhone:(NSString *)phone{ CTTelephonyNetworkInfo *networkInfo=[[CTTelephonyNetworkInfo alloc]init]; CTCarrier *carrier=networkInfo.subscriberCellularProvider; NSString *Countrycode = carrier.isoCountryCode; if ([Countrycode length]>0) //Check If Sim Inserted { [self sendSMS:msg recipientList:[NSMutableArray arrayWithObject:phone]]; } else { [AlertHelper showAlert:@"Message" withMessage:@"No sim card inserted"]; } } //Method for sending message - (void)sendSMS:(NSString *)bodyOfMessage recipientList:(NSMutableArray *)recipients{ MFMessageComposeViewController *controller1 = [[MFMessageComposeViewController alloc] init] ; controller1 = [[MFMessageComposeViewController alloc] init] ; if([MFMessageComposeViewController canSendText]) { controller1.body = bodyOfMessage; controller1.recipients = recipients; controller1.messageComposeDelegate = self; [self presentViewController:controller1 animated:YES completion:Nil]; } } A: One of the systems of inter-process communication in MacOS is XPC. This system layer has been developed for inter-process communication based on the transfer of plist structures using libSystem and launchd. In fact, it is an interface that allows managing processes via the exchange of such structures as dictionaries. Due to heredity, iOS 5 possesses this mechanism as well. You might already understand what I mean by this introduction. Yep, there are system services in iOS that include tools for XPC communication. And I want to exemplify the work with a daemon for SMS sending. However, it should be mentioned that this ability is fixed in iOS 6, but is relevant for iOS 5.0—5.1.1. Jailbreak, Private Framework, and other illegal tools are not required for its exploitation. Only the set of header files from the directory /usr/include/xpc/* are needed. One of the elements for SMS sending in iOS is the system service com.apple.chatkit, the tasks of which include generation, management, and sending of short text messages. For the ease of control, it has the publicly available communication port com.apple.chatkit.clientcomposeserver.xpc. Using the XPC subsystem, you can generate and send messages without user's approval.  Well, let's try to create a connection. xpc_connection_t myConnection; dispatch_queue_t queue = dispatch_queue_create("com.apple.chatkit.clientcomposeserver.xpc", DISPATCH_QUEUE_CONCURRENT); myConnection = xpc_connection_create_mach_service("com.apple.chatkit.clientcomposeserver.xpc", queue, XPC_CONNECTION_MACH_SERVICE_PRIVILEGED); Now we have the XPC connection myConnection set to the service of SMS sending. However, XPC configuration provides for creation of suspended connections —we need to take one more step for the activation. xpc_connection_set_event_handler(myConnection, ^(xpc_object_t event){ xpc_type_t xtype = xpc_get_type(event); if(XPC_TYPE_ERROR == xtype) { NSLog(@"XPC sandbox connection error: %s\n", xpc_dictionary_get_string(event, XPC_ERROR_KEY_DESCRIPTION)); } // Always set an event handler. More on this later. NSLog(@"Received a message event!"); }); xpc_connection_resume(myConnection); The connection is activated. Right at this moment iOS 6 will display a message in the telephone log that this type of communication is forbidden. Now we need to generate a dictionary similar to xpc_dictionary with the data required for the message sending. 
NSArray *recipient = [NSArray arrayWithObjects:@"+7 (90*) 000-00-00", nil]; NSData *ser_rec = [NSPropertyListSerialization dataWithPropertyList:recipient format:200 options:0 error:NULL]; xpc_object_t mydict = xpc_dictionary_create(0, 0, 0); xpc_dictionary_set_int64(mydict, "message-type", 0); xpc_dictionary_set_data(mydict, "recipients", [ser_rec bytes], [ser_rec length]); xpc_dictionary_set_string(mydict, "text", "hello from your application!"); Little is left: send the message to the XPC port and make sure it is delivered. xpc_connection_send_message(myConnection, mydict); xpc_connection_send_barrier(myConnection, ^{ NSLog(@"The message has been successfully delivered"); }); That's all. SMS sent. A: Add the MessageUI.Framework and use the following code #import <MessageUI/MessageUI.h> And then: if ([MFMessageComposeViewController canSendText]) { MFMessageComposeViewController *messageComposer = [[MFMessageComposeViewController alloc] init]; NSString *message = @"Your Message here"; [messageComposer setBody:message]; messageComposer.messageComposeDelegate = self; [self presentViewController:messageComposer animated:YES completion:nil]; } and the delegate method - - (void)messageComposeViewController:(MFMessageComposeViewController *)controller didFinishWithResult:(MessageComposeResult)result { [self dismissViewControllerAnimated:YES completion:nil]; } A: You can use this approach: [[UIApplication sharedApplication]openURL:[NSURL URLWithString:@"sms:MobileNumber"]] iOS will automatically navigate from your app to the messages app's message composing page. Since the URL's scheme starts with sms:, this is identified as a type that is recognized by the messages app and launches it. A: If you want, you can use the private framework CoreTelephony which called CTMessageCenter class. There are a few methods to send sms. A: Here is a tutorial which does exactly what you are looking for: the MFMessageComposeViewController. http://blog.mugunthkumar.com/coding/iphone-tutorial-how-to-send-in-app-sms/ Essentially: MFMessageComposeViewController *controller = [[[MFMessageComposeViewController alloc] init] autorelease]; if([MFMessageComposeViewController canSendText]) { controller.body = @"SMS message here"; controller.recipients = [NSArray arrayWithObjects:@"1(234)567-8910", nil]; controller.messageComposeDelegate = self; [self presentModalViewController:controller animated:YES]; } And a link to the docs. https://developer.apple.com/documentation/messageui/mfmessagecomposeviewcontroller A: Follow this procedures 1 .Add MessageUI.Framework to project 2 . Import #import <MessageUI/MessageUI.h> in .h file. 3 . Copy this code for sending message if ([MFMessageComposeViewController canSendText]) { MFMessageComposeViewController *messageComposer = [[MFMessageComposeViewController alloc] init]; NSString *message = @"Message!!!"; [messageComposer setBody:message]; messageComposer.messageComposeDelegate = self; [self presentViewController:messageComposer animated:YES completion:nil]; } 4 . Implement delegate method if you want to. - (void)messageComposeViewController:(MFMessageComposeViewController *)controller didFinishWithResult:(MessageComposeResult)result{ ///your stuff here [self dismissViewControllerAnimated:YES completion:nil]; } Run And GO! 
A: //Add the Framework in .h file #import <MessageUI/MessageUI.h> #import <MessageUI/MFMailComposeViewController.h> //Set the delegate methods UIViewController<UINavigationControllerDelegate,MFMessageComposeViewControllerDelegate> //add the below code in .m file - (void)viewDidAppear:(BOOL)animated{ [super viewDidAppear:animated]; MFMessageComposeViewController *controller = [[[MFMessageComposeViewController alloc] init] autorelease]; if([MFMessageComposeViewController canSendText]) { NSString *str= @"Hello"; controller.body = str; controller.recipients = [NSArray arrayWithObjects: @"", nil]; controller.delegate = self; [self presentModalViewController:controller animated:YES]; } } - (void)messageComposeViewController: (MFMessageComposeViewController *)controller didFinishWithResult:(MessageComposeResult)result { switch (result) { case MessageComposeResultCancelled: NSLog(@"Cancelled"); break; case MessageComposeResultFailed: NSLog(@"Failed"); break; case MessageComposeResultSent: break; default: break; } [self dismissModalViewControllerAnimated:YES]; } A: * *You must add the MessageUI.framework to your Xcode project *Include an #import <MessageUI/MessageUI.h> in your header file *Add these delegates to your header file MFMessageComposeViewControllerDelegate & UINavigationControllerDelegate *In your IBAction method declare instance of MFMessageComposeViewController say messageInstance *To check whether your device can send text use [MFMessageComposeViewController canSendText] in an if condition, it'll return Yes/No *In the if condition do these: * *First set body for your messageInstance as: messageInstance.body = @"Hello from Shah"; *Then decide the recipients for the message as: messageInstance.recipients = [NSArray arrayWithObjects:@"12345678", @"87654321", nil]; *Set a delegate to your messageInstance as: messageInstance.messageComposeDelegate = self; *In the last line do this: [self presentModalViewController:messageInstance animated:YES]; A: Use this: - (void)showSMSPicker { Class messageClass = (NSClassFromString(@"MFMessageComposeViewController")); if (messageClass != nil) { // Check whether the current device is configured for sending SMS messages if ([messageClass canSendText]) { [self displaySMSComposerSheet]; } } } - (void)messageComposeViewController:(MFMessageComposeViewController *)controller didFinishWithResult:(MessageComposeResult)result { //feedbackMsg.hidden = NO; // Notifies users about errors associated with the interface switch (result) { case MessageComposeResultCancelled: { UIAlertView *alert1 = [[UIAlertView alloc] initWithTitle:@"Message" message:@"SMS sending canceled!!!" delegate:self cancelButtonTitle:nil otherButtonTitles:@"OK", nil]; [alert1 show]; [alert1 release]; } // feedbackMsg.text = @"Result: SMS sending canceled"; break; case MessageComposeResultSent: { UIAlertView *alert2 = [[UIAlertView alloc] initWithTitle:@"Message" message:@"SMS sent!!!" delegate:self cancelButtonTitle:nil otherButtonTitles:@"OK", nil]; [alert2 show]; [alert2 release]; } // feedbackMsg.text = @"Result: SMS sent"; break; case MessageComposeResultFailed: { UIAlertView *alert3 = [[UIAlertView alloc] initWithTitle:@"Message" message:@"SMS sending failed!!!" delegate:self cancelButtonTitle:nil otherButtonTitles:@"OK", nil]; [alert3 show]; [alert3 release]; } // feedbackMsg.text = @"Result: SMS sending failed"; break; default: { UIAlertView *alert4 = [[UIAlertView alloc] initWithTitle:@"Message" message:@"SMS not sent!!!" 
delegate:self cancelButtonTitle:nil otherButtonTitles:@"OK", nil]; [alert4 show]; [alert4 release]; } // feedbackMsg.text = @"Result: SMS not sent"; break; } [self dismissModalViewControllerAnimated: YES]; } A: [[UIApplication sharedApplication]openURL:[NSURL URLWithString:@"sms:number"]] This would be the best and short way to do it. A: You can present MFMessageComposeViewController, which can send SMS, but with user prompt(he taps send button). No way to do that without user permission. On iOS 11, you can make extension, that can be like filter for incoming messages , telling iOS either its spam or not. Nothing more with SMS cannot be done A: You need to use the MFMessageComposeViewController if you want to show creating and sending the message in your own app. Otherwise, you can use the sharedApplication method.
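For completeness, a rough Objective-C sketch of that fallback logic, meant to sit inside a view controller method (add MessageUI.framework and the #import as described above; the phone number and body are placeholders, and the MFMessageComposeViewControllerDelegate callback shown earlier still needs to be implemented):
if ([MFMessageComposeViewController canSendText]) {
    // In-app composer: keeps the user inside your application
    MFMessageComposeViewController *composer = [[MFMessageComposeViewController alloc] init];
    composer.messageComposeDelegate = self;
    composer.recipients = @[@"5551234567"];
    composer.body = @"Hello from my app";
    [self presentViewController:composer animated:YES completion:nil];
} else {
    // Fallback: hand off to the Messages app via the sms: URL scheme
    NSURL *smsURL = [NSURL URLWithString:@"sms:5551234567"];
    if ([[UIApplication sharedApplication] canOpenURL:smsURL]) {
        [[UIApplication sharedApplication] openURL:smsURL];
    }
}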
{ "language": "en", "url": "https://stackoverflow.com/questions/10848", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "541" }
Q: LINQ query on a DataTable I'm trying to perform a LINQ query on a DataTable object and bizarrely I am finding that performing such queries on DataTables is not straightforward. For example: var results = from myRow in myDataTable where results.Field("RowNo") == 1 select results; This is not allowed. How do I get something like this working? I'm amazed that LINQ queries are not allowed on DataTables! A: var results = from myRow in myDataTable where results.Field<Int32>("RowNo") == 1 select results; A: In my application I found that using LINQ to Datasets with the AsEnumerable() extension for DataTable as suggested in the answer was extremely slow. If you're interested in optimizing for speed, use James Newtonking's Json.Net library (http://james.newtonking.com/json/help/index.html) // Serialize the DataTable to a json string string serializedTable = JsonConvert.SerializeObject(myDataTable); Jarray dataRows = Jarray.Parse(serializedTable); // Run the LINQ query List<JToken> results = (from row in dataRows where (int) row["ans_key"] == 42 select row).ToList(); // If you need the results to be in a DataTable string jsonResults = JsonConvert.SerializeObject(results); DataTable resultsTable = JsonConvert.DeserializeObject<DataTable>(jsonResults); A: Example on how to achieve this provided below: DataSet dataSet = new DataSet(); //Create a dataset dataSet = _DataEntryDataLayer.ReadResults(); //Call to the dataLayer to return the data //LINQ query on a DataTable var dataList = dataSet.Tables["DataTable"] .AsEnumerable() .Select(i => new { ID = i["ID"], Name = i["Name"] }).ToList(); A: It's not that they were deliberately not allowed on DataTables, it's just that DataTables pre-date the IQueryable and generic IEnumerable constructs on which Linq queries can be performed. Both interfaces require some sort type-safety validation. DataTables are not strongly typed. This is the same reason why people can't query against an ArrayList, for example. For Linq to work you need to map your results against type-safe objects and query against that instead. A: For VB.NET The code will look like this: Dim results = From myRow In myDataTable Where myRow.Field(Of Int32)("RowNo") = 1 Select myRow A: IEnumerable<string> result = from myRow in dataTableResult.AsEnumerable() select myRow["server"].ToString() ; A: Try this... SqlCommand cmd = new SqlCommand( "Select * from Employee",con); SqlDataReader dr = cmd.ExecuteReader( ); DataTable dt = new DataTable( "Employee" ); dt.Load( dr ); var Data = dt.AsEnumerable( ); var names = from emp in Data select emp.Field<String>( dt.Columns[1] ); foreach( var name in names ) { Console.WriteLine( name ); } A: As @ch00k said: using System.Data; //needed for the extension methods to work ... var results = from myRow in myDataTable.Rows where myRow.Field<int>("RowNo") == 1 select myRow; //select the thing you want, not the collection You also need to add a project reference to System.Data.DataSetExtensions A: You can get it work elegant via linq like this: from prod in TenMostExpensiveProducts().Tables[0].AsEnumerable() where prod.Field<decimal>("UnitPrice") > 62.500M select prod Or like dynamic linq this (AsDynamic is called directly on DataSet): TenMostExpensiveProducts().AsDynamic().Where (x => x.UnitPrice > 62.500M) I prefer the last approach while is is the most flexible. 
P.S.: Don't forget to connect System.Data.DataSetExtensions.dll reference A: you can try this, but you must be sure the type of values for each Column List<MyClass> result = myDataTable.AsEnumerable().Select(x=> new MyClass(){ Property1 = (string)x.Field<string>("ColumnName1"), Property2 = (int)x.Field<int>("ColumnName2"), Property3 = (bool)x.Field<bool>("ColumnName3"), }); A: I realize this has been answered a few times over, but just to offer another approach: I like to use the .Cast<T>() method, it helps me maintain sanity in seeing the explicit type defined and deep down I think .AsEnumerable() calls it anyways: var results = from myRow in myDataTable.Rows.Cast<DataRow>() where myRow.Field<int>("RowNo") == 1 select myRow; or var results = myDataTable.Rows.Cast<DataRow>() .FirstOrDefault(x => x.Field<int>("RowNo") == 1); As noted in comments, does not require System.Data.DataSetExtensions or any other assemblies (Reference) A: var query = from p in dt.AsEnumerable() where p.Field<string>("code") == this.txtCat.Text select new { name = p.Field<string>("name"), age= p.Field<int>("age") }; the name and age fields are now part of the query object and can be accessed like so: Console.WriteLine(query.name); A: Using LINQ to manipulate data in DataSet/DataTable var results = from myRow in tblCurrentStock.AsEnumerable() where myRow.Field<string>("item_name").ToUpper().StartsWith(tbSearchItem.Text.ToUpper()) select myRow; DataView view = results.AsDataView(); A: //Create DataTable DataTable dt= new DataTable(); dt.Columns.AddRange(new DataColumn[] { new DataColumn("ID",typeof(System.Int32)), new DataColumn("Name",typeof(System.String)) }); //Fill with data dt.Rows.Add(new Object[]{1,"Test1"}); dt.Rows.Add(new Object[]{2,"Test2"}); //Now Query DataTable with linq //To work with linq it should required our source implement IEnumerable interface. //But DataTable not Implement IEnumerable interface //So we call DataTable Extension method i.e AsEnumerable() this will return EnumerableRowCollection<DataRow> // Now Query DataTable to find Row whoes ID=1 DataRow drow = dt.AsEnumerable().Where(p=>p.Field<Int32>(0)==1).FirstOrDefault(); // A: Try this simple line of query: var result=myDataTable.AsEnumerable().Where(myRow => myRow.Field<int>("RowNo") == 1); A: You can use LINQ to objects on the Rows collection, like so: var results = from myRow in myDataTable.Rows where myRow.Field("RowNo") == 1 select myRow; A: var results = from DataRow myRow in myDataTable.Rows where (int)myRow["RowNo"] == 1 select myRow A: This is a simple way that works for me and uses lambda expressions: var results = myDataTable.Select("").FirstOrDefault(x => (int)x["RowNo"] == 1) Then if you want a particular value: if(results != null) var foo = results["ColName"].ToString() A: You can't query against the DataTable's Rows collection, since DataRowCollection doesn't implement IEnumerable<T>. You need to use the AsEnumerable() extension for DataTable. Like so: var results = from myRow in myDataTable.AsEnumerable() where myRow.Field<int>("RowNo") == 1 select myRow; And as @Keith says, you'll need to add a reference to System.Data.DataSetExtensions AsEnumerable() returns IEnumerable<DataRow>. If you need to convert IEnumerable<DataRow> to a DataTable, use the CopyToDataTable() extension. 
Below is query with Lambda Expression, var result = myDataTable .AsEnumerable() .Where(myRow => myRow.Field<int>("RowNo") == 1); A: Try this var row = (from result in dt.AsEnumerable().OrderBy( result => Guid.NewGuid()) select result).Take(3) ; A: Most likely, the classes for the DataSet, DataTable and DataRow are already defined in the solution. If that's the case you won't need the DataSetExtensions reference. Ex. DataSet class name-> CustomSet, DataRow class name-> CustomTableRow (with defined columns: RowNo, ...) var result = from myRow in myDataTable.Rows.OfType<CustomSet.CustomTableRow>() where myRow.RowNo == 1 select myRow; Or (as I prefer) var result = myDataTable.Rows.OfType<CustomSet.CustomTableRow>().Where(myRow => myRow.RowNo); A: I propose following solution: DataView view = new DataView(myDataTable); view.RowFilter = "RowNo = 1"; DataTable results = view.ToTable(true); Looking at the DataView Documentation, the first thing we can see is this: Represents a databindable, customized view of a DataTable for sorting, filtering, searching, editing, and navigation. What I am getting from this is that DataTable is meant to only store data and DataView is there enable us to "query" against the DataTable. Here is how this works in this particular case: You try to implement the SQL Statement SELECT * FROM myDataTable WHERE RowNo = 1 in "DataTable language". In C# we would read it like this: FROM myDataTable WHERE RowNo = 1 SELECT * which looks in C# like this: DataView view = new DataView(myDataTable); //FROM myDataTable view.RowFilter = "RowNo = 1"; //WHERE RowNo = 1 DataTable results = view.ToTable(true); //SELECT *
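One small follow-up to the note about CopyToDataTable() in the accepted answer: if you need the filtered rows back as a DataTable rather than an IEnumerable<DataRow>, a sketch would be the following (it also needs the System.Data.DataSetExtensions reference, and CopyToDataTable throws if the query matches no rows):
DataTable filtered = myDataTable.AsEnumerable()
    .Where(row => row.Field<int>("RowNo") == 1)
    .CopyToDataTable();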
{ "language": "en", "url": "https://stackoverflow.com/questions/10855", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1133" }
Q: Data Layer Best Practices I am in the middle of a "discussion" with a colleague about the best way to implement the data layer in a new application. One viewpoint is that the data layer should be aware of business objects (our own classes that represent an entity), and be able to work with that object natively. The opposing viewpoint is that the data layer should be object-agnostic, and purely handle simple data types (strings, bools, dates, etc.) I can see that both approaches may be valid, but my own viewpoint is that I prefer the former. That way, if the data storage medium changes, the business layer doesn't (necessarily) have to change to accommodate the new data layer. It would therefore be a trivial thing to change from a SQL data store to a serialized xml filesystem store. My colleague's point of view is that the data layer shouldn't have to know about object definitions, and that as long as the data is passed about appropriately, that is enough. Now, I know that this is one of those questions that has the potential to start a religious war, but I'd appreciate any feedback from the community on how you approach such things. TIA A: It really depends on your view of the world - I used to be in the uncoupled camp. The DAL was only there to supply data to the BAL - end of story. With emerging technologies such as Linq to SQL and Entity Framework becoming a bit more popular, then the line between DAL and BAL have been blurred a bit. In L2S especially your DAL is quite tightly coupled to the Business objects as the object model has a 1-1 mapping to your database field. Like anything in software development there is no right or wrong answer. You need to understand your requirements and future requirments and work from there. I would no more use a Ferrari on the Dakhar rally as I would a Range Rover on a track day. A: You can have both. Let data layer not know of your bussiness objects and make it capable of working with more than one type of data sources. If you supply a common interface (or an abstract class) for interacting with data, you can have different implementations for each type of data source. Factory pattern goes well here. A: An excellent book I have, which covers this topic, is Data Access Patterns, by Clifton Nock. It has got many good explanations and good ideas on how to decouple your business layer from the persistence layer. You really should give it a try. It's one of my favorite books. A: One trick I've found handy is to have my data layer be "collection agnostic". That is, whenever I want to return a list of objects from my data layer, I get the caller to pass in the list. So instead of this: public IList<Foo> GetFoosById(int id) { ... } I do this: public void GetFoosById(IList<Foo> foos, int id) { ... } This lets me pass in a plain old List if that's all I need, or a more intelligent implementation of IList<T> (like ObservableCollection<T>) if I plan to bind to it from the UI. This technique also lets me return stuff from the method like a ValidationResult containing an error message if one occurred. This still means that my data layer knows about my object definitions, but it gives me one extra degree of flexibility. A: Check out Linq to SQL, if I were creating a new application right now I would consider relying on an entirely Linq based data layer. Other than that I think it's good practise to de-couple data and logic as much as possible, but that isn't always practical. 
A pure separation between logic and data access makes joins and optimisations difficult, which is what makes Linq so powerful. A: In applications wherein we use NHibernate, the answer becomes "somewhere in between": the XML mapping definitions (they specify which table belongs to which object and which columns belong to which field, etc.) clearly live in the business object tier, but they are passed to a generic data session manager which is not aware of any of the business objects; the only requirement is that the business objects passed to it for CRUD have to have a mapping file.
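Coming back to the earlier suggestion of a common interface with one implementation per data source, a very rough C# sketch (every name here is invented for illustration and the bodies are stubs, not a real implementation):
using System;

public class Customer                       // illustrative business object
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface ICustomerRepository        // the business layer sees only this
{
    Customer GetById(int id);
    void Save(Customer customer);
}

public class SqlCustomerRepository : ICustomerRepository
{
    public Customer GetById(int id) { throw new NotImplementedException(); } // ADO.NET / ORM
    public void Save(Customer customer) { throw new NotImplementedException(); }
}

public class XmlCustomerRepository : ICustomerRepository
{
    public Customer GetById(int id) { throw new NotImplementedException(); } // XML file store
    public void Save(Customer customer) { throw new NotImplementedException(); }
}

public static class RepositoryFactory       // keeps the storage choice out of the business layer
{
    public static ICustomerRepository CreateCustomerRepository(bool useXml)
    {
        return useXml ? (ICustomerRepository)new XmlCustomerRepository()
                      : new SqlCustomerRepository();
    }
}
Switching from a SQL data store to a serialized XML store then means adding another implementation, not touching the business layer.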
{ "language": "en", "url": "https://stackoverflow.com/questions/10860", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How can I "unaccept" a drag in Flex? Once I've called DragManager.acceptDrag is there any way to "unaccept" the drag? Say that I have a view which can accept drag and drop, but only in certain areas. Once the user drags over one of these areas I call DragManager.acceptDrag(this) (from a DragEvent.DRAG_OVER handler), but if the user then moves out of this area I'd like to change the status of the drag to not accepted and show the DragManager.NONE feedback. However, neither calling DragManager.acceptDrag(null) nor DragManager.showFeedback(DragManager.NONE) seems to have any effect. Once I've accepted the drag an set the feedback type I can't seem to change it. Just to make it clear: the areas where the user should be able to drop are not components or even display objects, in fact they are just ranges in the text of a text field (like the selection). Had they been components of their own I could have solved it by making each of them accept drag events individually. I guess I could create proxy components that float over the text to emulate it, but I'd rather not if it isn't necessary. I've managed to get it working in both AIR and the browser now, but only by putting proxy components on top of the ranges of text where you should be able to drop things. That way I get the right feedback and drops are automatically unaccepted on drag exit. This is the oddest thing about D&D in AIR: DragManager.doDrag(initiator, source, event, dragImage, offsetX, offsetY); In browser-based Flex, offsetX and offsetY should be negative (so says the documentation, and it works fine). However, when running exactly the same code in AIR you have to make the offsets positive. The same numbers, but positive. That is very, very weird. I've tested some more and what @maclema works, but not if you run in AIR. It seems like drag and drop in AIR is different. It's really, really weird because not only is the feedback not showing correctly, and it's not possible to unaccept, but the coordinates are also completely off. I just tried my application in a browser instead of AIR and dragging and dropping is completely broken. Also, skipping the dragEnter handler works fine in AIR, but breaks everything when running in a browser. A: Are you using only the dragEnter method? If you are trying to reject the drag while still dragging over the same component you need to use both the dragEnter and dragOver methods. Check out this example: <?xml version="1.0" encoding="utf-8"?> <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute"> <mx:Script> <![CDATA[ import mx.core.DragSource; import mx.managers.DragManager; import mx.events.DragEvent; private function onDragEnter(e:DragEvent):void { if ( e.target == lbl ) { if ( e.localX < lbl.width/2 ) { trace("accept"); DragManager.acceptDragDrop(this); } else { DragManager.acceptDragDrop(null); } } } private function doStartDrag(e:MouseEvent):void { if ( e.buttonDown ) { var ds:DragSource = new DragSource(); ds.addData("test", "text"); DragManager.doDrag(btn, ds, e); } } ]]> </mx:Script> <mx:Label id="lbl" text="hello world!" left="10" top="10" dragEnter="onDragEnter(event)" dragOver="onDragEnter(event)" /> <mx:Button id="btn" x="47" y="255" label="Button" mouseMove="doStartDrag(event)"/> </mx:Application> A: If you don't need native drag and drop in AIR, you can get the Flex drag and drop behavior by subclassing WindowedApplication and setting the DragManager. 
See this post on the Adobe Jira for more info: https://bugs.adobe.com/jira/browse/SDK-13983 A: You are misunderstanding the concept. Your "unaccept" is achieved by implementing the dragOverHandler and signaling that the data is not wanted. Here is the basic concept: * *register the dragEnterHandler or override the already registered method. function dragEnterHandler(event: DragEvent):void { if (data suites at least one location in this component) DragManager.acceptDragDrop(this); } This enables your container to receive further messages (dragOver/dragExit). But this is NOT the location to decide which kind of mouse cursor should be displayed. Without DragManager.acceptDragDrop(this); the other handlers aren't called. *register the dragOverHandler or override the already registered method. function dragOverHandler(event: DragEvent):void { if (data suites at least no location in this component) { DragManager.showFeedback(DragManager.NONE); return; } ... // handle other cases and show the cursor / icon you want } Calling DragManager.showFeedback(DragManager.NONE); does the trick to display the "unaccept". *register the dragExitHandler or override the already registered method. function dragOverHandler(event: DragEvent):void { // handle the recieved data as you like. } A: ok, I see the problem now. Rather than null, try setting it to the dragInitiator. Check this out. <?xml version="1.0" encoding="utf-8"?> <mx:WindowedApplication xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute"> <mx:Script> <![CDATA[ import mx.controls.Alert; import mx.events.DragEvent; import mx.managers.DragManager; import mx.core.DragSource; private function doStartDrag(e:MouseEvent):void { if ( e.buttonDown && !DragManager.isDragging ) { var ds:DragSource = new DragSource(); ds.addData("test", "test"); DragManager.doDrag(btn, ds, e); } } private function handleDragOver(e:DragEvent):void { if ( e.localX < cvs.width/2 ) { //since null does nothing, lets just set to accept the drag //operation, but accept it to the dragInitiator DragManager.acceptDragDrop(e.dragInitiator); } else { //accept drag DragManager.acceptDragDrop(cvs); DragManager.showFeedback( DragManager.COPY ); } } private function handleDragDrop(e:DragEvent):void { if ( e.dragSource.hasFormat("test") ) { Alert.show("Got a drag drop!"); } } ]]> </mx:Script> <mx:Canvas x="265" y="66" width="321" height="245" backgroundColor="#FF0000" id="cvs" dragOver="handleDragOver(event)" dragDrop="handleDragDrop(event)"> </mx:Canvas> <mx:Button id="btn" x="82" y="140" label="Drag Me" mouseDown="doStartDrag(event)"/> </mx:WindowedApplication> A: Yes, drag and drop is different in AIR. I HATE that! It takes a lot of playing around to figure out how to get things to work the same as custom dnd that was built in flex. As for the coordinates, maybe play around with localToContent, and localToGlobal methods. They may help in translating the coordinates to something useful. Good luck. I will let you know if I think of anything else.
{ "language": "en", "url": "https://stackoverflow.com/questions/10870", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How to encourage someone to learn programming? I have a friend that has a little bit of a holiday coming up and they want ideas on what they should do during the holiday, I plan to suggest programming to them, what are the pros and cons that I need to mention? I'll add to the list below as people reply, I apologise if I duplicate any entries. Pros I have so far * *Minimal money requirement (they already have a computer) *Will help them to think in new ways *(Rob Cooper) Great challenge, every day really is a fresh challenge in some way, shape or form. Not many jobs can truly offer that. *(Rob Cooper) I like the way it makes me think.. I look at EVERYTHING more logically as my skills improve.. This helps with general living as well as programming. *(Rob Cooper) Money is/can be pretty good. *(Rob Cooper) Its a pretty portable trade.. With collaboration tech as it is, you can pretty much work anywhere in the world so long as you have an Internet connection. *(Rob Cooper) It's an exciting industry to work in, theres massive amounts of tech to work and play with! *(Quarrelsome) Jetpacks. Programming is Technology and the more time we spend with technology the closer we get to having Jetpacks. (Teifion: This is a really cool analogy!) *(Saj) Profitable way of Exercising Brain Muscles. *(Saj) It makes you look brilliant to some audience. *(Saj) Makes you tech-smart. *(Saj) Makes you eligible to the future world. *(Saj) It's easy, fun, not in a math way.. *(kiwiBastard) If the person likes problem solving then programming is no better example. *(kiwiBastard) Brilliant sense of achivement when you can interact with something you have designed and coded *(kiwiBastard) Great way to meet chicks/chaps - erm, maybe not that one (Teifion: I dunno where you do programming but I want to come visit some time) *(epatel) Learning how to program is like learning spell casting at Hogwarts . The computer will be your servant forever... Cons I have so far * *Can be frustrating when it's not working *Not physical exercise *(Rob Cooper) There are a lot of people doing it just for the money. They have no love for the craft and just appear lazy, annoying and sometimes it can really grind my gears seeing an industry and workforce I enjoy so much being diluted with crap. Which can often reflect badly on all of us. *(Rob Cooper) Not so sure about the initial cost.. Yeah you can get started with Java or something at low cost, but for me, locally, the vast demand is for .NET developers, which can be costly getting up and running with. However, this is rapidly/has not becoming the case with the amount of work put in by MS with releasing pretty damn good Express editions of their main development product line. *(Rob Cooper) Its a lifelong career.. I truly feel you never really become a "master" by nature of the industry, you stop for 1-2 years. You're behind the times.. Some people do not like the pace. *(Rob Cooper) Some geeks can be hard to work with.. While I think the general geek movement is really changing for the better, you will always have the classic "I am more intelligent than you" geeks that can really just be a pain in the ass for all! *(Saj) Can cause virtual damage. *(Saj) Can make one throw their computer away. *(Saj) Can make one only virtually available to the world. A: I do it for the ladies :D Seriously though, for me Pro's * *Great challenge, every day really is a fresh challenge in some way, shape or form. Not many jobs can truly offer that. *I like the way it makes me think.. 
I look at EVERYTHING more logically as my skills improve.. This helps with general living as well as programming. *Money is/can be pretty good. *Its a pretty portable trade.. With collaboration tech as it is, you can pretty much work anywhere in the world so long as you have an Internet connection. *It's an exciting industry to work in, theres massive amounts of tech to work and play with! Cons (some of these can easily be Pro's too) * *There are a lot of people doing it just for the money. They have no love for the craft and just appear lazy, annoying and sometimes it can really grind my gears seeing an industry and workforce I enjoy so much being diluted with crap. Which can often reflect badly on all of us. *Not so sure about the initial cost.. Yeah you can get started with Java or something at low cost, but for me, locally, the vast demand is for .NET developers, which can be costly getting up and running with. However, this is rapidly/has not becoming the case with the amount of work put in by MS with releasing pretty damn good Express editions of their main development product line. *Its a lifelong career.. I truly feel you never really become a "master" by nature of the industry, you stop for 1-2 years. You're behind the times.. Some people do not like the pace. *Some geeks can be hard to work with.. While I think the general geek movement is really changing for the better, you will always have the classic "I am more intelligent than you" geeks that can really just be a pain in the ass for all! A: Jetpacks. Programming is Technology and the more time we spend with technology the closer we get to having Jetpacks. A: Programming is one of the ways to be the richest person in the world. So far, we do not know any other. A: My advice would be that you don't push your friend too hard. If you're going to suggest they take up programming, only mention it casually. Suggesting recreational computer programming to someone "unenlightened" could be taken about the same way as suggesting they do some recreational mathematics, or stamp collecting (no offense to any philatelists out there!). A: Learning how to program is like learning spell casting at Hogwarts . The computer will be your servant forever... --if you have a Mac-- A simple start could be just to look at Automator (are several screencasts online ie) which is a simple way of making programs do a little more than sit and wait for user interaction...not real programming but gives a feel for things that a little programming can do. A: * *If the person likes problem solving then programming is no better example. *Brilliant sense of achivement when you can interact with something you have designed and coded *Great way to meet chicks/chaps - erm, maybe not that one A: I'll follow up on Carl Russmann's comments by suggesting that you shouldn't push too hard on your friend. Most readers of this site find programming to be interesting and fun, but we are really weird. For most people, learning programming would be very hard work, with little short-term benefit. Most people have no aptitude for programming, and would find it to be as much fun as doing their income taxes. That's a big Con. A: You could tell him how into programmers girls are.. you know, lie.
{ "language": "en", "url": "https://stackoverflow.com/questions/10872", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How can you customize the numbers in an ordered list? How can I left-align the numbers in an ordered list? 1. an item // skip some items for brevity 9. another item 10. notice the 1 is under the 9, and the item contents also line up Change the character after the number in an ordered list? 1) an item Also is there a CSS solution to change from numbers to alphabetic/roman lists instead of using the type attribute on the ol element. I am mostly interested in answers that work on Firefox 3. A: Stole a lot of this from other answers, but this is working in FF3 for me. It has upper-roman, uniform indenting, a close bracket. ol { counter-reset: item; margin-left: 0; padding-left: 0; } li { margin-bottom: .5em; } li:before { display: inline-block; content: counter(item, upper-roman) ")"; counter-increment: item; width: 3em; } <ol> <li>One</li> <li>Two</li> <li>Three</li> <li>Four</li> <li>Five</li> <li>Six</li> <li>Seven</li> <li>Eight</li> <li>Nine</li> <li>Ten</li> </ol> A: I suggest playing with the :before attribute and seeing what you can achieve with it. It will mean your code really is limited to nice new browsers, and excludes the (annoyingly large) section of the market still using rubbish old browsers, Something like the following, which forces a fixed with on the items. Yes, I know it's less elegant to have to choose the width yourself, but using CSS for your layout is like undercover police work: however good your motives, it always gets messy. li:before { content: counter(item) ") "; counter-increment: item; display: marker; width: 2em; } But you're going to have to experiment to find the exact solution. A: The numbers line up better if you add leading-zeroes to the numbers, by setting list-style-type to: ol { list-style-type: decimal-leading-zero; } A: Borrowed and improved Marcus Downing's answer. Tested and works in Firefox 3 and Opera 9. Supports multiple lines, too. ol { counter-reset: item; margin-left: 0; padding-left: 0; } li { display: block; margin-left: 3.5em; /* Change with margin-left on li:before. Must be -li:before::margin-left + li:before::padding-right. (Causes indention for other lines.) */ } li:before { content: counter(item) ")"; /* Change 'item' to 'item, upper-roman' or 'item, lower-roman' for upper- and lower-case roman, respectively. */ counter-increment: item; display: inline-block; text-align: right; width: 3em; /* Must be the maximum width of your list's numbers, including the ')', for compatability (in case you use a fixed-width font, for example). Will have to beef up if using roman. */ padding-right: 0.5em; margin-left: -3.5em; /* See li comments. */ } A: The CSS for styling lists is here, but is basically: li { list-style-type: decimal; list-style-position: inside; } However, the specific layout you're after can probably only be achieved by delving into the innards of the layout with something like this (note that I haven't actually tried it): ol { counter-reset: item } li { display: block } li:before { content: counter(item) ") "; counter-increment: item } A: HTML5: Use the value attribute (no CSS needed) Modern browsers will interpret the value attribute and will display it as you expect. See MDN documentation. <ol> <li value="3">This is item three.</li> <li value="50">This is item fifty.</li> <li value="100">This is item one hundred.</li> </ol> Also have a look at the <ol> article on MDN, especially the documentation for the start and attribute. A: You can also specify your own numbers in the HTML - e.g. 
if the numbers are being provided by a database: ol { list-style: none; } ol>li:before { content: attr(seq) ". "; } <ol> <li seq="1">Item one</li> <li seq="20">Item twenty</li> <li seq="300">Item three hundred</li> </ol> The seq attribute is made visible using a method similar to that given in other answers. But instead of using content: counter(foo), we use content: attr(seq). Demo in CodePen with more styling A: This is the solution I have working in Firefox 3, Opera and Google Chrome. The list still displays in IE7 (but without the close bracket and left align numbers): ol { counter-reset: item; margin-left: 0; padding-left: 0; } li { display: block; margin-bottom: .5em; margin-left: 2em; } li::before { display: inline-block; content: counter(item) ") "; counter-increment: item; width: 2em; margin-left: -2em; } <ol> <li>One</li> <li>Two</li> <li>Three</li> <li>Four</li> <li>Five</li> <li>Six</li> <li>Seven</li> <li>Eight</li> <li>Nine<br>Items</li> <li>Ten<br>Items</li> </ol> EDIT: Included multiple line fix by strager Also is there a CSS solution to change from numbers to alphabetic/roman lists instead of using the type attribute on the ol element. Refer to list-style-type CSS property. Or when using counters the second argument accepts a list-style-type value. For example the following will use upper roman: li::before { content: counter(item, upper-roman) ") "; counter-increment: item; /* ... */ A: There is the Type attribute which allows you to change the numbering style, however, you cannot change the full stop after the number/letter. <ol type="a"> <li>Turn left on Maple Street</li> <li>Turn right on Clover Court</li> </ol> A: The docs say regarding list-style-position: outside CSS1 did not specify the precise location of the marker box and for reasons of compatibility, CSS2 remains equally ambiguous. For more precise control of marker boxes, please use markers. Further up that page is the stuff about markers. One example is: LI:before { display: marker; content: "(" counter(counter) ")"; counter-increment: counter; width: 6em; text-align: center; } A: Nope... just use a DL: dl { overflow:hidden; } dt { float:left; clear: left; width:4em; /* adjust the width; make sure the total of both is 100% */ text-align: right } dd { float:left; width:50%; /* adjust the width; make sure the total of both is 100% */ margin: 0 0.5em; } A: Quick and dirt alternative solution. You can use a tabulation character along with preformatted text. Here's a possibility: <style type="text/css"> ol { list-style-position: inside; } li:first-letter { white-space: pre; } </style> and your html: <ol> <li> an item</li> <li> another item</li> ... </ol> Note that the space between the li tag and the beggining of the text is a tabulation character (what you get when you press the tab key inside notepad). If you need to support older browsers, you can do this instead: <style type="text/css"> ol { list-style-position: inside; } </style> <ol> <li><pre> </pre>an item</li> <li><pre> </pre>another item</li> ... </ol> A: I will give here the kind of answer i usually don't like to read, but i think that as there are other answers telling you how to achive what you want, it could be nice to rethink if what you are trying to achive is really a good idea. First, you should think if it is a good idea to show the items in a non-standard way, with a separator charater diferent than the provided. I don't know the reasons for that, but let's suppose you have good reasons. 
The ways propossed here to achive that consist in add content to your markup, mainly trough the CSS :before pseudoclass. This content is really modifing your DOM structure, adding those items to it. When you use standard "ol" numeration, you will have a rendered content in which the "li" text is selectable, but the number preceding it is not selectable. That is, the standard numbering system seems to be more "decoration" than real content. If you add content for numbers using for example those ":before" methods, this content will be selectable, and dued to this, performing undesired vopy/paste issues, or accesibility issues with screen readers that will read this "new" content in addition to the standard numeration system. Perhaps another approach could be to style the numbers using images, although this alternative will bring its own problems (numbers not shown when images are disabled, text size for number not changing, ...). Anyway, the reason for this answer is not just to propose this "images" alternative, but to make people think in the consequences of trying to change the standard numeration system for ordered lists. A: This code makes numbering style same as headers of li content. <style> h4 {font-size: 18px} ol.list-h4 {counter-reset: item; padding-left:27px} ol.list-h4 > li {display: block} ol.list-h4 > li::before {display: block; position:absolute; left:16px; top:auto; content: counter(item)"."; counter-increment: item; font-size: 18px} ol.list-h4 > li > h4 {padding-top:3px} </style> <ol class="list-h4"> <li> <h4>...</h4> <p>...</p> </li> <li>...</li> </ol> A: Vhndaree posted an interesting implementation of this problem on a duplicate question, which goes a step further than any of the existing answers in that it implements a custom character before the incrementing numbers: .custom { list-style-type: none; } .custom li { counter-increment: step-counter; } .custom li::before { content: '(' counter(step-counter) ')'; margin-right: 5px; } <ol class="custom"> <li>First</li> <li>Second</li> <li>Third</li> <li>Fourth</li> <li>Fifth</li> <li>Sixth</li> <li>Seventh</li> <li>Eighth</li> <li>Ninth</li> <li>Tenth</li> </ol> A: I have it. Try the following: <html> <head> <style type='text/css'> ol { counter-reset: item; } li { display: block; } li:before { content: counter(item) ")"; counter-increment: item; display: inline-block; width: 50px; } </style> </head> <body> <ol> <li>Something</li> <li>Something</li> <li>Something</li> <li>Something</li> <li>Something</li> <li>Something</li> <li>Something</li> <li>Something</li> <li>Something</li> <li>Something</li> <li>Something</li> <li>Something</li> </ol> </body> The catch is that this definitely won't work on older or less compliant browsers: display: inline-block is a very new property. A: The other answers are better from a conceptual point of view. However, you can just left-pad the numbers with the appropriate number of '&ensp;' to make them line up. * Note: I did not at first recognize that a numbered list was being used. I thought the list was being explicitly generated.
{ "language": "en", "url": "https://stackoverflow.com/questions/10877", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "127" }
Q: Any good advice on using emacs for a C++ project? I'm looking for a good article on using emacs as a C/C++ IDE. Something like Steve Yegge's "Effective emacs". A: Be aware that Emacs' C++ mode is based on only regular expressions, not a grammar. Hence, the syntax highlighting is not based strictly on the syntax of the language itself, but rather is largely based on commonplace formatting. The Emacs syntax highlighting of C++ often makes mistakes. The problem is not limited to syntax highlighting. The same defective design applies to the automatic formatting. All this said, I have been using only Emacs for all of my editing of C++ source code for over 20 years, since the cfront days. (I usually turn off electric key bindings, because of Emacs' defective regex-based design. Regexes do not have enough expressive power to describe the C++ syntax accurately.) A: No specific article, really, but I've found EmacsWiki to be full of useful information. Consider checking out these entries: * *CPlusPlus as a starting point for many C++-related articles, and *CppTemplate to define a template that can give you a good skeleton when you start new files A: I've recently stumbled upon this article which is quite good. EDIT: Yep, the link is no longer valid. It seems like they've changed their url recently and it doesn't redirect properly. Hopefully it will be back soon. Anyway, the article was called "Benjamin Rutt's Emacs C development tips". I managed to find a copy here. A: I'm planning to write such an article in the near future, but for now you can take my configuration of Cedet + Emacs, which helps me edit C++ sources effectively. If you have questions, you can ask me directly. A: I recommend ggtags and irony-mode. Other than that you may want to use helm-ag to search ("grep") for strings in your codebase.
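A minimal init-file sketch for the ggtags/irony-mode suggestion, in case it helps someone get started. The package names are the ones mentioned above; the hooks and keybinding are illustrative, not a complete setup:

;; assumes ggtags and irony are already installed (e.g. via package.el / MELPA)
(require 'ggtags)
(add-hook 'c++-mode-hook #'ggtags-mode)        ; GNU Global based find-definition/references
(define-key ggtags-mode-map (kbd "M-.") #'ggtags-find-tag-dwim)

(add-hook 'c++-mode-hook #'irony-mode)         ; clang-backed completion and diagnostics
(add-hook 'irony-mode-hook #'irony-cdb-autosetup-compile-options)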
{ "language": "en", "url": "https://stackoverflow.com/questions/10880", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "39" }
Q: Best way to connect To Sql Server with MFC Soooo, I am starting this new job soon where most of the code is legacy MFC. The end goal is to convert/rewrite to C#. I'm a little rusty with MFC so writing some code so I can try and spark my memory. Anyway, was wondering the best way to connect to a SS2005 database with MFC. Back in the day CDatabase via ODBC was the way to go, has that changed in the last few years? I see you can use ATL and ADO, but looked a little, well verbose, for my liking? A: The ATL consumer templates for OleDb are an option - start here. The ClassWizard is still there to assist you so the verbosity isn't too much of a hurdle at first. Very soon you will need to hand-code though. There is a lot of careful twiddling, for example ensuring that your command string has exactly the right number of ? marks corresponding to the COLUMN_ENTRYs for an accessor. Then you'll probably have a million CopyToCommandFromObject and CopyToObjectFromCommand methods. This app doesn't have any data access yet and you're going to be adding it? If so, I would seriously consider implementing a modern DAL (ADO.Net, linq if you're lucky enough to be on 2008) in a separate managed assembly and doing some interop.
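To make the consumer-template pattern above a little more concrete, here is a rough, hedged sketch. The table, columns, and connection string are invented, error handling is minimal, and it assumes COM has already been initialized (as it will be in a normal MFC app):

#include <atldbcli.h>

class COrderAccessor
{
public:
    LONG  m_nOrderId;
    TCHAR m_szCustomer[51];

    BEGIN_COLUMN_MAP(COrderAccessor)
        COLUMN_ENTRY(1, m_nOrderId)
        COLUMN_ENTRY(2, m_szCustomer)
    END_COLUMN_MAP()
};

HRESULT DumpOrders()
{
    CDataSource ds;
    CSession    session;

    // SQL Native Client provider for SQL Server 2005; values are placeholders
    HRESULT hr = ds.OpenFromInitializationString(
        L"Provider=SQLNCLI;Data Source=myServer;Initial Catalog=myDb;Integrated Security=SSPI;");
    if (FAILED(hr)) return hr;

    hr = session.Open(ds);
    if (FAILED(hr)) return hr;

    CCommand<CAccessor<COrderAccessor> > cmd;
    hr = cmd.Open(session, _T("SELECT OrderId, Customer FROM Orders"));
    if (FAILED(hr)) return hr;

    while (cmd.MoveNext() == S_OK)
    {
        // use cmd.m_nOrderId / cmd.m_szCustomer here
    }
    cmd.Close();
    return S_OK;
}

For parameterised commands you add a parameter map to the accessor, which is where the careful matching of ? marks to COLUMN_ENTRY/PARAM entries mentioned above comes in.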
{ "language": "en", "url": "https://stackoverflow.com/questions/10891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to set up a DB2 linked server on a 64-bit SQL Server 2005? I need to create a linked server to a DB2 database on a mainframe. Has anyone done this successfully on a 64-bit version of SQL Server 2005? If so, which provider and settings were used? It's important that the linked server work whether we are using a Windows authenticated account to login to SQL Server or a SQL Server login. It's also important that both the 4-part name and OPENQUERY query methods are functional. We have one set up on a SQL Server 2000 machine that works well, but it uses a provider that's not available for 64-bit SS 2005. A: We had this same issue with a production system late last year (sept 2007) and the official word from our Microsoft contact was that they had a 64 bit oledb driver to connect to ASI/DB2 but it was in BETA at the time. Not sure when it will be out of beta but that was the news as of last year. We decided to move the production server onto a 32 bit machine since we were not comfortable using beta drivers on production systems. I know this doesn't answer your question but it hopefully gives you some insight A: What provider are you using for Sql 2000? I'm pretty sure MS has an x64 OLEDB driver for DB2 (part of Host Integration Server, but available as a separate download). IBM has x64 for .NET and ODBC, and possible OLEDB as well (though it's a PITA to find). Once you get the linked server setup, I'm pretty sure all of your other requirements would be automatic.... A: From the Sql 2005 February 2007 Feature Pack: The Microsoft OLE DB Provider for DB2 is a COM component for integrating vital data stored in IBM DB2 databases with new solutions based on Microsoft SQL Server 2005 Enterprise Edition and Developer Edition. SQL Server developers and administrators can use the provider with Integration Services, Analysis Services, Replication, Reporting Services, and Distributed Query Processor. Run the self-extracting download package to create an installation folder. The single setup program will install the provider and tools on x86, x64, and IA64 computers.
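Whichever provider you end up installing, the linked-server plumbing itself is only a few calls. The values below (provider ProgID, connection string, catalog/schema names, credentials) are illustrative and will depend on your provider and mainframe setup; the HIS/feature-pack OLE DB provider typically registers as "DB2OLEDB":

-- Create the linked server (connection string details are placeholders)
EXEC sp_addlinkedserver
    @server      = 'MAINFRAME_DB2',
    @srvproduct  = 'Microsoft OLE DB Provider for DB2',
    @provider    = 'DB2OLEDB',
    @catalog     = 'MYDB2',
    @provstr     = 'Network Address=mymainframe;Initial Catalog=MYDB2;Package Collection=MYPKG;Default Schema=MYSCHEMA;';

-- Map SQL Server logins (Windows or SQL) to DB2 credentials
EXEC sp_addlinkedsrvlogin
    @rmtsrvname  = 'MAINFRAME_DB2',
    @useself     = 'FALSE',
    @rmtuser     = 'DB2USER',
    @rmtpassword = '********';

-- Both query styles the question asks for:
SELECT * FROM OPENQUERY(MAINFRAME_DB2, 'SELECT * FROM MYSCHEMA.MYTABLE');
SELECT * FROM MAINFRAME_DB2.MYDB2.MYSCHEMA.MYTABLE;   -- 4-part name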
{ "language": "en", "url": "https://stackoverflow.com/questions/10898", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Future proofing a large UI Application - MFC with 2008 Feature pack, or C# and Winforms? My company has developed a long standing product using MFC in Visual C++ as the de facto standard for UI development. Our codebase contains a LOT of legacy/archaic code which must be kept operational. Some of this code is older than me (originally written in the late 70s) and some members of our team are still on Visual Studio 6. However, a conclusion has thankfully been reached internally that our product is looking somewhat antiquated compared to our competitors', and that something needs to be done. I am currently working on a new area of the UI which is quite separate from the rest of the product. I have therefore been given the chance to try out 'new' technology stacks as a sort of proving ground before the long process of moving over the rest of the UI begins. I have been using C# with Windows Forms and the .NET framework for a while in my spare time and enjoy it, but am somewhat worried about the headaches caused by interop. While this particular branch of the UI won't require much interop with the legacy C++ codebase, I can foresee this becoming an issue in the future. The alternative is just to continue with MFC, but try to take advantage of the new feature pack that shipped with VS2008. This I guess is the easiest option, but I worry about longevity and not taking advantage of the goodness that is .NET... So, which do I pick? We're a small team, so my recommendation will quite probably be accepted as a future direction for our development - I want to get it right. Is MFC dead? Is C#/Winforms the way forward? Is there anything else I'm totally missing? Help greatly appreciated! A: I'm a developer on an app that has a ton of legacy MFC code, and we have all of your same concerns. A big driver for our strategy was to eliminate as much risk and uncertainty as we could, which meant avoiding The Big Rewrite. As we all know, TBR fails most of the time. So we chose an incremental approach that allows us to preserve modules that won't be changing in the current release, writing new features managed, and porting features that are getting enhancements to managed. You can do this several ways: * *Host WPF content on your MFC views (see here) *For MFC MDI apps, create a new WinForms framework and host your MFC MDI views (see here) *Host WinForms user controls in MFC Dialogs and Views (see here) The problem with adopting WPF (option 1) is that it will require you to rewrite all of your UI at once, otherwise it'll look pretty schizophrenic. The second approach looks viable but very complicated. The third approach is the one we selected and it's been working very well. It allows you to selectively refresh areas of your app while maintaining overall consistency and not touching things that aren't broken. The Visual C++ 2008 Feature Pack looks interesting; I haven't played with it though. It seems like it might help with your issue of an outdated look. If the "ribbon" would be too jarring for your users you could look at third-party MFC and/or WinForms control vendors. My overall recommendation is that interop + incremental change is definitely preferable to sweeping changes. After reading your follow-up, I can definitely confirm that the productivity gains of the framework vastly outweigh the investment in learning it. Nobody on our team had used C# at the start of this effort and now we all prefer it.
A: Depending on the application and the willingness of your customers to install .NET (not all of them are), I would definitely move to WinForms or WPF. Interop with C++ code is hugely simplified by refactoring non-UI code into class libraries using C++/CLI (as you've noted in your selection of tags). The only issue with WPF is that it may be hard to maintain the current look-and-feel. Moving to WinForms can be done while maintaining the current look of your GUI. WPF uses such a different model that to attempt to keep the current layout would probably be futile and would definitely not be in the spirit of WPF. WPF also apparently has poor performance on pre-Vista machines when more than one WPF process is running. My suggestion is to find out what your clients are using. If most have moved to Vista and your team is prepared to put in a lot of GUI work, I would say skip WinForms and move to WPF. Otherwise, definitely look seriously at WinForms. In either case, a class library in C++/CLI is the answer to your interop concerns. A: You don't give a lot of detail on what your legacy code does or how it's structured. If you have certain performance criteria you might want to maintain some of your codebase in C++. You'll have an easier time doing interop with your old code if it is exposed in the right way - can you call into the existing codebase from C# today? Might be worth thinking about a project to get this structure right. On the point of WPF, you could argue that WinForms may be more appropriate. Moving to WinForms is a big step for you and your team. Perhaps they may be more comfortable with the move to WinForms? It's better documented, more experience in the market, and useful if you still need to support windows 2000 clients. You might be interested in Extending MFC Applications with the .NET Framework Something else to consider is C++/CLI, but I don't have experience with it. A: Thank you all kindly for your responses, it's reassuring to see that generally the consensus follows my line of thinking. I am in the fortunate situation that our software also runs on our own custom hardware (for the broadcast industry) - so the choice of OS is really ours and is thrust upon our customers. Currently we're running XP/2000, but I can see a desire to move up to Vista soon. However, we also need to maintain very fine control over GPU performance, which I guess automatically rules out WPF and hardware acceleration? I should have made that point in my original post - sorry. Perhaps it's possible to use two GPUs... but that's another question altogether... The team doesn't have any significant C# experience and I'm no expert myself, but I think the overall long term benefits of a managed environment probably outweigh the time it'll take to get up to speed. Looks like Winforms and C# have it for now. A: Were you to look at moving to C# and therefore .NET, I would consider Windows Presentation Foundation rather than WinForms. WPF is the future of smart clients in .NET, and the skills you pick up you'll be able to reuse if you want to make browser-hosted Silverlight applications. A: I concur with the WPF sentiment. Tag/XML based UI would seem to be a bit more portable than WinForms. I guess too you have to consider your team, if there is not a lot of current C# skills, then that is a factor, but going forward the market for MFC developers is diminishing and C# is growing. Maybe some kind of piecemeal approach would be possible? 
I have been involved with recoding legacy applications to C# quite a bit, and it always takes a lot longer than you would estimate, especially if you are keeping some legacy code, or your team isn't that conversant with C#.
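For the interop route mentioned in a couple of the answers above, a C++/CLI bridge assembly is usually just a thin ref class over the native code. A hedged sketch with invented names (LegacyPricingEngine stands in for whatever native/MFC class you actually need to expose):

// Compiled with /clr into a separate bridge assembly
#include "LegacyPricingEngine.h"      // hypothetical existing native header

public ref class PricingBridge
{
public:
    PricingBridge() : m_native(new LegacyPricingEngine()) { }
    ~PricingBridge()  { this->!PricingBridge(); }               // dispose
    !PricingBridge()  { delete m_native; m_native = nullptr; }  // finalize

    double Quote(double amount)
    {
        return m_native->Quote(amount);   // straight call into the native code
    }

private:
    LegacyPricingEngine* m_native;        // a ref class can hold a plain native pointer
};

C# code then just references the bridge assembly and calls new PricingBridge().Quote(...) like any other .NET type, which keeps the managed UI free of P/Invoke declarations.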
{ "language": "en", "url": "https://stackoverflow.com/questions/10901", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Can you use generic forms in C#? You should be able to create a generic form: public partial class MyGenericForm<T> : Form where T : class { /* form code */ public List<T> TypedList { get; set; } } This is valid C# and compiles. However, the designer won't work, and if the form contains any images it will throw a runtime exception stating that it cannot find the resource. I think this is because the Windows Forms designer assumes that the resources will be stored under the simple type's name. A: Yes you can! Here's a blog post I made a while ago with the trick: Designing Generic Forms Edit: Looks like you're already doing it this way. This method works fine so I wouldn't consider it too hacky. A: I have a hack to work around this, which works but isn't ideal: Add a new class to the project that inherits the form with its simple name. internal class MyGenericForm: MyGenericForm<object> { } This means that although the designer is still wrong, the expected simple type (i.e. without <>) is still found. A: You can do it in three steps. 1) Replace in Form1.cs: public partial class Form1<TEntity, TContext> : Formbase // where.... 2) Replace in Form1.Designer.cs: partial class Form1<TEntity, TContext> 3) Create a new file, Form1.Generic.cs (so the designer can open the form): partial class Form1 { } A: If paleolithic code doesn't scare you: public static MyForm GetInstance<T>(T arg) where T : MyType { MyForm myForm = new MyForm(); myForm.InitializeStuffs<T>(arg); myForm.StartPosition = FormStartPosition.CenterParent; return myForm; } Use it: var myFormInstance = MyForm.GetInstance<T>(arg); myFormInstance.ShowDialog(this);
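A small hypothetical usage sketch of the shim workaround above, where Customer is a stand-in for whatever T you actually use: the designer only ever opens the non-generic MyGenericForm class, while runtime code closes the generic type itself.

// Design-time: the designer opens the non-generic MyGenericForm shim.
// Run-time: construct the closed generic form directly.
MyGenericForm<Customer> form = new MyGenericForm<Customer>();
form.TypedList = new List<Customer> { new Customer() };
form.ShowDialog();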
{ "language": "en", "url": "https://stackoverflow.com/questions/10905", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32" }
Q: Best way to model Many-To-One Relationships in NHibernate When Dealing With a Legacy DB? Warning - I am very new to NHibernate. I know this question seems simple - and I'm sure there's a simple answer, but I've been spinning my wheels for some time on this one. I am dealing with a legacy db which really can't be altered structurally. I have a details table which lists payment plans that have been accepted by a customer. Each payment plan has an ID which links back to a reference table to get the plan's terms, conditions, etc. In my object model, I have an AcceptedPlan class, and a Plan class. Originally, I used a many-to-one relationship from the detail table back to the ref table to model this relationship in NHibernate. I also created a one-to-many relationship going in the opposite direction from the Plan class over to the AcceptedPlan class. This was fine while I was simply reading data. I could go to my Plan object, which was a property of my AcceptedPlan class, to read the plan's details. My problem arose when I had to start inserting new rows into the details table. From my reading, it seems the only way to create a new child object is to add it to the parent object and then save the session. But I don't want to have to create a new parent Plan object every time I want to create a new detail record. This seems like unnecessary overhead. Does anyone know if I am going about this in the wrong way? A: I'd steer away from having child objects contain their logical parent; it can get very messy and very recursive pretty quickly when you do that. I'd take a look at how you're intending to use the domain model before you do that sort of thing. You can easily still have the ID references in the tables and just leave them unmapped. Here are two example mappings that might nudge you in the right direction. I've had to ad-lib table names etc. but it could possibly help. I'd probably also suggest mapping the StatusId to an enumeration. Pay attention to the way the bag effectively maps the details table into a collection.
<?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping default-cascade="save-update" xmlns="urn:nhibernate-mapping-2.2"> <class lazy="false" name="Namespace.Customer, Namespace" table="Customer"> <id name="Id" type="Int32" unsaved-value="0"> <column name="CustomerAccountId" length="4" sql-type="int" not-null="true" unique="true" index="CustomerPK"/> <generator class="native" /> </id> <bag name="AcceptedOffers" inverse="false" lazy="false" cascade="all-delete-orphan" table="details"> <key column="CustomerAccountId" foreign-key="AcceptedOfferFK"/> <many-to-many class="Namespace.AcceptedOffer, Namespace" column="AcceptedOfferFK" foreign-key="AcceptedOfferID" lazy="false" /> </bag> </class> </hibernate-mapping> <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping default-cascade="save-update" xmlns="urn:nhibernate-mapping-2.2"> <class lazy="false" name="Namespace.AcceptedOffer, Namespace" table="AcceptedOffer"> <id name="Id" type="Int32" unsaved-value="0"> <column name="AcceptedOfferId" length="4" sql-type="int" not-null="true" unique="true" index="AcceptedOfferPK"/> <generator class="native" /> </id> <many-to-one name="Plan" class="Namespace.Plan, Namespace" lazy="false" cascade="save-update" > <column name="PlanFK" length="4" sql-type="int" not-null="false"/> </many-to-one> <property name="StatusId" type="Int32"> <column name="StatusId" length="4" sql-type="int" not-null="true"/> </property> </class> </hibernate-mapping> A: Didn't see your database diagram whilst I was writing. <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping default-cascade="save-update" xmlns="urn:nhibernate-mapping-2.2"> <class lazy="false" name="Namespace.Customer, Namespace" table="Customer"> <id name="Id" type="Int32" unsaved-value="0"> <column name="customer_id" length="4" sql-type="int" not-null="true" unique="true" index="CustomerPK"/> <generator class="native" /> </id> <bag name="AcceptedOffers" inverse="false" lazy="false" cascade="all-delete-orphan"> <key column="accepted_offer_id"/> <one-to-many class="Namespace.AcceptedOffer, Namespace"/> </bag> </class> </hibernate-mapping> <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping default-cascade="save-update" xmlns="urn:nhibernate-mapping-2.2"> <class lazy="false" name="Namespace.AcceptedOffer, Namespace" table="Accepted_Offer"> <id name="Id" type="Int32" unsaved-value="0"> <column name="accepted_offer_id" length="4" sql-type="int" not-null="true" unique="true" /> <generator class="native" /> </id> <many-to-one name="Plan" class="Namespace.Plan, Namespace" lazy="false" cascade="save-update"> <column name="plan_id" length="4" sql-type="int" not-null="false"/> </many-to-one> </class> </hibernate-mapping> Should probably do the trick (I've only done example mappings for the collections, you'll have to add other properties). A: The approach I'd take to model this is as follows: Customer object contains an ICollection <PaymentPlan> PaymentPlans which represent the plans that customer has accepted. The PaymentPlan to the Customer would be mapped using a bag which uses the details table to establish which customer id's mapped to which PaymentPlans. Using cascade all-delete-orphan, if the customer was deleted, both the entries from details and the PaymentPlans that customer owned would be deleted. The PaymentPlan object contains a PlanTerms object which represented the terms of the payment plan. 
The PlanTerms would be mapped to a PaymentPlan using a many-to-one mapping, cascading save-update, which would just insert a reference to the relevant PlanTerms object into the PaymentPlan. Using this model, you could create PlanTerms independently, and then when you add a new PaymentPlan to a customer, you'd create a new PaymentPlan object passing in the relevant PlanTerms object and then add it to the collection on the relevant Customer. Finally you'd save the Customer and let NHibernate cascade the save operation. You'd end up with a Customer object, a PaymentPlan object and a PlanTerms object, with the Customer (customer table) owning instances of PaymentPlans (the details table) which all adhere to specific PlanTerms (the plan table). I've got some more concrete examples of the mapping syntax if required, but it's probably best to work it through with your own model and I don't have enough information on the database tables to provide any specific examples. (A rough sketch of what these classes might look like is included at the end of this question.) A: This may just be because my NHibernate experience is limited, but could you create a BaseDetail class which has just the properties for the Details as they map directly to the Detail table? Then create a second class that inherits from the BaseDetail class and has the additional parent Plan object, so you can create a BaseDetail when you want to just create a Detail row and assign the PlanId to it, but if you need to populate a full Detail record with the parent Plan object you can use the inherited Detail class. I don't know if that makes a whole lot of sense, but let me know and I'll clarify further. A: I think the problem you have here is that your AcceptedOffer object contains a Plan object, and then your Plan object appears to contain an AcceptedOffers collection that contains AcceptedOffer objects. Same thing with Customers. The fact that the objects are a child of each other is what causes your problem, I think. Likewise, what makes your AcceptedOffer complex is that it has two responsibilities: it indicates offers included in a plan, and it indicates acceptance by a customer. That violates the Single Responsibility Principle. You may have to differentiate between an Offer that is under a Plan, and an Offer that is accepted by customers. So here's what I would do: * *Create a separate Offer object which does not have a state, e.g., it does not have a customer and it does not have a status -- it only has an OfferId and the Plan it belongs to as its attributes. *Modify your Plan object to have an Offers collection (it does not have to have accepted offers in its context). *Finally, modify your AcceptedOffer object so that it contains an Offer, the Customer, and a Status. Customer remains the same. I think this will sufficiently untangle your NHibernate mappings and object saving problems. :) A: A tip that may (or may not) be helpful in NHibernate: you can map your objects against views as though the view was a table. Just specify the view name as a table name; as long as all NOT NULL fields are included in the view and the mapping it will work fine.
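As promised above, a hedged C# sketch of the Customer / PaymentPlan / PlanTerms shape described in the first follow-up answer. The class and property names are invented, and the virtual members assume NHibernate's lazy-loading proxies:

using System.Collections.Generic;

public class PlanTerms
{
    public virtual int Id { get; set; }
    public virtual string Description { get; set; }
}

public class PaymentPlan
{
    public virtual int Id { get; set; }
    public virtual PlanTerms Terms { get; set; }      // many-to-one -> plan table
}

public class Customer
{
    private IList<PaymentPlan> acceptedPlans = new List<PaymentPlan>();

    public virtual int Id { get; set; }

    public virtual IList<PaymentPlan> AcceptedPlans   // bag -> details table
    {
        get { return acceptedPlans; }
    }

    public virtual void Accept(PlanTerms terms)
    {
        PaymentPlan plan = new PaymentPlan();
        plan.Terms = terms;
        acceptedPlans.Add(plan);
    }
}

Saving the Customer then lets the cascade settings on the bag and the many-to-one handle the inserts into the details table without touching the plan reference data.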
{ "language": "en", "url": "https://stackoverflow.com/questions/10915", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do you measure SQL Fill Factor value Usually when I'm creating indexes on tables, I generally guess what the Fill Factor should be based on an educated guess of how the table will be used (many reads or many writes). Is there a more scientific way to determine a more accurate Fill Factor value? A: I would tend to be of the opinion that if you're after performance improvements, your time is much better spent elsewhere, tweaking your schema, optimising your queries and ensuring good index coverage. Fill factor is one of those things that you only need to worry about when you know that everything else in your system is optimal. I don't know anyone that can say that. A: You could try running a big list of realistic operations and looking at IO queues for the different actions. There are a lot of variables that govern it, such as the size of each row and the number of writes vs reads. Basically: high fill factor = quicker read, low = quicker write. However it's not quite that simple, as almost all writes will be to a subset of rows that need to be looked up first. For instance: set a fill factor to 10% and each single-row update will take 10 times as long to find the row it's changing, even though a page split would then be very unlikely. Generally you see fill factors 70% (very high write) to 95% (very high read). It's a bit of an art form. I find that a good way of thinking of fill factors is as pages in an address book - the more tightly you pack the addresses the harder it is to change them, but the slimmer the book. I think I explained it better on my blog.
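If you would rather measure than guess, a reasonable starting point is to rebuild an index with an explicit fill factor, run a realistic workload against it, and then look at how full and how fragmented the pages actually end up. The table and index names below are placeholders:

-- Rebuild with an explicit fill factor (placeholder names)
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders
    REBUILD WITH (FILLFACTOR = 90);

-- After running a representative workload, check page fullness and fragmentation
SELECT index_id,
       avg_page_space_used_in_percent,
       avg_fragmentation_in_percent
FROM   sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Orders'),
                                      NULL, NULL, 'SAMPLED');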
{ "language": "en", "url": "https://stackoverflow.com/questions/10919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Deleting / Replacing A Node in E4X (AS3 - Flex) I'm building a listing/grid control in a Flex application and using it in a .NET web application. To make a really long story short I am getting XML from a webservice of serialized objects. I have a page limit of how many things can be on a page. I've taken a data grid and made it page, sort across pages, and handle some basic filtering. In regards to paging I'm using a Dictionary keyed on the page and storing the XML for that page. This way whenever a user comes back to a page that I've saved into this dictionary I can grab the XML from local memory instead of hitting the webservice. Basically, I'm caching the data retrieved from each call to the webservice for a page of data. There are several things that can expire my cache. Filtering and sorting are the main reason. However, a user may edit a row of data in the grid by opening an editor. The data they edit could cause the data displayed in the row to be stale. I could easily go to the webservice and get the whole page of data, but since the page size is set at runtime I could be looking at a large amount of records to retrieve. So let me now get to the heart of the issue that I am experiencing. In order to prevent getting the whole page of data back I make a call to the webservice asking for the completely updated record (the editor handles saving its data). Since I'm using custom objects I need to serialize them on the server to XML (this is handled already for other portions of our software). All data is handled through XML in e4x. The cache in the Dictionary is stored as an XMLList. Now let me show you my code... var idOfReplacee:String = this._WebService.GetSingleModelXml.lastResult.*[0].*[0].@Id; var xmlToReplace:XMLList = this._DataPages[this._Options.PageIndex].Data.(@Id == idOfReplacee); if(xmlToReplace.length() > 0) { delete (this._DataPages[this._Options.PageIndex].Data.(@Id == idOfReplacee)[0]); this._DataPages[this._Options.PageIndex].Data += this._WebService.GetSingleModelXml.lastResult.*[0].*[0]; } Basically, I get the id of the node I want to replace. Then I find it in the cache's Data property (XMLList). I make sure it exists since the filter on the second line returns the XMLList. The problem I have is with the delete line. I cannot make that line delete that node from the list. The line following the delete line works. I've added the node to the list. How do I replace or delete that node (meaning the node that I find from the filter statement out of the .Data property of the cache)??? Hopefully the underscores for all of my variables do not stay escaped when this is posted! otherwise this.&#95 == this._ A: Thanks for the answers guys. @Theo: I tried the replace several different ways. For some reason it would never error, but never update the list. @Matt: I figured out a solution. The issue wasn't coming from what you suggested, but from how the delete works with Lists (at least how I have it in this instance). The Data property of the _DataPages dictionary object is list of the definition nodes (was arrived at by a previous filtering of another XML document). 
<Models> <Definition Id='1' /> <Definition Id='2' /> </Models> I ended up doing this little deal: //gets the index of the node to replace from the same filter var childIndex:int = (this._DataPages[this._Options.PageIndex].Data.(@Id == idOfReplacee)[0]).childIndex(); //deletes the node from the list delete this._DataPages[this._Options.PageIndex].Data[childIndex]; //appends the new node from the webservice to the list this._DataPages[this._Options.PageIndex].Data += this._WebService.GetSingleModelXml.lastResult.*[0].*[0]; So basically I had to get the index of the node in the XMLList that is the Data property. From there I could use the delete keyword to remove it from the list. The += adds my new node to the list. I'm so used to using the ActiveX or Mozilla XmlDocument stuff where you call "SelectSingleNode" and then use "replaceChild" to do this kind of stuff. Oh well, at least this is in some forum where someone else can find it. I do not know the procedure for what happens when I answer my own question. Perhaps this insight will help someone else come along and help answer the question better! A: Perhaps you could use replace instead? var oldNode : XML = this._DataPages[this._Options.PageIndex].Data.(@Id == idOfReplacee)[0]; var newNode : XML = this._WebService.GetSingleModelXml.lastResult.*[0].*[0]; oldNode.parent.replace(oldNode, newNode); A: I know this is an incredibly old question, but I don't see (what I think is) the simplest solution to this problem. Theo had the right direction here, but there's a number of errors with the way replace was being used (and the fact that pretty much everything in E4X is a function). I believe this will do the trick: oldNode.parent().replace(oldNode.childIndex(), newNode); replace() can take a number of different types in the first parameter, but AFAIK, XML objects are not one of them. A: I don't immediately see the problem, so I can only venture a guess. The delete line that you've got is looking for the first item at the top level of the list which has an attribute "Id" with a value equal to idOfReplacee. Ensure that you don't need to dig deeper into the XML structure to find that matching id. Try this instead: delete (this._DataPages[this._Options.PageIndex].Data..(@Id == idOfReplacee)[0]); (Notice the extra '.' after Data). You could more easily debug this by setting a breakpoint on the second line of the code you posted, and ensure that the XMLList looks like you expect.
{ "language": "en", "url": "https://stackoverflow.com/questions/10926", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Can you recommend a good .NET web-based repository browser for SVN? We have an SVN repository running on a Windows server, and I want to link internal documentation, feature changes, bugs and so on to code changes. We've found WebSVN to be amazingly slow - the repository is too large for it (I think). The team using it is primarily coding in C#, and while some have experience with other languages I'd really like a tool anyone on the team can maintain. Most of the tools I've seen are based on PHP, Java, Python, etc. All languages the team could learn, but I'd rather something that uses the skills we already have. Can you recommend a good web-based repository browser for SVN, ideally one that uses ASP.NET, SQL Server and that runs on IIS? A: Have a look at http://warehouseapp.com It's a Mongrel/Ruby/MySQL stack (should work on Windows though) but I'm looking to avoid installing MySQL and Ruby on the server. I know (I'm also using a C# stack myself), but the self-hosted, web-based SVN client market is such a small niche that even an offering in a different language could be considered good enough. MySQL doesn't bite, and installing Ruby is pretty much an x-copy command. I understand why you don't want to spoil your server with additional software though, but if you are OK with hosting your SVN repositories with a third party, you get a nice web-based interface without maintenance hassles. I'm using http://unfuddled.com (they also have a basic API to hook up to if needed). Regarding the suggestion below to write your own browser with the svn log --xml command: this is actually a good idea. I'm also parsing some XML-formatted output during my automated build process, but creating our own full-blown SVN browser is kind of overkill, because now you have to maintain not only your primary project but also the tool. But then again, we programmers love to create tools that will make working on our primary projects easier. An ASP.NET SVN browser sounds like a promising open-source idea - anybody willing to start work on it? I would contribute. A: Not to promote reinventing the wheel, but I originally wrote my own web SVN browser by using the svn log --xml command and then just an XML parser in whatever language I was using. I don't use .NET, but it shouldn't be too hard. A: I use Warehouse, as Lubos already pointed out, and it works very well. I looked at one point for a .NET version, but I was never able to find one. I was also at a point where I wanted to better myself as a programmer by learning a new language, and I ventured into learning Ruby and Ruby on Rails. Now, I program in both .NET and Ruby. Anyway, that is how I ran into Warehouse. I have Warehouse installed on a Linux machine running the Ubuntu server edition, nginx for the HTTP server, and a mongrel cluster. I never even tried to install it on Windows and am glad that I didn't. Warehouse requires the svn-ruby bindings to work, and this poor guy found out the hard way. Well, I know you are looking for a .NET application, but I thought I would give my two cents on Warehouse and I hope you don't dismiss it just because it doesn't run in .NET. I also wanted to inform you not to install Warehouse on Windows, if you did decide to give it a shot. A: Is your Subversion repository hosted inside of Apache (rather than svnserve)? If so, and your needs are very simple, you can access the repository directly through a web browser.
Just take the repository URL, plop it in the browser, and you'll see a very rudimentary web navigation interface (basically the built-in Apache folder browsing interface). It's not pretty, but it works for basic linking to repository files if that's all you need. A: This isn't necessarily the answer to your question, but it seems like most other answers also mention related solutions, so I think this is worthwhile. http://ifdefined.com/doc_bug_tracker_subversion.html It's an open source project called BugTracker.NET. It's primarily an issue tracker (one we use well here), but it does include Subversion integration with, among other things, the ability to view diffs. We haven't implemented that piece but it looks fairly nice from the screen shots. It's IIS/MSSQL based, so it's a Windows deployment-friendly solution. I've found it to be solid, if a bit tricky to set up. It's not a raw repository browser, but close and hits on some of the other qualities you were looking for. A: Not web-based but if your team is using TortoiseSVN there's a great repository browser there. Just right click on your local checkout and select TortoiseSVN / Repo-Browser.
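For anyone tempted by the "svn log --xml plus an XML parser" answer above, a rough C# sketch of the idea. The svn path, repository URL and limit are placeholders, and error handling is omitted:

using System;
using System.Diagnostics;
using System.Xml;

class SvnLogDump
{
    static void Main()
    {
        ProcessStartInfo psi = new ProcessStartInfo("svn",
            "log --xml --limit 20 http://server/svn/repo/trunk");
        psi.RedirectStandardOutput = true;
        psi.UseShellExecute = false;

        using (Process p = Process.Start(psi))
        {
            XmlDocument doc = new XmlDocument();
            doc.Load(p.StandardOutput);   // svn log --xml emits <log><logentry .../></log>

            foreach (XmlNode entry in doc.SelectNodes("/log/logentry"))
            {
                Console.WriteLine("r{0} by {1}: {2}",
                    entry.Attributes["revision"].Value,
                    entry.SelectSingleNode("author").InnerText,
                    entry.SelectSingleNode("msg").InnerText);
            }
        }
    }
}

From there it is a short step to rendering the entries from an ASP.NET page and joining the revision numbers to your documentation or bug records.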
{ "language": "en", "url": "https://stackoverflow.com/questions/10933", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How to trace COM objects exceptions? I have a DLL with some COM objects. Sometimes, these objects crash and register an error event in the Windows Event Log with lots of hexadecimal information. I have no clue why these crashes happen. So, how can I trace those COM objects' exceptions? A: The first step is to look up the fail code's hex value (e.g. E_FAIL, 0x80004005). I've had really good luck with posting that value in Google to get a sense of what the error code means. Then, I just use trial and error to try to isolate the location in code that's failing, and the root cause of the failure. A: If you just want a really quick way to find out what the error code means, you could use the "Error Lookup" tool packaged with Visual Studio (details here). Enter the hex value, and it will give you the string describing that error code. Of course, once you know that, you've still got to figure out why it's happening. A: A good way to look up error (HRESULT) codes is HResult Plus or welt.exe (Windows Error Lookup Tool). I use logging internally in the COM classes to see what is going on. Also, once the COM class is loaded by the executable, you can attach the VS debugger to it and debug the COM code with breakpoints, watches, and all that fun stuff. A: COM objects don't throw exceptions. They return HRESULTs, most of which indicate a failure. So if you're looking for the equivalent of an exception stack trace, you're out of luck. You're going to have to walk through the code by hand and figure out what's going on.
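One small thing that helps while walking through by hand is translating each HRESULT into text at the call site. A hedged C++ sketch (the helper and its usage are made up for illustration):

#include <windows.h>
#include <comdef.h>
#include <iostream>

void ReportComError(HRESULT hr, const wchar_t* context)
{
    if (FAILED(hr))
    {
        _com_error err(hr);   // wraps FormatMessage for system-defined HRESULTs
        std::wcout << context << L" failed: 0x" << std::hex
                   << static_cast<unsigned long>(hr)
                   << L" (" << err.ErrorMessage() << L")" << std::endl;
    }
}

// Usage at each suspect COM call:
// HRESULT hr = pObject->DoWork();
// ReportComError(hr, L"DoWork");

Writing the same text to your own log file gives you a readable trail to line up against the hexadecimal entries in the Event Log.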
{ "language": "en", "url": "https://stackoverflow.com/questions/10935", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Unit tests for deep cloning Let's say I have a complex .NET class, with lots of arrays and other class object members. I need to be able to generate a deep clone of this object - so I write a Clone() method, and implement it with a simple BinaryFormatter serialize/deserialize - or perhaps I do the deep clone using some other technique which is more error prone and I'd like to make sure is tested. OK, so now (ok, I should have done it first) I'd like to write tests which cover the cloning. All the members of the class are private, and my architecture is so good (!) that I haven't needed to write hundreds of public properties or other accessors. The class isn't IComparable or IEquatable, because that's not needed by the application. My unit tests are in a separate assembly to the production code. What approaches do people take to testing that the cloned object is a good copy? Do you write (or rewrite once you discover the need for the clone) all your unit tests for the class so that they can be invoked with either a 'virgin' object or with a clone of it? How would you test if part of the cloning wasn't deep enough - as this is just the kind of problem which can give hideous-to-find bugs later? A: Your method of testing will depend on the type of solution you come up with. If you write some custom cloning code and have to manually implement that in each cloneable type, then you should really test the cloning of each one of those types. Alternatively, if you decide to go a more generic route (where the aforementioned reflection would likely fit in), your tests would only need to test the specific scenarios that your cloning system will have to deal with. To answer your specific questions: Do you write (or rewrite once you discover the need for the clone) all your unit tests for the class so that they can be invoked with either a 'virgin' object or with a clone of it? You should have tests for all the methods that can be performed on both the original and cloned objects. Note that it should be pretty easy to set up a simple test design to support this without manually updating the logic for each test. How would you test if part of the cloning wasn't deep enough - as this is just the kind of problem which can give hideous-to-find bugs later? It depends on the cloning method you choose. If you have to manually update the cloneable types then you should test that each type is cloning all (and only) the members you expect. Whereas, if you are testing a cloning framework, I would create some test cloneable types to test each scenario you need to support. A: There's a really obvious solution that doesn't take nearly as much work: * *Serialize the object into a binary format. *Clone the object. *Serialize the clone into a binary format. *Compare the bytes. Assuming that serialization works - and it had better, because you are using it to clone - this should be easy to maintain. In fact, it will be encapsulated from changes to the structure of your class completely. (A minimal sketch of this check appears below.) A: I'd just write a single test to determine if the clone was correct or not. If the class isn't sealed, you can create a harness for it by extending it, and then exposing all your internals within the child class. Alternatively, you could use reflection (yech), or use MSTest's Accessor generators. You need to clone your object and then go through every single property and variable that your object has and determine if it was copied correctly or cloned correctly.
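Following up on the serialize-both-and-compare suggestion above, a minimal NUnit-style sketch. It assumes the type is marked [Serializable] (which it must be, if Clone() is implemented with BinaryFormatter); Build() is a hypothetical helper that fills a Foo with test data:

using System.IO;
using System.Runtime.Serialization.Formatters.Binary;
using NUnit.Framework;

[TestFixture]
public class CloneTests
{
    [Test]
    public void Clone_ProducesEquivalentSerializedState()
    {
        Foo original = Build();          // hypothetical: builds a fully populated Foo
        Foo clone = original.Clone();

        Assert.AreNotSame(original, clone);
        CollectionAssert.AreEqual(ToBytes(original), ToBytes(clone));
    }

    private static byte[] ToBytes(Foo value)
    {
        using (MemoryStream stream = new MemoryStream())
        {
            new BinaryFormatter().Serialize(stream, value);
            return stream.ToArray();
        }
    }

    private static Foo Build()
    {
        // hypothetical - populate the arrays and nested members here
        return new Foo();
    }
}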
A: I like to write unit tests that use one of the built-in serializers on the original and the cloned object and then check the serialized representations for equality (for a binary formatter, I can just compare the byte arrays). This works great in cases where the object is still serializable, and I'm only changing to a custom deep clone for perf reasons. Furthermore, I like to add a debug mode check to all of my Clone implementations using something like this: [Conditional("DEBUG")] public static void DebugAssertValueEquality<T>(T current, T other, bool expected, params string[] ignoredFields) { if (null == current) { throw new ArgumentNullException("current"); } if (null == ignoredFields) { ignoredFields = new string[] { }; } FieldInfo lastField = null; bool test; if (object.ReferenceEquals(other, null)) { Debug.Assert(false == expected, "The other object was null"); return; } test = true; foreach (FieldInfo fi in current.GetType().GetFields(BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic)) { if (!test) { break; } if (0 <= Array.IndexOf<string>(ignoredFields, fi.Name)) { continue; } lastField = fi; object leftValue = fi.GetValue(current); object rightValue = fi.GetValue(other); if (object.ReferenceEquals(null, leftValue)) { if (!object.ReferenceEquals(null, rightValue)) { test = false; } } else if (object.ReferenceEquals(null, rightValue)) { test = false; } else { if (!leftValue.Equals(rightValue)) { test = false; } } } Debug.Assert(test == expected, string.Format("field: {0}", lastField)); } This method relies on an accurate implementation of Equals on any nested members, but in my case anything that is cloneable is also equatable. A: I would usually implement Equals() for comparing the two objects in depth. You might not need it in your production code but it might still come in handy later and the test code is much cleaner. A: Here is a sample of how I implemented this a while back, although this will need to be tailored to the scenario. In this case we had a nasty object chain that could easily change and the clone was used as a very critical prototype implementation, and so I had to patch (hack) this test together.
public static class TestDeepClone
{
    private static readonly List<long> objectIDs = new List<long>();
    private static readonly ObjectIDGenerator objectIdGenerator = new ObjectIDGenerator();

    public static bool DefaultCloneExclusionsCheck(Object obj)
    {
        return obj is ValueType || obj is string || obj is Delegate || obj is IEnumerable;
    }

    /// <summary>
    /// Executes various assertions to ensure the validity of a deep copy for any object including its compositions
    /// </summary>
    /// <param name="original">The original object</param>
    /// <param name="copy">The cloned object</param>
    /// <param name="checkExclude">A predicate for any exclusions to be done, i.e. not to expect IPolicy items to be cloned</param>
    public static void AssertDeepClone(this Object original, Object copy, Predicate<object> checkExclude)
    {
        bool isKnown;
        if (original == null) return;
        if (copy == null) Assert.Fail("Copy is null while original is not", original, copy);

        var id = objectIdGenerator.GetId(original, out isKnown);

        // Avoid checking the same object more than once
        if (!objectIDs.Contains(id))
        {
            objectIDs.Add(id);
        }
        else
        {
            return;
        }

        if (!checkExclude(original))
        {
            Assert.That(ReferenceEquals(original, copy) == false);
        }

        Type type = original.GetType();
        PropertyInfo[] propertyInfos = type.GetProperties(BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.Public);
        FieldInfo[] fieldInfos = type.GetFields(BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.Public);

        foreach (PropertyInfo memberInfo in propertyInfos)
        {
            var getmethod = memberInfo.GetGetMethod();
            if (getmethod == null) continue;

            var originalValue = getmethod.Invoke(original, new object[] { });
            var copyValue = getmethod.Invoke(copy, new object[] { });
            if (originalValue == null) continue;

            if (!checkExclude(originalValue))
            {
                Assert.That(ReferenceEquals(originalValue, copyValue) == false);
            }

            if (originalValue is IEnumerable && !(originalValue is string))
            {
                var originalValueEnumerable = originalValue as IEnumerable;
                var copyValueEnumerable = copyValue as IEnumerable;
                if (copyValueEnumerable == null) Assert.Fail("Copy is null while original is not", new[] { original, copy });

                int count = 0;
                List<object> items = copyValueEnumerable.Cast<object>().ToList();
                foreach (object o in originalValueEnumerable)
                {
                    AssertDeepClone(o, items[count], checkExclude);
                    count++;
                }
            }
            else
            {
                // Recurse over reference types to check deep clone success
                if (!checkExclude(originalValue))
                {
                    AssertDeepClone(originalValue, copyValue, checkExclude);
                }
                if (originalValue is ValueType && !(originalValue is Guid))
                {
                    // check value of non reference type
                    Assert.That(originalValue.Equals(copyValue));
                }
            }
        }

        foreach (FieldInfo fieldInfo in fieldInfos)
        {
            var originalValue = fieldInfo.GetValue(original);
            var copyValue = fieldInfo.GetValue(copy);
            if (originalValue == null) continue;

            if (!checkExclude(originalValue))
            {
                Assert.That(ReferenceEquals(originalValue, copyValue) == false);
            }

            if (originalValue is IEnumerable && !(originalValue is string))
            {
                var originalValueEnumerable = originalValue as IEnumerable;
                var copyValueEnumerable = copyValue as IEnumerable;
                if (copyValueEnumerable == null) Assert.Fail("Copy is null while original is not", new[] { original, copy });

                int count = 0;
                List<object> items = copyValueEnumerable.Cast<object>().ToList();
                foreach (object o in originalValueEnumerable)
                {
                    AssertDeepClone(o, items[count], checkExclude);
                    count++;
                }
            }
            else
            {
                // Recurse over reference types to check deep clone success
                if (!checkExclude(originalValue))
                {
                    AssertDeepClone(originalValue, copyValue, checkExclude);
                }
                if (originalValue is ValueType && !(originalValue is Guid))
                {
                    // check value of non reference type
                    Assert.That(originalValue.Equals(copyValue));
                }
            }
        }
    }
}
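As a minimal sketch of the serializer-comparison idea from the first answer (this assumes the types involved are marked [Serializable]; the helper class and method names are invented for illustration and are not part of the original code):

using System.IO;
using System.Linq;
using System.Runtime.Serialization.Formatters.Binary;

public static class CloneTestHelper
{
    // Serialize an object graph to a byte array with the binary formatter.
    private static byte[] Serialize(object graph)
    {
        var formatter = new BinaryFormatter();
        using (var stream = new MemoryStream())
        {
            formatter.Serialize(stream, graph);
            return stream.ToArray();
        }
    }

    // True when the original and the clone produce identical serialized representations.
    public static bool SerializesIdentically(object original, object clone)
    {
        return Serialize(original).SequenceEqual(Serialize(clone));
    }
}

A test then only needs to assert that SerializesIdentically(original, original.Clone()) holds, which keeps the comparison logic out of the individual test cases.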
{ "language": "en", "url": "https://stackoverflow.com/questions/10949", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: IKVM and Licensing I have been looking into IKVMing Apache's FOP project to use with our .NET app. It's a commercial product, and looking into licensing, IKVM runs into some sticky areas because of its use of GNU Classpath. From what I've seen, no one can say for sure if this stuff can be used in a commercial product. Has anyone used IKVM, or an IKVM'd product, in a commercial product? Here's what I've found so far: the IKVM license page, which notes that one DLL contains code from other projects whose license is GPLv2 + Classpath Exception; and Saxon for .NET, which is generated with IKVM but released under the Apache license... Anyone have experience with this? A: There are multiple issues here, as IKVM is currently being transitioned away from the GNU Classpath system to Sun's OpenJDK. Both are licensed as GPL+Exceptions to state explicitly that applications which merely use the OpenJDK libraries will not be considered derived works. Generally speaking, applications which rely upon components with defined specs such as this do not fall under the GPL anyway. For example, linking against public POSIX APIs does not trigger GPL reliance in a Linux application, despite the kernel being GPL. A similar principle will usually (the details can be tricky) apply to replacing Sun's Java with a FOSS/GPL implementation. A: Just a quick update on this after noticing the question, for anyone browsing by. IKVM seems to have updated to use the OpenJDK and not GNU Classpath; in fact IKVM.net has removed the comment from its license page. A: I'm not a lawyer, but all the licenses mentioned are okay to use in commercial products as long as you don't make changes or claim the code is yours. I think if you don't want to risk anything you should consult a lawyer.
{ "language": "en", "url": "https://stackoverflow.com/questions/10980", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How to prevent an object being created on the heap? Does anyone know how I can, in platform-independent C++ code prevent an object from being created on the heap? That is, for a class "Foo", I want to prevent users from doing this: Foo *ptr = new Foo; and only allow them to do this: Foo myfooObject; Does anyone have any ideas? Cheers, A: I don't know how to do it reliably and in a portable way.. but.. If the object is on the stack then you might be able to assert within the constructor that the value of 'this' is always close to stack pointer. There's a good chance that the object will be on the stack if this is the case. I believe that not all platforms implement their stacks in the same direction, so you might want to do a one-off test when the app starts to verify which way the stack grows.. Or do some fudge: FooClass::FooClass() { char dummy; ptrdiff_t displacement = &dummy - reinterpret_cast<char*>(this); if (displacement > 10000 || displacement < -10000) { throw "Not on the stack - maybe.."; } } A: @Nick This could be circumvented by creating a class that derives from or aggregates Foo. I think what I suggest (while not robust) would still work for derived and aggregating classes. E.g: struct MyStruct { Foo m_foo; }; MyStruct* p = new MyStruct(); Here I have created an instance of 'Foo' on the heap, bypassing Foo's hidden new operator. A: Because debug headers can override the operator new signature, it is best to use the ... signatures as a complete remedy: private: void* operator new(size_t, ...) = delete; void* operator new[](size_t, ...) = delete; A: Nick's answer is a good starting point, but incomplete, as you actually need to overload: private: void* operator new(size_t); // standard new void* operator new(size_t, void*); // placement new void* operator new[](size_t); // array new void* operator new[](size_t, void*); // placement array new (Good coding practice would suggest you should also overload the delete and delete[] operators -- I would, but since they're not going to get called it isn't really necessary.) Pauldoo is also correct that this doesn't survive aggregating on Foo, although it does survive inheriting from Foo. You could do some template meta-programming magic to HELP prevent this, but it would not be immune to "evil users" and thus is probably not worth the complication. Documentation of how it should be used, and code review to ensure it is used properly, are the only ~100% way. A: You could overload new for Foo and make it private. This would mean that the compiler would moan... unless you're creating an instance of Foo on the heap from within Foo. To catch this case, you could simply not write Foo's new method and then the linker would moan about undefined symbols. class Foo { private: void* operator new(size_t size); }; PS. Yes, I know this can be circumvented easily. I'm really not recommending it - I think it's a bad idea - I was just answering the question! ;-) A: You could declare a function called "operator new" inside the Foo class which would block the access to the normal form of new. Is this the kind of behaviour you want ? A: You could declare it as an interface and control the implementation class more directly from your own code. A: this can be prevented by making constructors private and providing a static member to create an object in the stack Class Foo { private: Foo(); Foo(Foo& ); public: static Foo GenerateInstance() { Foo a ; return a; } } this will make creation of the object always in the stack. 
A: Not sure if this offers any compile-time opportunities, but have you looked at overloading the 'new' operator for your class?
{ "language": "en", "url": "https://stackoverflow.com/questions/10985", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32" }
Q: What are the proper permissions for an upload folder with PHP/Apache? Sorry for the basic question - I'm a .NET developer and don't have much experience with LAMP setups. I have a PHP site that will allow uploads to a specific folder. I have been told that this folder needs to be owned by the webserver user for the upload process to work, so I created the folder and then set permissions as such: chown apache:apache -R uploads/ chmod 755 -R uploads/ The only problem now is that the FTP user can not modify the uploaded files at all. Is there a permission setting that will allow me to still upload files and then modify them later as a user other than the webserver user? A: You can create a new group with both the apache user and FTP user as members and then make the permission on the upload folder 775. This should give both the apache and FTP users the ability to write to the files in the folder but keep everyone else from modifying them. A: I would support the idea of creating a ftp group that will have the rights to upload. However, i don't think it is necessary to give 775 permission. 7 stands for read, write, execute. Normally you want to allow certain groups to read and write, but depending on the case, execute may not be necessary. A: I would go with Ryan's answer if you really want to do this. In general on a *nix environment, you always want to err on giving away as little permissions as possible. 9 times out of 10, 755 is the ideal permission for this - as the only user with the ability to modify the files will be the webserver. Change this to 775 with your ftp user in a group if you REALLY need to change this. Since you're new to php by your own admission, here's a helpful link for improving the security of your upload service: move_uploaded_file A: I will add that if you are using SELinux that you need to make sure the type context is tmp_t You can accomplish this by using the chcon utility chcon -t tmp_t uploads A: What is important is that the apache user and group should have minimum read access and in some cases execute access. For the rest you can give 0 access. This is the most safe setting. A: chmod -R 775 uploads/ chown -R www-data uploads/ Important add to group public for web users standars in ubuntu www-data A: Remember also CHOWN or chgrp your website folder. Try myusername# chown -R myusername:_www uploads A: Based on the answer from @Ryan Ahearn, following is what I did on Ubuntu 16.04 to create a user front that only has permission for nginx's web dir /var/www/html. Steps: * pre-steps: * basic prepare of server, * create user 'dev' which will be the owner of "/var/www/html", * * install nginx, * * * create user 'front' sudo useradd -d /home/front -s /bin/bash front sudo passwd front # create home folder, if not exists yet, sudo mkdir /home/front # set owner of new home folder, sudo chown -R front:front /home/front # switch to user, su - front # copy .bashrc, if not exists yet, cp /etc/skel/.bashrc ~front/ cp /etc/skel/.profile ~front/ # enable color, vi ~front/.bashrc # uncomment the line start with "force_color_prompt", # exit user exit * * add to group 'dev', sudo usermod -a -G dev front * change owner of web dir, sudo chown -R dev:dev /var/www * change permission of web dir, chmod 775 $(find /var/www/html -type d) chmod 664 $(find /var/www/html -type f) * * re-login as 'front' to make group take effect, * * test * * ok *
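A rough sketch of the shared-group approach from the first answer (the group name, FTP user name and upload path here are placeholders, and on Debian/Ubuntu the web server user is typically www-data rather than apache):

groupadd uploaders
usermod -a -G uploaders apache       # the web server user
usermod -a -G uploaders ftpuser      # your FTP account
chown -R apache:uploaders /var/www/site/uploads
chmod -R 775 /var/www/site/uploads
chmod g+s /var/www/site/uploads      # new files inherit the group

The setgid bit on the directory is optional, but it keeps files uploaded later in the shared group so the FTP user can still modify them.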
{ "language": "en", "url": "https://stackoverflow.com/questions/10990", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "60" }
Q: MS Team Foundation Server in distributed environments - hints tips tricks needed Is anyone out there using Team Foundation Server within a team that is geographically distributed? We're in the UK, trying work with a team in Australia and we're finding it quite tough. Our main two issues are: * *Things are being checked out to us without us asking on a get latest. *Even when using a proxy, most thing take a while to happen. Lots of really annoying little things like this are hardening our arteries, stopping us from delivering code and is frankly creating a user experience akin to pushing golden syrup up a sand dune. Is anyone out there actually using TFS in this manner, on a daily basis with (relative) success? If so, do you have any hints, tips, tricks or gotchas that would be worth knowing? P.S. Upgrading to CruiseControl.NET is not an option. A: Definitely upgrade to TFS 2008 and Visual Studio 2008, as it is the "v2" version of Team System in every way. Fixes lots of small and medium sized problems. As for "things being randomly checked out" this is almost always due to Visual Studio deciding to edit files on your behalf. Try getting latest from the Team Explorer, with nothing open in Visual Studio, and see if that behavior persists. I bet it won't! Multiple TFS servers is a bad idea. Make sure your proxy is configured correctly, as it caches repeated GETs. That said, TFS is a server connected model, so it'll always be a bit slower than true "offline" source control systems. Also, if you could edit your question to contain more specific complaints or details, that would help -- right now it's awfully vague, so I can't answer very well. A: We use TFS with a somewhat distributed team - they aren't too far away but connect via a slow and unreliable VPN. For your first issue, get latest on checkout is not the default behaviour. (Here's an explanation) There is an add-in that will do it for you, though. Here's the workflow that works for us: * *Get latest *Build and verify nothing's broken *Work (changes pended) *Get latest again *Deal with merge conflicts *Build and verify nothing's broken *Check in [edit] OK looks like you rephrased this part of the question. Yes, Jeff's right, VS decides to check some files out "for you," like sln and proj files. It also automatically checks out any source file that you edit (that's what you want though, right? although you can change that setting in tools > options > source control) The proxy apparently takes a while to get ramped up (we don't use it) but once it has cached most of the tree it's supposed to be pretty quick. Can you do some monitoring and find the bottleneck(s)? Anything else giving you trouble, other than get-latest-on-checkout and speed? A: From my understanding you can have multiple TFS Application servers in different locations. They either can both talk to the same SQL Server or you could use SQL Server mirroring. Having your own local TFS server would likely speed up your development times.
{ "language": "en", "url": "https://stackoverflow.com/questions/10999", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: What are the advantages of explicit Join Transitive Closure in SQL? When I'm joining three or more tables together by a common column, I'd write my query like this:

SELECT *
FROM a, b, c
WHERE a.id = b.id
AND b.id = c.id

A colleague recently asked me why I didn't do explicit Join Transitive Closure in my queries, like this:

SELECT *
FROM a, b, c
WHERE a.id = b.id
AND b.id = c.id
AND c.id = a.id

Are there really any advantages to this? Surely the optimiser can imply this for itself? edit: I know it's evil syntax, but it's a quick and dirty example of legitimate legacy code +1 @Stu for cleaning it up A: You don't need to do this in today's database engines, but there was a time when things like that would give the query optimizer more hints as to possible index paths and thus speedier results. These days that entire syntax is going out anyway. A: This is filthy, evil legacy syntax. You write this as

Select *  -- Oh, and don't ever use *, either
From A
Inner Join B On A.ID = B.ID
Inner Join C On B.ID = C.ID

A: No, this syntax stems from the days before joins were in the language. Not sure of the problems associated with it, but there are definitely language constructs that are better supported for joining tables. A: If you look at it from a mathematical point of view, your examples should yield the same results. a = b = c So your first example would yield the same results as the second, so no need to do the extra work. A: I just want to say that this kind of joining is the devil's work. Just think about it; the conditions for joining and filtering get mixed together in the where statement. What happens when you need to join across 20 tables and filter on 15 values? Again, just my $.02 A: In Microsoft SQL the query plans for these two queries are identical - they are executed in the same way. A: This question is similar to this one here with a very in-depth explanation: SQL question from Joel Spolsky article The short answer is that the explicit declaration of the transitive property may speed the query up. This is because query optimization is not a trivial task and some SQL servers might have problems with it. A: That syntax has its uses though ... there are times when you find you need to join two tables on more than one field
{ "language": "en", "url": "https://stackoverflow.com/questions/11028", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: SQL Table Aliases - Good or Bad? What are the pros and cons of using table aliases in SQL? I personally try to avoid them, as I think they make the code less readable (especially when reading through large where/and statements), but I'd be interested in hearing any counter-points to this. When is it generally a good idea to use table aliases, and do you have any preferred formats? A: Good As it has been mentioned multiple times before, it is a good practice to prefix all column names to easily see which column belongs to which table - and aliases are shorter than full table names so the query is easier to read and thus understand. If you use a good aliasing scheme of course. And if you create or read the code of an application, which uses externally stored or dynamically generated table names, then without aliases it is really hard to tell at the first glance what all those "%s"es or other placeholders stand for. It is not an extreme case, for example many web apps allow to customize the table name prefix at installation time. A: Microsoft SQL's query optimiser benefits from using either fully qualified names or aliases. Personally I prefer aliases, and unless I have a lot of tables they tend to be single letter ones. --seems pretty readable to me ;-) select a.Text from Question q inner join Answer a on a.QuestionId = q.QuestionId There's also a practical limit on how long a Sql string can be executed - aliases make this limit easier to avoid. A: If I write a query myself (by typing into the editor and not using a designer) I always use aliases for the table name just so I only have to type the full table name once.I really hate reading queries generated by a designer with the full table name as a prefix to every column name. A: I suppose the only thing that really speaks against them is excessive abstraction. If you will have a good idea what the alias refers to (good naming helps; 'a', 'b', 'c' can be quite problematic especially when you're reading the statement months or years later), I see nothing wrong with aliasing. As others have said, joins require them if you're using the same table (or view) multiple times, but even outside that situation, an alias can serve to clarify a data source's purpose in a particular context. In the alias's name, try to answer why you are accessing particular data, not what the data is. A: I LOVE aliases!!!! I have done some tests using them vs. not and have seen some processing gains. My guess is the processing gains would be higher when you're dealing with larger datasets and complex nested queries than without. If I'm able to test this, I'll let you know. A: Table aliases are a necessary evil when dealing with highly normalized schemas. For example, and I'm not the architect on this DB so bear with me, it can take 7 joins in order to get a clean and complete record back which includes a person's name, address, phone number and company affiliation. Rather than the somewhat standard single character aliases, I tend to favor short word aliases so the above example's SQL ends up looking like: select person.FirstName ,person.LastName ,addr.StreetAddress ,addr.City ,addr.State ,addr.Zip ,phone.PhoneNumber ,company.CompanyName from tblPeople person left outer join tblAffiliations affl on affl.personID = person.personID left outer join tblCompany company on company.companyID = affl.companyID ... etc A: Well, there are some cases you must use them, like when you need to join to the same table twice in one query. 
It also depends on wether you have unique column names across tables. In our legacy database we have 3-letter prefixes for all columns, stemming from an abbreviated form from the table, simply because one ancient database system we were once compatible with didn't support table aliases all that well. If you have column names that occur in more than one table, specifying the table name as part of the column reference is a must, and thus a table alias will allow for a shorter syntax. A: You need them if you're going to join a table to itself, or if you use the column again in a subquery... A: Am I the only person here who really hates them? Generally, I don't use them unless I have to. I just really hate having to read something like select a.id, a.region, a.firstname, a.blah, b.yadda, b.huminahumina, c.crap from table toys as a inner join prices as b on a.blah = b.yadda inner join customers as c on c.crap = something else etc When I read SQL, I like to know exactly what I'm selecting when I read it; aliases actually confuse me more because I've got to slog through lines of columns before I actually get to the table name, which generally represents information about the data that the alias doesn't. Perhaps it's okay if you made the aliases, but I commonly read questions on StackOverflow with code that seems to use aliases for no good reason. (Additionally, sometimes, someone will create an alias in a statement and just not use it. Why?) I think that table aliases are used so much because a lot of people are averse to typing. I don't think that's a good excuse, though. That excuse is the reason we end up with terrible variable naming, terrible function acronyms, bad code...I would take the time to type out the full name. I'm a quick typer, though, so maybe that has something to do with it. (Maybe in the future, when I've got carpal tunnel, I'll reconsider my opinion on aliases. :P ) I especially hate running across table aliases in PHP code, where I believe there's absolutely no reason to have to do that - you've only got to type it once! I always use column qualifiers in my statements, but I'm not averse to typing a lot, so I will gladly type the full name multiple times. (Granted, I do abuse MySQL's tab completion.) Unless it's a situation where I have to use an alias (like some described in other answers), I find the extra layer of abstraction cumbersome and unnecessary. Edit: (Over a year later) I'm dealing with some stored procedures that use aliases (I did not write them and I'm new to this project), and they're kind of painful. I realize that the reason I don't like aliases is because of how they're defined. You know how it's generally good practice to declare variables at the top of your scope? (And usually at the beginning of a line?) Aliases in SQL don't follow this convention, which makes me grind my teeth. Thus, I have to search the entire code for a single alias to find out where it is (and what's frustrating is, I have to read through the logic before I find the alias declaration). If it weren't for that, I honestly might like the system better. If I ever write a stored procedure that someone else will have to deal with, I'm putting my alias definitions in a comment block at the beginning of the file, as a reference. I honestly can't understand how you guys don't go crazy without it. A: Aliases are great if you consider that my organization has table names like: SchemaName.DataPointName_SubPoint_Sub-SubPoint_Sub-Sub-SubPoint... 
My team uses a pretty standard set of abbreviations, so the guesswork is minimized. We'll have, say, ProgramInformationDataPoint shortened to pidp, and submissions to just sub. The good thing is that once you get going in this manner and people agree with it, it makes those HAYUGE files just a little smaller and easier to manage. At least for me, fewer characters to convey the same info seems to go a little easier on my brain. A: IMHO, it doesn't really matter with short table names that make sense. I have on occasion worked on databases where the table name could be something like VWRECOFLY or some other random string (dictated by company policy) that really represents users, so in that case I find aliases really help to make the code FAR more readable. (users.username makes a lot more sense than VWRECOFLY.username) A: I like long explicit table names (it's not uncommon for them to be more than 100 characters) because I use many tables and if the names aren't explicit, I might get confused as to what each table stores. So when I write a query, I tend to use shorter aliases that make sense within the scope of the query, and that makes the code much more readable. A: I always use aliases in my queries and it is part of the code guidebook in my company. First of all you need aliases or table names when there are columns with identical names in the joining tables. In my opinion the aliases improve readability in complex queries and allow me to see quickly the location of each column. We even use aliases with single table queries, because experience has shown that single table queries don't stay single table for long. A: I always use aliases, since to get proper performance on MSSQL you need to prefix with schema at all times. So you'll see a lot of Select Person.Name From dbo.Person As Person A: I always use aliases when writing queries. Generally I try and abbreviate the table name to 1 or 2 representative letters. So Users becomes u and debtor_transactions becomes dt etc... It saves on typing and still carries some meaning. The shorter names also make it more readable to me. A: If you do not use an alias, it's a bug in your code just waiting to happen.

SELECT Description  -- actually in a
FROM table_a a, table_b b
WHERE a.ID = b.ID

What happens when you do a little thing like add a column called Description to Table_B? That's right, you'll get an error. Adding a column doesn't need to break anything. I never see writing good code, bug free code, as a necessary evil. A: Aliases are required when joining tables with columns that have identical names.
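To illustrate the self-join case a couple of the answers mention, a query like this cannot be written at all without aliases, because both sides refer to the same table (the table and column names here are invented for the example):

SELECT e.Name AS Employee, m.Name AS Manager
FROM Employee e
INNER JOIN Employee m ON e.ManagerID = m.EmployeeID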
{ "language": "en", "url": "https://stackoverflow.com/questions/11043", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "45" }
Q: Refresh Excel VBA Function Results How can I get a user-defined function to re-evaluate itself based on changed data in the spreadsheet? I tried F9 and Shift+F9. The only thing that seems to work is editing the cell with the function call and then pressing Enter. A: Some more information on the F9 keyboard shortcuts for calculation in Excel * *F9 Recalculates all worksheets in all open workbooks *Shift+ F9 Recalculates the active worksheet *Ctrl+Alt+ F9 Recalculates all worksheets in all open workbooks (Full recalculation) *Shift + Ctrl+Alt+ F9 Rebuilds the dependency tree and does a full recalculation A: Okay, found this one myself. You can use Ctrl+Alt+F9 to accomplish this. A: You should use Application.Volatile in the top of your function: Function doubleMe(d) Application.Volatile doubleMe = d * 2 End Function It will then reevaluate whenever the workbook changes (if your calculation is set to automatic). A: If you include ALL references to the spreadsheet data in the UDF parameter list, Excel will recalculate your function whenever the referenced data changes: Public Function doubleMe(d As Variant) doubleMe = d * 2 End Function You can also use Application.Volatile, but this has the disadvantage of making your UDF always recalculate - even when it does not need to because the referenced data has not changed. Public Function doubleMe() Application.Volatile doubleMe = Worksheets("Fred").Range("A1") * 2 End Function A: To switch to Automatic: Application.Calculation = xlCalculationAutomatic To switch to Manual: Application.Calculation = xlCalculationManual A: This refreshes the calculation better than Range(A:B).Calculate: Public Sub UpdateMyFunctions() Dim myRange As Range Dim rng As Range ' Assume the functions are in this range A1:B10. Set myRange = ActiveSheet.Range("A1:B10") For Each rng In myRange rng.Formula = rng.Formula Next End Sub A: The Application.Volatile doesn't work for recalculating a formula with my own function inside. I use the following function: Application.CalculateFull A: I found it best to only update the calculation when a specific cell is changed. Here is an example VBA code to place in the "Worksheet" "Change" event: Private Sub Worksheet_Change(ByVal Target As Range) If Not Intersect(Target, Range("F3")) Is Nothing Then Application.CalculateFull End If End Sub A: Public Sub UpdateMyFunctions() Dim myRange As Range Dim rng As Range 'Considering The Functions are in Range A1:B10 Set myRange = ActiveSheet.Range("A1:B10") For Each rng In myRange rng.Formula = rng.Formula Next End Sub
{ "language": "en", "url": "https://stackoverflow.com/questions/11045", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "79" }
Q: How big would such a database be? I'm trying to figure out how big a certain database would be (it hasn't been created yet). I know how many rows and what the tables will be. Is there a feature in Oracle that will tell me the size of such a theoretical database? Is there a known math formula I can use? I know there is a feature to determine the size of an existing database, but I want to know how big it will be before I create it. A: You might try prototyping your design - create an initial version of the database and write some scripts (or use a tool) to populate the tables with a reasonable amount of data. Then you will know for sure how much space X rows take up. If it's too much, you can go back to the drawing board with your design. I know you want a figure before creating the database, but you'll never be able to account for everything that's going on with the physical data files under the hood. A: You can work it out from the sizes of the data types for the columns in a table, which gives you a rough estimate of the size of a row in that table. Then multiply that row size by the expected row count for each of your 1 to n tables and add the results up to get an estimate for the whole database. Long-handed, I know, but this is how I normally do it. A: To be accurate, this can get really complex. For example, this is how you do it on MS SQL Server: http://msdn.microsoft.com/en-us/library/aa933068(SQL.80).aspx A: You also need to include indexes in your estimates. I've seen systems where the indexes were as big as the data. The only way I would trust the answer is to do prototyping like Eric Z Beard suggests. Different database systems have different overhead, but they all have it. A: Having an exact size wasn't too important, so I went with littlegeek's method. I figured out what my tables and columns would be, and looked up the sizes of the data types, then did some good ol' multiplying.
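As a rough worked example of that multiply-it-out method (the byte counts are only ballpark figures - Oracle's NUMBER and VARCHAR2 storage depends on the actual values stored): a row holding a NUMBER(10) (roughly 6 bytes), a VARCHAR2(50) averaging 25 bytes and a DATE (7 bytes) comes to about 38 bytes; at 10 million rows that is on the order of 380 MB of raw data for that one table. Do the same for the other tables, then pad the total generously (30-50% or more) for indexes, block overhead and free space, since those can easily rival the data itself.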
{ "language": "en", "url": "https://stackoverflow.com/questions/11055", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How should I unit test a code-generator? This is a difficult and open-ended question I know, but I thought I'd throw it to the floor and see if anyone had any interesting suggestions. I have developed a code-generator that takes our python interface to our C++ code (generated via SWIG) and generates code needed to expose this as WebServices. When I developed this code I did it using TDD, but I've found my tests to be brittle as hell. Because each test essentially wanted to verify that for a given bit of input code (which happens to be a C++ header) I'd get a given bit of outputted code I wrote a small engine that reads test definitions from XML input files and generates test cases from these expectations. The problem is I dread going in to modify the code at all. That and the fact that the unit tests themselves are a: complex, and b: brittle. So I'm trying to think of alternative approaches to this problem, and it strikes me I'm perhaps tackling it the wrong way. Maybe I need to focus more on the outcome, IE: does the code I generate actually run and do what I want it to, rather than, does the code look the way I want it to. Has anyone got any experiences of something similar to this they would care to share? A: Recall that "unit testing" is only one kind of testing. You should be able to unit test the internal pieces of your code generator. What you're really looking at here is system level testing (a.k.a. regression testing). It's not just semantics... there are different mindsets, approaches, expectations, etc. It's certainly more work, but you probably need to bite the bullet and set up an end-to-end regression test suite: fixed C++ files -> SWIG interfaces -> python modules -> known output. You really want to check the known input (fixed C++ code) against expected output (what comes out of the final Python program). Checking the code generator results directly would be like diffing object files... A: I started writing up a summary of my experience with my own code generator, then went back and re-read your question and found you had already touched upon the same issues yourself, focus on the execution results instead of the code layout/look. Problem is, this is hard to test, the generated code might not be suited to actually run in the environment of the unit test system, and how do you encode the expected results? I've found that you need to break down the code generator into smaller pieces and unit test those. Unit testing a full code generator is more like integration testing than unit testing if you ask me. A: Yes, results are the ONLY thing that matters. The real chore is writing a framework that allows your generated code to run independently... spend your time there. A: If you are running on *nux you might consider dumping the unittest framework in favor of a bash script or makefile. on windows you might consider building a shell app/function that runs the generator and then uses the code (as another process) and unittest that. A third option would be to generate the code and then build an app from it that includes nothing but a unittest. Again you would need a shell script or whatnot to run this for each input. As to how to encode the expected behavior, it occurs to me that it could be done in much the same way as you would for the C++ code just using the generated interface rather than the C++ one. 
A: Just wanted to point out that you can still achieve fine-grained testing while verifying the results: you can test individual chunks of code by nesting them inside some setup and verification code: int x = 0; GENERATED_CODE assert(x == 100); Provided you have your generated code assembled from smaller chunks, and the chunks do not change frequently, you can exercise more conditions and test a little better, and hopefully avoid having all your tests break when you change specifics of one chunk. A: Unit testing is just that testing a specific unit. So if you are writing a specification for class A, it is ideal if class A does not have the real concrete versions of class B and C. Ok I noticed afterwards the tag for this question includes C++ / Python, but the principles are the same: public class A : InterfaceA { InterfaceB b; InterfaceC c; public A(InterfaceB b, InterfaceC c) { this._b = b; this._c = c; } public string SomeOperation(string input) { return this._b.SomeOtherOperation(input) + this._c.EvenAnotherOperation(input); } } Because the above System A injects interfaces to systems B and C, you can unit test just system A, without having real functionality being executed by any other system. This is unit testing. Here is a clever manner for approaching a System from creation to completion, with a different When specification for each piece of behaviour: public class When_system_A_has_some_operation_called_with_valid_input : SystemASpecification { private string _actualString; private string _expectedString; private string _input; private string _returnB; private string _returnC; [It] public void Should_return_the_expected_string() { _actualString.Should().Be.EqualTo(this._expectedString); } public override void GivenThat() { var randomGenerator = new RandomGenerator(); this._input = randomGenerator.Generate<string>(); this._returnB = randomGenerator.Generate<string>(); this._returnC = randomGenerator.Generate<string>(); Dep<InterfaceB>().Stub(b => b.SomeOtherOperation(_input)) .Return(this._returnB); Dep<InterfaceC>().Stub(c => c.EvenAnotherOperation(_input)) .Return(this._returnC); this._expectedString = this._returnB + this._returnC; } public override void WhenIRun() { this._actualString = Sut.SomeOperation(this._input); } } So in conclusion, a single unit / specification can have multiple behaviours, and the specification grows as you develop the unit / system; and if your system under test depends on other concrete systems within it, watch out. A: My recommendation would be to figure out a set of known input-output results, such as some simpler cases that you already have in place, and unit test the code that is produced. It's entirely possible that as you change the generator that the exact string that is produced may be slightly different... but what you really care is whether it is interpreted in the same way. Thus, if you test the results as you would test that code if it were your feature, you will find out if it succeeds in the ways you want. Basically, what you really want to know is whether your generator will produce what you expect without physically testing every possible combination (also: impossible). By ensuring that your generator is consistent in the ways you expect, you can feel better that the generator will succeed in ever-more-complex situations. In this way, you can also build up a suite of regression tests (unit tests that need to keep working correctly). This will help you make sure that changes to your generator aren't breaking other forms of code. 
When you encounter a bug that your unit tests didn't catch, you may want to include it to prevent similar breakage. A: I find that you need to test what you're generating more than how you generate it. In my case, the program generates many types of code (C#, HTML, SCSS, JS, etc.) that compile into a web application. The best way I've found to reduce regression bugs overall is to test the web application itself, not by testing the generator. Don't get me wrong, there are still unit tests checking out some of the generator code, but our biggest bang for our buck has been UI tests on the generated app itself. Since we're generating it we also generate a nice abstraction in JS we can use to programatically test the app. We followed some ideas outlined here: http://code.tutsplus.com/articles/maintainable-automated-ui-tests--net-35089 The great part is that it really tests your system end-to-end, from code generation out to what you're actually generating. Once a test fails, its easy to track it back to where the generator broke. It's pretty sweet. Good luck!
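Following the advice above to test what the generated code does rather than what it looks like, a small pytest-style sketch of such a regression test might look like this (the generator command line, the fixture header and the generated function name are all invented for illustration - adapt them to your own project):

import subprocess
from pathlib import Path

FIXTURES = Path(__file__).parent / "fixtures"

def test_generated_wrapper_behaviour(tmp_path):
    generated = tmp_path / "wrapper.py"
    # Run the generator on a small, known input header.
    subprocess.run(
        ["python", "generate_wrappers.py",
         str(FIXTURES / "example.h"), "-o", str(generated)],
        check=True,
    )
    # Execute the generated module and assert on its behaviour,
    # not on the exact text it happens to contain.
    namespace = {}
    exec(generated.read_text(), namespace)
    assert namespace["add"](2, 3) == 5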
{ "language": "en", "url": "https://stackoverflow.com/questions/11060", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29" }
Q: How can I determine the type of a blessed reference in Perl? In Perl, an object is just a reference to any of the basic Perl data types that has been blessed into a particular class. When you use the ref() function on an unblessed reference, you are told what data type the reference points to. However, when you call ref() on a blessed reference, you are returned the name of the package that reference has been blessed into. I want to know the actual underlying type of the blessed reference. How can I determine this? A: You probably shouldn't do this. The underlying type of an object is an implementation detail you shouldn't mess with. Why would you want to know this? A: Scalar::Util::reftype() is the cleanest solution. The Scalar::Util module was added to the Perl core in version 5.7 but is available for older versions (5.004 or later) from CPAN. You can also probe with UNIVERSAL::isa():

$x->isa('HASH')              # if $x is known to be an object
UNIVERSAL::isa($x, 'HASH')   # if $x might not be an object or reference

Obviously, you'd also have to check for ARRAY and SCALAR types. The UNIVERSAL module (which serves as the base class for all objects) has been part of the core since Perl 5.003. Another way -- easy but a little dirty -- is to stringify the reference. Assuming that the class hasn't overloaded stringification you'll get back something resembling Class=HASH(0x1234ABCD), which you can parse to extract the underlying data type:

my ($type) = ($object =~ /=(.+)\(0x[0-9a-f]+\)$/i);

A: And my first thought on this was: "Objects in Perl are always hash refs, so what the heck?" But Scalar::Util::reftype is the answer. Thanks for putting the question here. Here is a code snippet to prove this (in case it is of any use to anyone):

$> perl -e 'use strict; use warnings "all"; my $x = [1]; bless ($x, "ABC::Def"); use Data::Dumper; print Dumper $x; print ref($x) . "\n"; use Scalar::Util "reftype"; print reftype($x) . "\n"'

Output:
$VAR1 = bless( [ 1 ], 'ABC::Def' );
ABC::Def
ARRAY
{ "language": "en", "url": "https://stackoverflow.com/questions/11085", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: What is the best way to work around the fact that ALL Java bytes are signed? In Java, there is no such thing as an unsigned byte. Working with some low level code, occasionally you need to work with bytes that have unsigned values greater than 127, which causes Java to interpret them as a negative number due to the MSB being used for sign. What's a good way to work around this? (Saying don't use Java is not an option) A: It is actually possible to get rid of the if statement and the addition if you do it like this.

byte[] foobar = ..;
int value = (foobar[10] & 0xff);

This way Java doesn't interpret the byte as a negative number and flip the sign bit on the integer also. A: When reading any single value from the array, copy it into something like a short or an int and manually convert the negative number into the positive value it should be.

byte[] foobar = ..;
int value = foobar[10];
if (value < 0) value += 256; // Patch up the 'falsely' negative value

You can do a similar conversion when writing into the array. A: Using ints is generally better than using shorts because Java uses 32-bit values internally anyway (even for bytes, unless in an array), so using ints will avoid unnecessary conversion to/from short values in the bytecode. A: Probably your best bet is to use an integer rather than a byte. It has the room to allow for numbers greater than 128 without the overhead of having to create a special object to replace byte. This is also suggested by people smarter than me (everybody):
* http://www.darksleep.com/player/JavaAndUnsignedTypes.html
* http://www.jguru.com/faq/view.jsp?EID=13647
A: The best way to do bit manipulation/unsigned bytes is through using ints. Even though they are signed they have plenty of spare bits (32 total) to treat as an unsigned byte. Also, all of the mathematical operators will convert smaller fixed precision numbers to int. Example:

short a = 1;
short b = 2;
int c = a + b;          // the result is up-converted
short small = (short)c; // must cast to get it back to short

Because of this it is best to just stick with integer and mask it to get the bits that you are interested in. Example:

int a = 32;
int b = 128;
int foo = (a + b) & 255;

Here is some more info on Java primitive types http://mindprod.com/jgloss/primitive.html One last trivial note, there is one unsigned fixed precision number in Java. That is the char primitive. A: I know this is a very late response, but I came across this thread when trying to do the exact same thing. The issue is simply trying to determine if a Java byte is >127. The simple solution is:

if ((val & (byte)0x80) != 0) { ... }

If the real issue is >128 instead, just adding another condition to that if-statement will do the trick. A: I guess you could just use a short to store them. Not very efficient, but really the only option besides some herculean effort that I have seen.
{ "language": "en", "url": "https://stackoverflow.com/questions/11088", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "59" }
Q: What is the prefered style for single decision and action statements? In the case of languages that support single decision and action without brackets, such as the following example: if (var == true) doSomething(); What is the preferred way of writing this? Should brackets always be used, or should their usage be left as a preference of the individual developer? Additionally, does this practice depend on the size of the code block, such as in the following example: if (var == 1) doSomething(1); else if (var > 1 && var < 10) doSomething(2); else { validate(var); doSomething(var); } A: I tend to use braces at all times. You can get some subtle bugs where you started off with something like: if(something) DoOneThing(); else DoItDifferently(); and then decide to add another operation to the else clause and forget to wrap it in braces: if(something) DoOneThing(); else DoItDifferently(); AlwaysGetsCalled(); AlwaysGetsCalled() will always get called, and if you're sitting there at 3am wondering why your code is behaving all strange, something like that could elude you for quite some time. For this reason alone, I always use braces. A: My preference is to be consistent, e.g., if you use brackets on one block, use brackets all throughout even with just one statement: if (cond1) { SomeOperation(); Another(); } elseif (cond2) { DoSomething(); } else { DoNothing(); DoAnother(); } But if you have just a bunch of one liners: if (cond1) DoFirst(); elseif (cond2) DoSecond(); else DoElse(); Looks cleaner (if you don't mind the dummy method names ;) that way, but that's just me. This also applies to loop constructs and the like: foreach (var s as Something) if (s == someCondition) yield return SomeMethod(s); You should also consider that this is a convention that might be more suited to .NET (notice that Java peepz like to have their first curly brace in the same line as the if). A: Chalk this one to lack of experience, but during my seven-year stint as a code monkey I've never actually seen anyone make the mistake of not adding braces when adding code to a block that doesn't have braces. That's precisely zero times. And before the wisecrackers get to it, no, the reason wasn't "everyone always uses braces". So, an honest question -- I really would like to get actual replies instead of just downvotes: does that ever actually happen? (Edit: I've heard enough outsourcing horror stories to clarify a bit: does it ever actually happen to competent programmers?) A: It doesn't really matter, as long as you're consistent with it. There does seem to be a tendency to demand sameness within a single statement, i.e. if there's brackets in one branch, there's brackets everywhere. The Linux kernel coding standards, for one, mandate that. A: I would strongly advocate always using braces, even when they're optional. Why? Take this chunk of C++ code: if (var == 1) doSomething(); doSomethingElse(); Now, someone comes along who isn't really paying enough attention and decides that something extra needs to happen if (var == 1), so they do this: if (var == 1) doSomething(); doSomethingExtra(); doSomethingElse(); It's all still beautifully indented but it won't do what was intended. By always using braces, you're more likely to avoid this sort of bug. A: I personnally side with McConnell's explanation from Code Complete. Use them whenever you can. They enhance your code's readability and remove the few and scarce confusions that might occur. There is one thing that's more important though....Consistency. 
Which ever style you use,make sure you always do it the same way. Start writing stuff like: If A == true FunctA(); If B == "Test" { FunctB(); } You are bound to end up looking for an odd bug where the compiler won't understand what you were trying to do and that will be hard to find. Basically find the one you are comfortable writing everytime and stick to it. I do believe in using the block delimeters('{', '}') as much as possible is the way to go. I don't want to start a question inside another, but there is something related to this that I want to mention to get your mental juices going. One the decision of using the brackets has been made. Where do you put the opening bracket? On the same line as the statement or underneath. Indented brackets or not? If A == false { //calls and whatnot } //or If B == "BlaBla" { //calls and whatnot } //or If C == B { //calls and whatnot } Please don't answer to this since this would be a new question. If I see an interest in this I will open a new question your input. A: There isn't really a right answer. This is what coding standards within the company are for. If you can keep it consistent across the whole company then it will be easy to read. I personally like if ( a == b) { doSomething(); } else { doSomething(); } but this is a holy war. A: I recommend if(a==b) { doSomething(); } because I find it far easier to do it up-front than to try to remember to add the braces when I add a second statement to to success condition... if(a==b) doSomething(); doSomethingElse(); is very different to if(a==b) { doSomething(); doSomethingElse(); } see Joel's article for further details A: I've always used brackets at all times except for the case where I'm checking a variable for NULL before freeing it, like is necessary in C In that case, I make sure it's clear that it's a single statement by keeping everything on one line, like this: if (aString) free(aString); A: There is no right or wrong way to write the above statement. There are plenty of accepted coding styles. However, for me, I prefer keeping the coding style consist throughout the entire project. ie. If the project is using K&R style, you should use K&R. A: Ruby nicely obviates one issue in the discussion. The standard for a one-liner is: do_something if (a == b) and for a multi-line: if (a == b) do_something do_something_else end This allows concise one-line statements, but it forces you to reorganize the statement if you go from single- to multi-line. This is not (yet) available in Java, nor in many other languages, AFAIK. A: As others have mentioned, doing an if statement in two lines without braces can lead to confusion: if (a == b) DoSomething(); DoSomethingElse(); <-- outside if statement so I place it on a single line if I can do so without hurting readability: if (a == b) DoSomething(); and at all other times I use braces. Ternary operators are a little different. Most of the time I do them on one line: var c = (a == b) ? DoSomething() : DoSomethingElse(); but sometimes the statements have nested function calls, or lambda expressions which make a one-line statement difficult to parse visually, so I prefer something like this: var c = (a == b) ? AReallyReallyLongFunctionName() : AnotherReallyReallyLongFunctionOrStatement(); Still more concise than an if/else block but easy to see what's going on. 
A: Sun's Code Conventions for the Java programming Language has this to say: The if-else class of statements should have the following form: if (condition) { statements; } if (condition) { statements; } else { statements; } if (condition) { statements; } else if (condition) { statements; } else { statements; } A: Our boss makes us put { } after a decision statement no matter what, even if it's a single statement. It's really annoying to add two extra lines. The only exception is ternary operators. I guess it's a good thing I have my code monitor in portrait orientation at 1200x1600. A: I tend to agree with Joel Spolsky on that one with that article (Making Wrong Code Look Wrong) with the following code example : if (i != 0) bar(i); foo(i); Foo is now unconditionnal. Wich is real bad! I always use brackets for decision statements. It helps code maintainability and it makes the code less bug prone. A: I prefer if (cond) { //statement } even with only a single statement. If you were going to write something once, had no doubts that it worked, and never planned on another coder ever looking at that code, go ahead and use whatever format you want. But, what does the extra bracketing really cost you? Less time in the course of a year than it takes to type up this post. Yes, I like to indent my brackets to the level of the block, too. Python is nice in that the indentation defines the block. The question is moot in a language like that. A: I used to follow the "use curly braces always" line like an apparatchik. However, I've modified my style to allow for omitting them on single line conditional expressions: if(!ok)return; For any multistatement scenario though I'm still of the opinion that braces should be mandatory: if(!ok){ do(); that(); thing(); } A: I use curly braces around every statement if and only if at least one of them requires it. A: In Perl if you are doing a simple test, sometime you will write it in this form: do_something if condition; do_something unless condition; Which can be really useful to check the arguments at the start of a subroutine. sub test{ my($self,@args) = @_; return undef unless defined $self; # rest of code goes here } A: The golden rule is that, when working in an existing project, follow those coding standards. When I'm at home, I have two forms. The first is the single line: if (condition) doThis(); and the second is for multiple lines: if (condition) { doThis(); }
{ "language": "en", "url": "https://stackoverflow.com/questions/11099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Best Wiki for Mobile Users Most wiki software I've presents lots of "features" on their pages. This is fine for desktop users, but is annoying when using an iPhone or other mobile device. I'd prefer pages that just had the content, along with maybe an Edit button and a Search button. The editors are also often too fancy for mobile users; a simple multi-line edit field would be better for mobile users than a bunch of formatting controls. What is a good wiki package for mobile users? A: I find the wiki in Fogbugz to be very good using it with the iPhone. A: It is very easy to override the mediawiki skins with your own. You could remove whatever you want to without much of a problem. A: W2 by Steven Frank (of Panic, makers of Transmit) is awesome. It's totally stripped down for iPhone--just the essentials. It supports markdown and basic Wiki-style formatting. https://github.com/panicsteve/w2wiki That page has a link to a demo site.
{ "language": "en", "url": "https://stackoverflow.com/questions/11112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: In C++/Windows how do I get the network name of the computer I'm on? In a C++ Windows (XP and NT, if it makes a difference) application I'm working on, I need to get the network name associated with the computer the code is executing on, so that I can convert local filenames from C:\filename.ext to \\network_name\C$\filename.ext. How would I do this? Alternatively, if there's a function that will just do the conversion I described, that would be even better. I looked into WNetGetUniversalName, but that doesn't seem to work with local (C drive) files. A: There are more than one alternatives: a. Use Win32's GetComputerName() as suggested by Stu. Example: http://www.techbytes.ca/techbyte97.html OR b. Use the function gethostname() under Winsock. This function is cross platform and may help if your app is going to be run on other platforms besides Windows. MSDN Reference: http://msdn.microsoft.com/en-us/library/ms738527(VS.85).aspx OR c. Use the function getaddrinfo(). MSDN reference: http://msdn.microsoft.com/en-us/library/ms738520(VS.85).aspx A: You'll want Win32's GetComputerName: http://msdn.microsoft.com/en-us/library/ms724295(VS.85).aspx A: I agree with Pascal on using winsock's gethostname() function. Here you go: #include <winsock2.h> //of course this is the way to go on windows only #pragma comment(lib, "Ws2_32.lib") void GetHostName(std::string& host_name) { WSAData wsa_data; int ret_code; char buf[MAX_PATH]; WSAStartup(MAKEWORD(1, 1), &wsa_data); ret_code = gethostname(buf, MAX_PATH); if (ret_code == SOCKET_ERROR) host_name = "unknown"; else host_name = buf; WSACleanup(); } A: If you want only the name of the local computer (NetBIOS) use GetComputerName function. It retrives only local computer name that is established at system startup, when the system reads it from the registry. BOOL WINAPI GetComputerName( _Out_ LPTSTR lpBuffer, _Inout_ LPDWORD lpnSize ); More about GetComputerName If you want to get DNS host name, DNS domain name, or the fully qualified DNS name call the GetComputerNameEx function. BOOL WINAPI GetComputerNameEx( _In_ COMPUTER_NAME_FORMAT NameType, _Out_ LPTSTR lpBuffer, _Inout_ LPDWORD lpnSize ); More about GetComputerNameEx
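Coming back to the original goal of turning C:\filename.ext into \\network_name\C$\filename.ext, here is a minimal sketch that combines GetComputerName with a bit of string splicing (error handling is kept to a minimum, the function name is made up for illustration, and it assumes the administrative C$-style share is actually reachable):

#include <windows.h>
#include <string>

std::string ToUncPath(const std::string& localPath) // e.g. "C:\\filename.ext"
{
    char name[MAX_COMPUTERNAME_LENGTH + 1];
    DWORD size = sizeof(name);
    if (!GetComputerNameA(name, &size))
        return localPath; // fall back to the local path on failure

    // "C:\filename.ext" -> "\\HOST\C$\filename.ext"
    std::string unc = "\\\\";
    unc += name;
    unc += '\\';
    unc += localPath[0];        // drive letter
    unc += "$";
    unc += localPath.substr(2); // keep everything after "C:"
    return unc;
}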
{ "language": "en", "url": "https://stackoverflow.com/questions/11127", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How to run remote shell scripts from ASP pages? I need to create an ASP page (classic, not ASP.NET) which runs remote shell scripts on a UNIX server, then captures the output into variables in VBScript within the page itself. I have never done ASP or VBScipt before. I have tried to google this stuff, but all I find are references to remote server side scripting, nothing concrete. I could really use: * *An elementary example of how this could be done. *Any other better alternatives to achieve this in a secure manner. Are there any freeware/open source alternatives to these libraries? Any examples? A: If the shell scripts are normally run on a telnet session then you could screen scrape and parse the responses. There are commercial COM components out there such as the Dart telnet library: http://www.dart.com/pttel.aspx that would let you do this. Either that or you could roll your own using AspSock http://www.15seconds.com/component/pg000300.htm A: @Pascal, sadly I'm not aware of any F/OSS alternatives. We usually just buy in these types of libraries provided that they're not hugely expensive, and more often than not the cost is built into the customer's overall project cost. If you had .NET on the server, you could build a COM wrapped component to do the heavy lifting around System.Net.Sockets.TcpClient. Just a thought.
{ "language": "en", "url": "https://stackoverflow.com/questions/11135", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: ASP.NET Caching Recently I have been investigating the possibilities of caching in ASP.NET. I rolled my own "Cache", because I didn't know any better, it looked a bit like this: public class DataManager { private static DataManager s_instance; public static DataManager GetInstance() { } private Data[] m_myData; private DataTime m_cacheTime; public Data[] GetData() { TimeSpan span = DateTime.Now.Substract(m_cacheTime); if(span.TotalSeconds > 10) { // Do SQL to get data m_myData = data; m_cacheTime = DateTime.Now; return m_myData; } else { return m_myData; } } } So the values are stored for a while in a singleton, and when the time expires, the values are renewed. If time has not expired, and a request for the data is done, the stored values in the field are returned. What are the benefits over using the real method (http://msdn.microsoft.com/en-us/library/aa478965.aspx) instead of this? A: I think the maxim "let the computer do it; it's smarter than you" applies here. Just like memory management and other complicated things, the computer is a lot more informed about what it's doing than your are; consequently, able to get more performance than you are. Microsoft has had a team of engineers working on it and they've probably managed to squeeze much more performance out of the system than would be possible for you to. It's also likely that ASP.NET's built-in caching operates at a different level (which is inaccessible to your application), making it much faster. A: The ASP.NET caching mechanism has been around for a while, so it's stable and well understood. There are lots of resources out there to help you make the most of it. Rolling your own might be the right solution, depending on your requirements. The hard part about caching is choosing what is safe to cache, and when. For applications in which data changes frequently, you can introduce some hard to troubleshoot bugs with caching, so be careful. A: Caching in ASP.NET is feature rich and you can configure caching in quite a granular way. In your case (data caching) one of the features you're missing out on is the ability to invalidate and refresh the cache if data on the SQL server is updated in some way (SQL Cache Dependency). http://msdn.microsoft.com/en-us/library/ms178604.aspx
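For comparison, here is roughly what the hand-rolled ten-second cache looks like when rewritten against the built-in ASP.NET cache; the cache key and the LoadDataFromSql() call are placeholders for your own code:
// requires System.Web and System.Web.Caching
public static Data[] GetData()
{
    Data[] cached = HttpRuntime.Cache["MyData"] as Data[];
    if (cached == null)
    {
        cached = LoadDataFromSql();   // your "Do SQL to get data" step
        HttpRuntime.Cache.Insert(
            "MyData",
            cached,
            null,                                   // or a SqlCacheDependency for invalidation
            DateTime.UtcNow.AddSeconds(10),         // absolute expiration
            System.Web.Caching.Cache.NoSlidingExpiration);
    }
    return cached;
}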
{ "language": "en", "url": "https://stackoverflow.com/questions/11141", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Drawing Library for Ruby I am trying to code a flowchart generator for a language using Ruby. I wanted to know if there were any libraries that I could use to draw various shapes for the various flowchart elements and write out text to those shapes. I would really prefer not having to write code for drawing basic shapes, if I can help it. Can someone could point me to some reference documentation with examples of using that library? A: It sounds like you're going to be limited mainly by the capabilities of whatever user agent you're building for; if this is a web project, drawing capabilities are going to be dependent on the browser. Since Ruby is running server-side, you would at minimum need some JavaScript to allow dragging/zooming, etc. There's plenty of examples of JavaScript being used for vector drawing (just google "javascript graphics library"), but all require coding, and I haven't seen any library that abstracts this elegantly. ImageMagick has a Ruby binding called RMagick (sometimes known by other names, depending on the repository). (Link) I haven't used it myself, but I believe it will do what you're looking for. You will need to do some coding, but it's along the lines of draw.rectangle(x1, y1, x2, y2) draw.polygon(x1, y1,...,xN, yN) A: The simple answer is that (at time of writing) what you want almost certainly didn't exist. Sorry! If you're Windows-based then I'd look for a product in the .NET space, where I expect you'd find something. You're probably going to have to pay real money though. I suppose, if you're brave, you may be able to talk to it using IronRuby. From a non-MS environment I would be hoping for a Java-based solution. As already mentioned, within the Web world, you're going to be hoping some something JavaScript-based, or probably more likely, something from the Flash (or maybe even Silverlight?) world. Actually, if you want to stay in Ruby and don't mind some risk, Silverlight might be a way to go - assuming the DLR stuff actually works (no experience here). Ruby - much as I love it - hardly has the most mature GUI structure around it. Update: Since writing this (approaching 5 years ago) the situation has improved - a little. While I'm still not aware of any specific Ruby graph-drawing libraries, there is improved support for Graphviz here (Github) or by gem install ruby-graphviz. I've found that for simple graphs, it's quite possible to write the script directly from Ruby. A: Write up your flowchart as a directed or undirected graph in Graphviz. Graphviz has a language, dot that makes it easy to generate graphs. Just generate the dot file, run it through Graphiviz, and you get your image. graph { A -- B -- C; B -- D; C -- D [constraint=false]; } renders as digraph { A [label="start"]; B [label="eat"]; C [label="drink"]; D [label="be merry"]; A -> B -> C; C -> D [constraint=false]; B -> D [ arrowhead=none, arrowtail=normal]; // reverse this edge } renders as You can control node shapes and much more in Graphviz. A: I am totally sure, but check out cairo bindings for ruby. Pango for text. I am investigating then currently and came across this page.
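A minimal ruby-graphviz sketch of a flowchart, assuming the gem and the Graphviz binaries are installed (older releases of the gem spell the methods add_node/add_edge rather than add_nodes/add_edges):
require 'graphviz'

g = GraphViz.new(:flow, :type => :digraph)

start_node = g.add_nodes("start", :label => "Start",       :shape => "ellipse")
decision   = g.add_nodes("check", :label => "x > 0 ?",     :shape => "diamond")
yes_branch = g.add_nodes("yes",   :label => "print 'pos'", :shape => "box")
no_branch  = g.add_nodes("no",    :label => "print 'neg'", :shape => "box")

g.add_edges(start_node, decision)
g.add_edges(decision, yes_branch, :label => "true")
g.add_edges(decision, no_branch,  :label => "false")

g.output(:png => "flowchart.png")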
{ "language": "en", "url": "https://stackoverflow.com/questions/11145", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: How do I create a workflow instance reliably based on an external event? a little new to the windows workflow stuff so go easy :) I wish to design a workflow host environment that has high availability - a minimum of 2 WF runtime hosts on separate hardware both pointing to the same persistance or tracking SQL database. I am looking for a pattern whereby I can asynchronously create new workflow instances based on some external event (i.e. some piece of data is updated in DB by a different application). For each event I need to create exactly one workflow instance and doesn't matter which host that instance is created on. There is also some flexibility regarding the duration of time between the event and when the workflow instance is actually created. One solution I am considering is having a WCF interface on the WF hosts and placing them behind some sort of load balancer. It would then be up to whatever part of the system that is firing the "event" to make the WCF call. I'm not really happy with this because if both\all WF hosts are down, or otherwise unavailable, the event could be "lost". Also, I won't be able manage load the way I would like to. I envisage a situation where there may be lots of events in a small period of time, but it's perfectly acceptable to handle those events some time later. So I reckon I need to persist the events somehow and decouple the event creation from the event handling. Is putting these events into MSMQ, or a simple event table in SQL Server, and having the WF host just poll the queue periodically a viable solution? Polling seems to be a such a dirty word though... Would NServiceBus and durable messaging be useful here? Any insights would be much appreciated. Addendum The database will be clustered with shared fiber channel storage. The network will also be redundant. In order for WF runtime instances to have fail-over they must point at a common persistence service, which in this case is a SQL backend. It's high availability, not Total Availabilty :) MSDN article on WF Reliability and High Availabilty Also, each instance of the WF runtime must be running exactly the same bits, so upgrading will require taking them all down at the same time. I like the idea of being able to do that, if required, without taking the whole system down. A: If you use a WCF service with a netMsmqBinding, you can receive queued messages without having to poll. Messages will wait if there is no service running to pick them up. You would want to make sure to use a clustered queue for reliability in case the main queuing machine goes down. Also be aware when upgrading that you can't resuscitate instances from an old version of the service. So to upgrade long running workflows, you need to stop them from receiving new requests and wait until all instances are finished before changing the bits, or the old instances will be stuck in your persistence store forever. A: I would go with MSMQ/event table. Polling is only dirty if you do it wrong. One thing to keep in mind: you say you want multiple WF servers for high availability, but both of them use the same SQL backend? High availability only works if you remove all single points of failure, not just some of them. A: This is how I have solved it. I'm using NServiceBus and with each WF runtime host pointing to the same messagebus (using MSMQ as a transport). NServiceBus supports transactional reads off the message bus and rollback. 
If a message is taken off the bus but the process terminates before the message is fully handled, it remains on the queue and a different runtime host will pick it up. In order to have WF runtime hosts running on separate machines, the message bus/queue will have to reside on Windows Server 2008 (MSMQ 4.0) or later, as earlier versions of MSMQ don't support remote transactional reads. Note also that in order to perform a remote transactional read, the machine performing the read will also need to have MSMQ 4.0 installed (i.e. Windows Server 2008).
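For the netMsmqBinding suggestion above, a minimal service-side configuration sketch looks roughly like this (the queue, service and contract names are placeholders); exactlyOnce and durable give you the transactional, persistent delivery being discussed here:
<system.serviceModel>
  <services>
    <service name="WorkflowHost.EventService">
      <endpoint address="net.msmq://localhost/private/WorkflowEvents"
                binding="netMsmqBinding"
                bindingConfiguration="TransactionalMsmq"
                contract="WorkflowHost.IEventService" />
    </service>
  </services>
  <bindings>
    <netMsmqBinding>
      <binding name="TransactionalMsmq" exactlyOnce="true" durable="true" />
    </netMsmqBinding>
  </bindings>
</system.serviceModel>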
{ "language": "en", "url": "https://stackoverflow.com/questions/11152", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How can I conditionally apply a Linq operator? We're working on a Log Viewer. The use will have the option to filter by user, severity, etc. In the Sql days I'd add to the query string, but I want to do it with Linq. How can I conditionally add where-clauses? A: Doing this: bool lastNameSearch = true/false; // depending if they want to search by last name, having this in the where statement: where (lastNameSearch && name.LastNameSearch == "smith") means that when the final query is created, if lastNameSearch is false the query will completely omit any SQL for the last name search. A: Another option would be to use something like the PredicateBuilder discussed here. It allows you to write code like the following: var newKids = Product.ContainsInDescription ("BlackBerry", "iPhone"); var classics = Product.ContainsInDescription ("Nokia", "Ericsson") .And (Product.IsSelling()); var query = from p in Data.Products.Where (newKids.Or (classics)) select p; Note that I've only got this to work with Linq 2 SQL. EntityFramework does not implement Expression.Invoke, which is required for this method to work. I have a question regarding this issue here. A: If you need to filter base on a List / Array use the following: public List<Data> GetData(List<string> Numbers, List<string> Letters) { if (Numbers == null) Numbers = new List<string>(); if (Letters == null) Letters = new List<string>(); var q = from d in database.table where (Numbers.Count == 0 || Numbers.Contains(d.Number)) where (Letters.Count == 0 || Letters.Contains(d.Letter)) select new Data { Number = d.Number, Letter = d.Letter, }; return q.ToList(); } A: I ended using an answer similar to Daren's, but with an IQueryable interface: IQueryable<Log> matches = m_Locator.Logs; // Users filter if (usersFilter) matches = matches.Where(l => l.UserName == comboBoxUsers.Text); // Severity filter if (severityFilter) matches = matches.Where(l => l.Severity == comboBoxSeverity.Text); Logs = (from log in matches orderby log.EventTime descending select log).ToList(); That builds up the query before hitting the database. The command won't run until .ToList() at the end. A: I solved this with an extension method to allow LINQ to be conditionally enabled in the middle of a fluent expression. This removes the need to break up the expression with if statements. .If() extension method: public static IQueryable<TSource> If<TSource>( this IQueryable<TSource> source, bool condition, Func<IQueryable<TSource>, IQueryable<TSource>> branch) { return condition ? branch(source) : source; } This allows you to do this: return context.Logs .If(filterBySeverity, q => q.Where(p => p.Severity == severity)) .If(filterByUser, q => q.Where(p => p.User == user)) .ToList(); Here's also an IEnumerable<T> version which will handle most other LINQ expressions: public static IEnumerable<TSource> If<TSource>( this IEnumerable<TSource> source, bool condition, Func<IEnumerable<TSource>, IEnumerable<TSource>> branch) { return condition ? branch(source) : source; } A: if you want to only filter if certain criteria is passed, do something like this var logs = from log in context.Logs select log; if (filterBySeverity) logs = logs.Where(p => p.Severity == severity); if (filterByUser) logs = logs.Where(p => p.User == user); Doing so this way will allow your Expression tree to be exactly what you want. That way the SQL created will be exactly what you need and nothing less. A: When it comes to conditional linq, I am very fond of the filters and pipes pattern. 
http://blog.wekeroad.com/mvc-storefront/mvcstore-part-3/ Basically you create an extension method for each filter case that takes in the IQueryable and a parameter. public static IQueryable<Type> HasID(this IQueryable<Type> query, long? id) { return id.HasValue ? query.Where(o => o.ID.Equals(id.Value)) : query; } A: It isn't the prettiest thing but you can use a lambda expression and pass your conditions optionally. In TSQL I do a lot of the following to make parameters optional: WHERE Field = @FieldVar OR @FieldVar IS NULL You could duplicate the same style with the following lambda (an example of checking authentication): MyDataContext db = new MyDataContext(); void RunQuery(string param1, string param2, int? param3){ Func<User, bool> checkUser = user => ((param1.Length > 0)? user.Param1 == param1 : 1 == 1) && ((param2.Length > 0)? user.Param2 == param2 : 1 == 1) && ((param3 != null)? user.Param3 == param3 : 1 == 1); User foundUser = db.Users.SingleOrDefault(checkUser); } A: I had a similar requirement recently and eventually found this in the MSDN. CSharp Samples for Visual Studio 2008 The classes included in the DynamicQuery sample of the download allow you to create dynamic queries at runtime in the following format: var query = db.Customers. Where("City = @0 and Orders.Count >= @1", "London", 10). OrderBy("CompanyName"). Select("new(CompanyName as Name, Phone)"); Using this you can build a query string dynamically at runtime and pass it into the Where() method: string dynamicQueryString = "City = \"London\" and Orders.Count >= 10"; var q = from c in db.Customers.Where(dynamicQueryString, null) orderby c.CompanyName select c; A: You can create and use this extension method public static IQueryable<TSource> WhereIf<TSource>(this IQueryable<TSource> source, bool isToExecute, Expression<Func<TSource, bool>> predicate) { return isToExecute ? source.Where(predicate) : source; } A: Just use C#'s && operator: var items = dc.Users.Where(l => l.Date == DateTime.Today && l.Severity == "Critical") Edit: Ah, need to read more carefully. You wanted to know how to conditionally add additional clauses. In that case, I have no idea. :) What I'd probably do is just prepare several queries, and execute the right one, depending on what I ended up needing. A: You could use an external method: var results = from rec in GetSomeRecs() where ConditionalCheck(rec) select rec; ... bool ConditionalCheck( typeofRec input ) { ... } This would work, but can't be broken down into expression trees, which means Linq to SQL would run the check code against every record. Alternatively: var results = from rec in GetSomeRecs() where (!filterBySeverity || rec.Severity == severity) && (!filterByUser || rec.User == user) select rec; That might work in expression trees, meaning Linq to SQL would be optimised. A: Well, what I thought was you could put the filter conditions into a generic list of Predicates: var list = new List<string> { "me", "you", "meyou", "mow" }; var predicates = new List<Predicate<string>>(); predicates.Add(i => i.Contains("me")); predicates.Add(i => i.EndsWith("w")); var results = new List<string>(); foreach (var p in predicates) results.AddRange(from i in list where p.Invoke(i) select i); That results in a list containing "me", "meyou", and "mow". You could optimize that by doing the foreach with the predicates in a totally different function that ORs all the predicates.
{ "language": "en", "url": "https://stackoverflow.com/questions/11194", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "101" }
Q: .NET Framework dependency When developing a desktop application in .NET, is it possible to not require the .NET Framework? Is developing software in .NET a preferred way to develop desktop applications? What is the most used programming language that software companies use to develop desktop applications? Is the requirement of the .NET Framework just assumed based on the Windows OS you have installed hence why they list Windows OS version requirements? A: You can't run a .Net app without the .Net framework. The framework takes care of some of the more tedious background tasks so you couldn't run the app without the framework. A: On the Windows platform using .NET is the preferred way to develop desktop applications. The WinForms model of .NET is one way to develop traditional or thick client apps, with Windows Presentation Foundation of .NET being the latest technology direction from MS. A: You can still develop applications for the windows desktop using C/C++, eliminating the requirement to the .NET framework, but you'll need to make sure the necessary libraries are already on the system or installed. The nice thing about the .NET framework is that Windows XP SP2 and Vista has the 3.0 framework runtime installed by default. In a lot of ways, this is Microsoft's "development standard" and has been that way for a while. This allows you to not worry about having a bunch of libraries tacked onto your application. If you're sticking to all of the .NET provided libraries, you wind up only have to worry about deploying your executable, which is a big headache reliever. When you have a bunch of libraries you have to deploy as well, then you start to run into hassles when you write updates, because you have to make sure those updates are pushed out in your existing installer and to all the existing installed apps out there. As for "preferred", that always tends to ruffle feathers at times, but there are more and more .NET developers wanted for the web and the desktop at the job hunt sites I tend to visit. 8^D EDIT: Many thanks to Orion for pointing out my confusion on the frameworks. You get 3.0 "out the gate if you're on XP SP2 or Vista. Everything else is going to require a simple download or run of Windows Update. A: I guess what I'm trying to say is that when I look at system requirements for certain software I rarely ever see the .NET Framework as being a requirement. So, I always wonder how they get by without it being a requirement (if they developed the software in .NET). So, I just assume that most commercial software is not written in .NET so that's why I'm asking this question. Hope that cleared some things up. A: Mono Has a Windows release, if you absolutely have to avoid dependency on .NET. Any way you look at it, though, you are going to need a .NET compatible runtime on any computer that your application is running on. So if you want to completely avoid .NET, you will probably have to distribute the Mono runtime along with your application. A: The best practice for .NET application distribution is that the installer is somehow bootstrapped with the .NET Redistributable installer for the required framework, so that if the required framework is not yet installed (say, you need 3.5 in Windows XP) then the installer will just put it in. The .NET Runtime is small enough an installation that this feasible (it's around 24MB for .NET 2.0, haven't checked how big .NET 3.5 is). 
A: I guess what I'm trying to say is that when I look at system requirements for certain software I rarely ever see the .NET Framework as being a requirement. So, I always wonder how they get by without it being a requirement (if they developed the software in .NET). So, I just assume that most commercial software is not written in .NET so that's why I'm asking this question. Hope that cleared some things up. I don't have any numbers, but I'm going to guess that since the majority of folks out there are running XP and Vista on their desktops, listing the .NET framework is moot, especially if they are targeting the 2.0 framework in the application itself. Back in the day, how many applications did you see that said "requires vbrun50.dll" or something to that regard since it was put into the Windows installs by default? Plus it is a little less "scary" for those that aren't terribly computer saavy. All they want to do is download, install, and run the app. A couple of the apps I have out there require the 2.0 framework and I do get some folks asking what is that and how do I get it and does it cost me anything? The typical answer I give them is "If you're running XP or Vista, there's nothing to worry about" and they seem to like that. A: I think if you were able to do something like staticly link the .NET framework so you didn't have to develop it then you would be in breach of the EULA that microsoft supplies! It's the price we have to pay for having such a rich developer experience! It's worth it when you consider the difficulty of going back to MFC programming! A: Remotesoft offers a linker - $1250 for a single developer license: http://www.remotesoft.com/linker/index.html If your application will run on Mono (and Mono's Winform desktop support is pretty good now), you can package your app as a Mono bundle, which is a single executable. There are two options - the default includes the runtime but doesn't static link to it, the other staticly links you to the Mono runtime. The reason you might not want to static link is that it would put your application under the LGPL; bundles that aren't static linked (still just a single exe) don't have that requirement. http://www.mono-project.com/Mono:Runtime#Bundles A: You might consider using "ClickOnce Deployment", which makes it very easy to add bootstrapping the .Net 2.0, 3.0, and/or 3.5 redistributable installers into your application. Just click a checkbox in your project's properties and your installer will automatically detect whether the pre-requisite framework has been installed and will install it if not. It's not suitable for every situation but if you can take advantage of it it can be pretty slick. A: I did some programming in .NET (using C#) and I realized that often times I craved more control over a number of the Controls. These required knowledge that extended beyond the .NET framework. For example when I was working with the WebBrowser control to produce an automated testing tool for web applications, I realized that there are certain situations which required event handlers from the lower-level axWebBrowser ActiveX control and documentation was scarce/code samples incorporated a lot of COM Interop concepts. So perhaps having some knowledge on COM could come in useful? A: I guess what I'm trying to say is that when I look at system requirements for certain software I rarely ever see the .NET Framework as being a requirement. So, I always wonder how they get by without it being a requirement (if they developed the software in .NET). 
So, I just assume that most commercial software is not written in .NET so that's why I'm asking this question. Hope that cleared some things up. Many applications developed with C++/MFC for Windows desktops require a specific version of the MFC runtime DLLs even though it may not be explicitly listed as a requirement. I believe the same is becoming true with applications requiring .NET. For example, the application I work on ships with redistributable files for both .NET and the particular version of MFC that we require as well as a number of other required components. Our install program will install any components not currently installed on the users system. Every release over the past several years has used more .NET code than the previous release. I do not think it is accurate to assume that most commercial software does not use .NET just because its not listed as a requirement. I don't think you can accurately assume anything from that. A: The nice thing about the .NET framework is that Windows XP has the 2.0 framework runtime library installed by default Since when? I've had to tell lots of our windows XP users to install it. Yes you can pull it down through windows update, but I'm pretty confident it's an optional install and not something that happens automatically. and Vista has 3.5 installed No, it has 3.0 installed. You get WPF, but you don't get linq A: Yes, you can construct an application built in dot NET without using the framework. You can use a program like ESS dotNET FuZe to incorporate any dll, including framework dll's into the application. The resulting EXE does not need a framework anymore. Please goto this link: http://essaver.net/fuze.html to take a look at FuZe. A: It is possible not to require the .NET Framework; there are some companies that sell (for thousands of dollars, mind you) solutions that will allow you to do this. These are complete hacks, however, and not supported by Microsoft. How you develop desktop applications (ie, using .NET or not) depends on your requirement. There isn't a preferred way. Most used language is probably C#.
{ "language": "en", "url": "https://stackoverflow.com/questions/11199", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: T-Sql date format for seconds since last epoch / formatting for sqlite input I'm guessing it needs to be something like: CONVERT(CHAR(24), lastModified, 101) However I'm not sure of the right value for the third parameter. Thanks! Well I'm trying to write a script to copy my sql server db to a sqlite file, which gets downloaded to an air app, which then syncs the data to another sqlite file. I'm having a ton of trouble with dates. If I select a date in air and try to insert it, it fails because it's not in the right format... even if it was a valid date to begin with. I figured I'd try to experiment with the unix time since that's the only thing thats worked so far. I am considering just leaving them as varchar because I don't sort by them anyway. A: Last epoch is when 1970 GMT? SELECT DATEDIFF(s,'19700101 05:00:00:000',lastModified) See also Epoch Date A: sqlite> select datetime(); 2011-01-27 19:32:57 sqlite> select strftime('%Y-%m-%d %H:%M:%S','now'); 2011-01-27 19:33:57 REFERENCE: (Date time Functions)[http://sqlite.org/lang_datefunc.html] A: I wound up using format 120 in MS SQL: convert(char(24), lastModified, 120) Each time I needed to a select a date in SQLite for non-display purposes I used: strftime(\"%Y-%m-%d %H:%M:%S\", dateModified) as dateModified Now I just need a readable/friendly way to display the date to the user! edit: accept answer goes to whoever shows me how to display the date nicely from sqlite ;p A: Define "last epoch". Does this come close? Select Cast(lastModified As Integer) A: If you store them as varchar, store them as YYYYMMDD. That way you CAN sort by them later if you want to. A: SQL server has only 2 failsafe date formats ISO = YYYYMMDD, run this to see that select convert(varchar(10),getdate(),112) ISO8601 = yyyy-mm-dd Thh:mm:ss:mmm(no spaces) run this to see that select convert(varchar(30),getdate(),126) To learn more about how dates are stored in SQL server I wrote How Are Dates Stored In SQL Server?
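On the display side, SQLite's strftime only supports numeric fields (no month names), so a readable format is typically either built with strftime or finished off in the application layer; the table and column names below are placeholders:
-- e.g. "07/24/2008 15:45"
SELECT strftime('%m/%d/%Y %H:%M', dateModified) AS displayDate
FROM   logEntries;

-- the stored 'YYYY-MM-DD HH:MM:SS' strings also work directly with date()/time()
SELECT date(dateModified) AS datePart,
       time(dateModified) AS timePart
FROM   logEntries;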
{ "language": "en", "url": "https://stackoverflow.com/questions/11200", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Calling REST web services from a classic asp page I'd like to start moving our application business layers into a collection of REST web services. However, most of our Intranet has been built using Classic ASP and most of the developers where I work keep programming in Classic ASP. Ideally, then, for them to benefit from the advantages of a unique set of web APIs, it would have to be called from Classic ASP pages. I haven't the slightest idea how to do that. A: Here are a few articles describing how to call a web service from a class ASP page: * *Integrating ASP.NET XML Web Services with 'Classic' ASP Applications *Consuming XML Web Services in Classic ASP *Consuming a WSDL Webservice from ASP A: You could use a combination of JQuery with JSON calls to consume REST services from the client or if you need to interact with the REST services from the ASP layer you can use MSXML2.ServerXMLHTTP like: Set HttpReq = Server.CreateObject("MSXML2.ServerXMLHTTP") HttpReq.open "GET", "Rest_URI", False HttpReq.send A: A number of the answers presented here appear to cover how ClassicASP can be used to consume web-services & REST calls. In my opinion a tidier solution may be for your ClassicASP to just serve data in REST formats. Let your browser-based client code handle the 'mashup' if possible. You should be able to do this without incorporating any other ASP components. So, here's how I would mockup shiny new REST support in ClassicASP: * *provide a single ASP web page that acts as a landing pad *The landing pad will handle two parameters: verb and URL, plus a set of form contents *Use some kind of switch block inspect the URL and direct the verb (and form contents) to a relevant handler *The handler will then process the verb (PUT/POST/GET/DELETE) together with the form contents, returning a success/failure code plus data as appropriate. *Your landing pad will inspect the success/failure code and return the respective HTTP status plus any returned data You would benefit from a support class that decodes/encodes the form data from/to JSON, since that will ease your client-side implementation (and potentially streamline the volume of data passed). See the conversation here at Any good libraries for parsing JSON in Classic ASP? Lastly, at the client-side, provide a method that takes a Verb, Url and data payload. In the short-term the method will collate the parameters and forward them to your landing pad. In the longer term (once you switch away from Classic ASP) your method can send the data to the 'real' url. Good luck... A: @KP You should actually use MSXML2.ServerXMLHTTP from ASP/server side applications. XMLHTTP should only be used client side because it uses WinInet which is not supported for use in server/service apps. See http://support.microsoft.com/kb/290761, questions 3, 4 & 5 and http://support.microsoft.com/kb/238425/. This is quite important, otherwise you'll experience your web app hanging and all sorts of strange nonsense going on. A: Another possible solution is to write a .NET DLL that makes the calls and returns the results (maybe wrap something like RESTSharp - give it a simple API customized to your needs). Then you register the DLL as a COM DLL and use it in your ASP code via the CreateObject method. I've done this for things like creating signed JWTs and salting and hashing passwords. It works nicely (while you work like crazy to rewrite the ASP). A: All you need is an HTTP client. In .Net, WebRequest works well. 
For classic ASP, you will need a specific component like this one. A: Another possibility is to use the WinHttp COM object (see "Using the WinHttpRequest COM Object"). WinHttp was designed to be used from server code.
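Fleshing out the MSXML2.ServerXMLHTTP snippet above into something closer to runnable classic ASP (the URL and Accept header are placeholders; version 6.0 is used here if it is installed, otherwise drop the suffix):
<%
Dim http
Set http = Server.CreateObject("MSXML2.ServerXMLHTTP.6.0")
http.open "GET", "http://example.com/api/customers/42", False
http.setRequestHeader "Accept", "application/json"
http.send

If http.status = 200 Then
    Response.Write http.responseText      ' raw JSON/XML body; parse as needed
Else
    Response.Write "REST call failed: " & http.status & " " & http.statusText
End If

Set http = Nothing
%>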
{ "language": "en", "url": "https://stackoverflow.com/questions/11219", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: Do you know any patterns for GUI programming? (Not patterns on designing GUIs) I'm looking for patterns that concern coding parts of a GUI. Not as global as MVC, that I'm quite familiar with, but patterns and good ideas and best practices concerning single controls and inputs. Let say I want to make a control that display some objects that may overlap. Now if I click on an object, I need to find out what to do (Just finding the object I can do in several ways, such as an quad-tree and Z-order, thats not the problem). And also I might hold down a modifier key, or some object is active from the beginning, making the selection or whatever a bit more complicated. Should I have an object instance representing a screen object, handle the user-action when clicked, or a master class. etc.. What kind of patterns or solutions are there for problems like this? A: I think to be honest you a better just boning up on your standard design patterns and applying them to the individual problems that you face in developing your UI. While there are common UI "themes" (such as dealing with modifier keys) the actual implementation may vary widely. I have O'Reilly's Head First Design Patterns and The Poster, which I have found invaluable! Shameless Plug : These links are using my associates ID. A: Object-Oriented Design and Patterns by Cay Horstmann has a chapter entitled "Patterns and GUI Programming". In that chapter, Horstmann touches on the following patterns: * *Observer Layout Managers and the *Strategy Pattern Components, *Containers, and the Composite Pattern *Scroll Bars and the Decorator Pattern A: I don't think the that benefit of design patterns come from trying to find a design pattern to fit a problem. You can however use some heuristics to help clean up your design in this quite a bit, like keeping the UI as decoupled as possible from the rest of the objects in your system. There is a pattern that might help out in this case, the Observer Pattern. A: perhaps you're looking for something like the 'MouseTrap' which I saw in some articles on codeproject (search for UI Platform)? I also found this series very useful http://codebetter.com/jeremymiller/2007/07/26/the-build-your-own-cab-series-table-of-contents/ where you might have a look at embedded controllers etc. Micha. A: I know you said not as global as MVC, but there are some variations on MVC - specifically HMVC and PAC - which I think can answer questions such as the ones you pose. Other than that, try to write new code "in the spirit" of existing patterns even if you don't apply them directly. A: You are looking at a professional application programming. I searched for tips and tricks a long time, without success. Unfortunately you will not find anything useful, it is a complicated topic and only with many years of experience you will be able to understand how to write efficiently an application. For example, almost every program opens a file, extracts information, shows it in different forms, allow processing, saving, ... but nobody explains exactly what the good strategy is and so on. Further, if you are writing a big application, you need to look at some strategies to reduce your compilation time (otherwise you will wait hours at every compilation). Impls idioms in C++ help you for example. And then there is a lot more. For this reason software developers are well paid and there are so many jobs :-)
{ "language": "en", "url": "https://stackoverflow.com/questions/11263", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: ASP.NET UserControl's and DefaultEvent Outline OK, I have Google'd this and already expecting a big fat NO!! But I thought I should ask since I know sometimes there can be the odd little gem of knowledge lurking around in peoples heads ^_^ I am working my way through some excercises in a book for study, and this particular exercise is User Controls. I have cobbled together a control and would like to set the DefaultEvent for it (having done this for previous controls) so when I double-click it, the default event created is whatever I specify it to be. NOTE: This is a standard User Control (.ascx), NOT a custom rendered control. Current Code Here is the class & event definition: [System.ComponentModel.DefaultEvent("OKClicked")] public partial class AddressBox : System.Web.UI.UserControl { public event EventHandler OKClicked; Current Result Now, when I double click the the control when it is on a ASPX page, the following is created: protected void AddressBox1_Load(object sender, EventArgs e) { } Not quite what I was expecting! So, my question: Is it possible to define a DefaultEvent for a UserControl? Is it a hack? If it's [not] supported, is there a reason? Side Note: How do we put underscores in code? I cant seem to put and escape char in? A: Here is a possible answer, without testing (like martin did). In reflector, you will see that the DefaultEventAttribute allows itself to be inherited. In reflector, you see that the UserControl class has it's default event set to the Load event. So the possible reason is that even though you are decorating your user control with the default event of OKClick, VS might still be thinking that the default event is load, as it's being inherited from UserControl whose default event is Load. Just a high level guess at what might be happening. A: OK, I checked this out, Inheriting from WebControl rather than UserControl.. All worked fine. Looks like Darren Kopp takes the crown for this one! Thanks for the input!
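For reference, the working variant described above looks roughly like this when the control is written as a custom server control instead of an .ascx (the event-raising helper is added for completeness):
[System.ComponentModel.DefaultEvent("OKClicked")]
public class AddressBox : System.Web.UI.WebControls.WebControl
{
    public event EventHandler OKClicked;

    protected virtual void OnOKClicked(EventArgs e)
    {
        if (OKClicked != null)
            OKClicked(this, e);   // raised by whatever child control handles the OK click
    }
}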
{ "language": "en", "url": "https://stackoverflow.com/questions/11267", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Standard Signature a Text in a Message using Exchange Server Anyone know how to do this without using a third party program? If there no way to do it with a add-on someone can recommend one? EDIT: I need to add this in the server so all users have the same signature. Thanks A: You need to create your own exchange message sink to do this. Here's a classic VB example from MS KB: http://support.microsoft.com/kb/317327 and a VB Script one: http://support.microsoft.com/kb/317680 And lots of goodness from MSDN about Exchange 2003 Transport Event Sinks: http://msdn.microsoft.com/en-us/library/ms526223(EXCHG.10).aspx If you're running Exchange 2007 then you can use Transport Rules: http://msexchangeteam.com/archive/2006/12/12/431879.aspx http://www.msexchange.org/tutorials/Using-Transport-Rules-Creating-Disclaimers-Exchange-Server-2007.html A: We used CodeTwo-s Exchange rules for a while on Exchange 2003. However there is a known problem with it: if the messages stay in the queue for 2-3 minutes, the Exchange itself sends out the message without the footer. Most of the times it's not a problem, but we have something like 700 people in our organization. If there are a lot of emails and some of them contains attachments, then the virus scanner stops them for a while (MS Antigen). Otherwise it's a perfect solution if you have a smaller group of users to manage. From other point of view: our users like to have some kind of control over the signature. We generated them and put it to their Outlooks. They like it that they know and see that the signature is there and how it looks like.
{ "language": "en", "url": "https://stackoverflow.com/questions/11275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Automatically incremented revision number doesn't show up in the About Box I have a small VB.NET application that I'm working on using the full version of Visual Studio 2005. In the Publish properties of the project, I have it set to Automatically increment revision with each publish. The issue is that it's only incrementing the revision in the Setup files. It doesn't seem to be updating the version number in the About Box (which is the generic, built-in, About Box template). That version number seems to be coming from My.Application.Info.Version. What should I be using instead so that my automatically incrementing revision number shows up in the about box? A: Change the code for the About box to Me.LabelVersion.Text = String.Format("Version {0}", My.Application.Deployment.CurrentVersion.ToString) Please note that all the other answers are correct for "how do I get my assembly version", not the stated question "how do I show my publish version". A: It took me a second to find this, but I believe this is what you are looking for: using System; using System.Reflection; public class VersionNumber { public static void Main() { System.Reflection.Assembly assembly = System.Reflection.Assembly.GetExecutingAssembly(); Version version = assembly.GetName().Version; Console.WriteLine ("Version: {0}", version); Console.WriteLine ("Major: {0}", version.Major); Console.WriteLine ("Minor: {0}", version.Minor); Console.WriteLine ("Build: {0}", version.Build); Console.WriteLine ("Revision: {0}", version.Revision); Console.Read(); } } It was based upon the code provided at the following site - http://en.csharp-online.net/Display_type_version_number A: I'm no VB.NET expert, but have you tried to set the value to for example 1.0.0.*? This should increase the revision number (at least it does in the AssemblyInfo.cs in C#). A: The option you select is only to update the setup number. To update the program number you have to modify the AssemblyInfo. C# [assembly: AssemblyVersion("X.Y.")] [assembly: AssemblyFileVersion("X.Y.")] VB.NET Assembly: AssemblyVersion("X.Y.*") A: It's a maximum of 65535 for each of the 4 values, but when using 1.0.* or 1.0.*.*, the Assembly Linker will use a coded timestamp (so it's not a simple auto-increment, and it can repeat!) that will fit 65535. See my answer to this question for more links and details.
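Since My.Application.Deployment throws when the application is not ClickOnce-deployed (for example, when run straight from the IDE), a slightly defensive version of the About box code is:
Dim versionText As String
If My.Application.IsNetworkDeployed Then
    ' publish (ClickOnce) version, which is the auto-incremented one
    versionText = My.Application.Deployment.CurrentVersion.ToString()
Else
    ' fall back to the assembly version when not deployed
    versionText = My.Application.Info.Version.ToString()
End If
Me.LabelVersion.Text = String.Format("Version {0}", versionText)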
{ "language": "en", "url": "https://stackoverflow.com/questions/11279", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Sorting a composite collection So WPF doesn't support standard sorting or filtering behavior for views of CompositeCollections, so what would be a best practice for solving this problem. There are two or more object collections of different types. You want to combine them into a single sortable and filterable collection (withing having to manually implement sort or filter). One of the approaches I've considered is to create a new object collection with only a few core properties, including the ones that I would want the collection sorted on, and an object instance of each type. class MyCompositeObject { enum ObjectType; DateTime CreatedDate; string SomeAttribute; myObjectType1 Obj1; myObjectType2 Obj2; { class MyCompositeObjects : List<MyCompositeObject> { } And then loop through my two object collections to build the new composite collection. Obviously this is a bit of a brute force method, but it would work. I'd get all the default view sorting and filtering behavior on my new composite object collection, and I'd be able to put a data template on it to display my list items properly depending on which type is actually stored in that composite item. What suggestions are there for doing this in a more elegant way? A: "Brute force" method you mention is actually ideal solution. Mind you, all objects are in RAM, there is no I/O bottleneck, so you can pretty much sort and filter millions of objects in less than a second on any modern computer. The most elegant way to work with collections is System.Linq namespace in .NET 3.5 Thanks - I also considered LINQ to objects, but my concern there is loss of flexibility for typed data templates, which I need to display the objects in my list. If you can't predict at this moment how people will sort and filter your object collection, then you should look at System.Linq.Expressions namespace to build your lambda expressions on demand during runtime (first you let user to build expression, then compile, run and at the end you use reflection namespace to enumerate through results). It's more tricky to wrap your head around it but invaluable feature, probably (to me definitively) even more ground-breaking feature than LINQ itself. A: I'm not yet very familiar with WPF but I see this as a question about sorting and filtering List<T> collections. (withing having to manually implement sort or filter) Would you reconsider implementing your own sort or filter functions? In my experience it is easy to use. The examples below use an anonymous delegate but you could easily define your own method or a class to implement a complex sort or filter. Such a class could even have properties to configure and change the sort and filter dynamically. Use List<T>.Sort(Comparison<T> comparison) with your custom compare function: // Sort according to the value of SomeAttribute List<MyCompositeObject> myList = ...; myList.Sort(delegate(MyCompositeObject a, MyCompositeObject b) { // return -1 if a < b // return 0 if a == b // return 1 if a > b return a.SomeAttribute.CompareTo(b.SomeAttribute); }; A similar approach for getting a sub-collection of items from the list. 
Use List<T>.FindAll(Predicate<T> match) with your custom filter function: // Select all objects where myObjectType1 and myObjectType2 are not null myList.FindAll(delegate(MyCompositeObject a) { // return true to include 'a' in the sub-collection return (a.myObjectType1 != null) && (a.myObjectType2 != null); }); A: Update: I found a much more elegant solution: class MyCompositeObject { DateTime CreatedDate; string SomeAttribute; Object Obj1; } class MyCompositeObjects : List<MyCompositeObject> { } I found that due to reflection, the specific type stored in Obj1 is resolved at runtime and the type-specific DataTemplate is applied as expected!
{ "language": "en", "url": "https://stackoverflow.com/questions/11288", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Best way to capture key events in NSTextView? I'm slowly learning Objective-C and Cocoa, and the only way I see so far to capture key events in Text Views is to use delegation, but I'm having trouble finding useful documentation and examples on how to implement such a solution. Can anyone point me in the right direction or supply some first-hand help? A: It's important to tell us what you're really trying to accomplish — the higher-level goal that you think capturing key events in an NSTextView will address. For example, when someone asks me how to capture key events in an NSTextField what they really want to know is how to validate input in the field. That's done by setting the field's formatter to an instance of NSFormatter (whether one of the formatters included in Cocoa or a custom one), not by processing keystrokes directly. So given that example, what are you really trying to accomplish? A: Generally, the way you implement it is simply to add the required function to your view's controller, and set its delegate. For example, if you want code to run when the view loads, you just delegate your view to the controller, and implement the awakeFromNib function. So, to detect a key press in a text view, make sure your controller is the text view's delegate, and then implement this: - (void)keyUp:(NSEvent *)theEvent Note that this is an inherited NSResponder method, not a NSTextView method. A: Just a tip for syntax highlighting: Don't highlight the whole text view at once - it's very slow. Also don't highlight the last edited text using -editedRange - it's very slow too if the user pastes a large body of text into the text view. Instead you need to highlight the visible text which is done like this: NSRect visibleRect = [[[textView enclosingScrollView] contentView] documentVisibleRect]; NSRange visibleRange = [[textView layoutManager] glyphRangeForBoundingRect:visibleRect inTextContainer:[textView textContainer]]; Then you feed visibleRange to your highlighting code. A: I've done some hard digging, and I did find an answer to my own question. I'll get at it below, but thanks to the two fellas who replied. I think that Stack Overflow is a fantastic site already--I hope more Mac developers find their way in once the beta is over--this could be a great resource for other developers looking to transition to the platform. So, I did, as suggested by Danny, find my answer in delegation. What I didn't understand from Danny's post was that there are a set of delegate-enabled methods in the delegating object, and that the delegate must implement said events. And so for a TextView, I was able to find the method textDidChange, which accomplished what I wanted in an even better way than simply capturing key presses would have done. So if I implement this in my controller: - (void)textDidChange:(NSNotification *)aNotification; I can respond to the text being edited. There are, of course, other methods available, and I'm excited to play with them, because I know I'll learn a whole lot as I do. Thanks again, guys.
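A minimal Objective-C sketch of the delegate wiring described above (assuming the controller has an IBOutlet named textView and is set as the text view's delegate, either in Interface Builder or in code):
- (void)awakeFromNib
{
    [textView setDelegate:self];
}

// NSText delegate method, called whenever the text view's contents change
- (void)textDidChange:(NSNotification *)aNotification
{
    NSTextView *view = [aNotification object];
    NSString *contents = [view string];
    NSLog(@"Text is now: %@", contents);
}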
{ "language": "en", "url": "https://stackoverflow.com/questions/11291", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: How to parse XML using vba I work in VBA, and want to parse a string eg <PointN xsi:type='typens:PointN' xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xmlns:xs='http://www.w3.org/2001/XMLSchema'> <X>24.365</X> <Y>78.63</Y> </PointN> and get the X & Y values into two separate integer variables. I'm a newbie when it comes to XML, since I'm stuck in VB6 and VBA, because of the field I work in. How do I do this? A: This is an example OPML parser working with FeedDemon opml files: Sub debugPrintOPML() ' http://msdn.microsoft.com/en-us/library/ms763720(v=VS.85).aspx ' http://msdn.microsoft.com/en-us/library/system.xml.xmlnode.selectnodes.aspx ' http://msdn.microsoft.com/en-us/library/ms256086(v=VS.85).aspx ' expressions ' References: Microsoft XML Dim xmldoc As New DOMDocument60 Dim oNodeList As IXMLDOMSelection Dim oNodeList2 As IXMLDOMSelection Dim curNode As IXMLDOMNode Dim n As Long, n2 As Long, x As Long Dim strXPathQuery As String Dim attrLength As Byte Dim FilePath As String FilePath = "rss.opml" xmldoc.Load CurrentProject.Path & "\" & FilePath strXPathQuery = "opml/body/outline" Set oNodeList = xmldoc.selectNodes(strXPathQuery) For n = 0 To (oNodeList.length - 1) Set curNode = oNodeList.Item(n) attrLength = curNode.Attributes.length If attrLength > 1 Then ' or 2 or 3 Call processNode(curNode) Else Call processNode(curNode) strXPathQuery = "opml/body/outline[position() = " & n + 1 & "]/outline" Set oNodeList2 = xmldoc.selectNodes(strXPathQuery) For n2 = 0 To (oNodeList2.length - 1) Set curNode = oNodeList2.Item(n2) Call processNode(curNode) Next End If Debug.Print "----------------------" Next Set xmldoc = Nothing End Sub Sub processNode(curNode As IXMLDOMNode) Dim sAttrName As String Dim sAttrValue As String Dim attrLength As Byte Dim x As Long attrLength = curNode.Attributes.length For x = 0 To (attrLength - 1) sAttrName = curNode.Attributes.Item(x).nodeName sAttrValue = curNode.Attributes.Item(x).nodeValue Debug.Print sAttrName & " = " & sAttrValue Next Debug.Print "-----------" End Sub This one takes multilevel trees of folders (Awasu, NewzCrawler): ... Call xmldocOpen4 Call debugPrintOPML4(Null) ... Dim sText4 As String Sub debugPrintOPML4(strXPathQuery As Variant) Dim xmldoc4 As New DOMDocument60 'Dim xmldoc4 As New MSXML2.DOMDocument60 ' ? 
Dim oNodeList As IXMLDOMSelection Dim curNode As IXMLDOMNode Dim n4 As Long If IsNull(strXPathQuery) Then strXPathQuery = "opml/body/outline" ' http://msdn.microsoft.com/en-us/library/ms754585(v=VS.85).aspx xmldoc4.async = False xmldoc4.loadXML sText4 If (xmldoc4.parseError.errorCode <> 0) Then Dim myErr Set myErr = xmldoc4.parseError MsgBox ("You have error " & myErr.reason) Else ' MsgBox xmldoc4.xml End If Set oNodeList = xmldoc4.selectNodes(strXPathQuery) For n4 = 0 To (oNodeList.length - 1) Set curNode = oNodeList.Item(n4) Call processNode4(strXPathQuery, curNode, n4) Next Set xmldoc4 = Nothing End Sub Sub processNode4(strXPathQuery As Variant, curNode As IXMLDOMNode, n4 As Long) Dim sAttrName As String Dim sAttrValue As String Dim x As Long For x = 0 To (curNode.Attributes.length - 1) sAttrName = curNode.Attributes.Item(x).nodeName sAttrValue = curNode.Attributes.Item(x).nodeValue 'If sAttrName = "text" Debug.Print strXPathQuery & " :: " & sAttrName & " = " & sAttrValue 'End If Next Debug.Print "" If curNode.childNodes.length > 0 Then Call debugPrintOPML4(strXPathQuery & "[position() = " & n4 + 1 & "]/" & curNode.nodeName) End If End Sub Sub xmldocOpen4() Dim oFSO As New FileSystemObject ' Microsoft Scripting Runtime Reference Dim oFS Dim FilePath As String FilePath = "rss_awasu.opml" Set oFS = oFSO.OpenTextFile(CurrentProject.Path & "\" & FilePath) sText4 = oFS.ReadAll oFS.Close End Sub or better: Sub xmldocOpen4() Dim FilePath As String FilePath = "rss.opml" ' function ConvertUTF8File(sUTF8File): ' http://www.vbmonster.com/Uwe/Forum.aspx/vb/24947/How-to-read-UTF-8-chars-using-VBA ' loading and conversion from Utf-8 to UTF sText8 = ConvertUTF8File(CurrentProject.Path & "\" & FilePath) End Sub but I don't understand, why xmldoc4 should be loaded each time. A: Thanks for the pointers. I don't know, whether this is the best approach to the problem or not, but here is how I got it to work. I referenced the Microsoft XML, v2.6 dll in my VBA, and then the following code snippet, gives me the required values Dim objXML As MSXML2.DOMDocument Set objXML = New MSXML2.DOMDocument If Not objXML.loadXML(strXML) Then 'strXML is the string with XML' Err.Raise objXML.parseError.ErrorCode, , objXML.parseError.reason End If Dim point As IXMLDOMNode Set point = objXML.firstChild Debug.Print point.selectSingleNode("X").Text Debug.Print point.selectSingleNode("Y").Text A: This is a bit of a complicated question, but it seems like the most direct route would be to load the XML document or XML string via MSXML2.DOMDocument which will then allow you to access the XML nodes. You can find more on MSXML2.DOMDocument at the following sites: * *Manipulating XML files with Excel VBA & Xpath *MSXML - http://msdn.microsoft.com/en-us/library/ms763742(VS.85).aspx *An Overview of MSXML 4.0 A: Update The procedure presented below gives an example of parsing XML with VBA using the XML DOM objects. Code is based on a beginners guide of the XML DOM. Public Sub LoadDocument() Dim xDoc As MSXML.DOMDocument Set xDoc = New MSXML.DOMDocument xDoc.validateOnParse = False If xDoc.Load("C:\My Documents\sample.xml") Then ' The document loaded successfully. ' Now do something intersting. DisplayNode xDoc.childNodes, 0 Else ' The document failed to load. ' See the previous listing for error information. 
End If End Sub Public Sub DisplayNode(ByRef Nodes As MSXML.IXMLDOMNodeList, _ ByVal Indent As Integer) Dim xNode As MSXML.IXMLDOMNode Indent = Indent + 2 For Each xNode In Nodes If xNode.nodeType = NODE_TEXT Then Debug.Print Space$(Indent) & xNode.parentNode.nodeName & _ ":" & xNode.nodeValue End If If xNode.hasChildNodes Then DisplayNode xNode.childNodes, Indent End If Next xNode End Sub Nota Bene - This initial answer shows the simplest possible thing I could imagine (at the time I was working on a very specific issue) . Naturally using the XML facilities built into the VBA XML Dom would be much better. See the updates above. Original Response I know this is a very old post but I wanted to share my simple solution to this complicated question. Primarily I've used basic string functions to access the xml data. This assumes you have some xml data (in the temp variable) that has been returned within a VBA function. Interestingly enough one can also see how I am linking to an xml web service to retrieve the value. The function shown in the image also takes a lookup value because this Excel VBA function can be accessed from within a cell using = FunctionName(value1, value2) to return values via the web service into a spreadsheet. openTag = "" closeTag = "" ' Locate the position of the enclosing tags startPos = InStr(1, temp, openTag) endPos = InStr(1, temp, closeTag) startTagPos = InStr(startPos, temp, ">") + 1 ' Parse xml for returned value Data = Mid(temp, startTagPos, endPos - startTagPos) A: Here is a short sub to parse a MicroStation Triforma XML file that contains data for structural steel shapes. 'location of triforma structural files 'c:\programdata\bentley\workspace\triforma\tf_imperial\data\us.xml Sub ReadTriformaImperialData() Dim txtFileName As String Dim txtFileLine As String Dim txtFileNumber As Long Dim Shape As String Shape = "w12x40" txtFileNumber = FreeFile txtFileName = "c:\programdata\bentley\workspace\triforma\tf_imperial\data\us.xml" Open txtFileName For Input As #txtFileNumber Do While Not EOF(txtFileNumber) Line Input #txtFileNumber, txtFileLine If InStr(1, UCase(txtFileLine), UCase(Shape)) Then P1 = InStr(1, UCase(txtFileLine), "D=") D = Val(Mid(txtFileLine, P1 + 3)) P2 = InStr(1, UCase(txtFileLine), "TW=") TW = Val(Mid(txtFileLine, P2 + 4)) P3 = InStr(1, UCase(txtFileLine), "WIDTH=") W = Val(Mid(txtFileLine, P3 + 7)) P4 = InStr(1, UCase(txtFileLine), "TF=") TF = Val(Mid(txtFileLine, P4 + 4)) Close txtFileNumber Exit Do End If Loop End Sub From here you can use the values to draw the shape in MicroStation 2d or do it in 3d and extrude it to a solid. 
A: Add reference Project->References Microsoft XML, 6.0 and you can use example code: Dim xml As String xml = "<root><person><name>Me </name> </person> <person> <name>No Name </name></person></root> " Dim oXml As MSXML2.DOMDocument60 Set oXml = New MSXML2.DOMDocument60 oXml.loadXML xml Dim oSeqNodes, oSeqNode As IXMLDOMNode Set oSeqNodes = oXml.selectNodes("//root/person") If oSeqNodes.length = 0 Then 'show some message Else For Each oSeqNode In oSeqNodes Debug.Print oSeqNode.selectSingleNode("name").Text Next End If be careful with xml node //Root/Person is not same with //root/person, also selectSingleNode("Name").text is not same with selectSingleNode("name").text A: You can use a XPath Query: Dim objDom As Object '// DOMDocument Dim xmlStr As String, _ xPath As String xmlStr = _ "<PointN xsi:type='typens:PointN' " & _ "xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' " & _ "xmlns:xs='http://www.w3.org/2001/XMLSchema'> " & _ " <X>24.365</X> " & _ " <Y>78.63</Y> " & _ "</PointN>" Set objDom = CreateObject("Msxml2.DOMDocument.3.0") '// Using MSXML 3.0 '/* Load XML */ objDom.LoadXML xmlStr '/* ' * XPath Query ' */ '/* Get X */ xPath = "/PointN/X" Debug.Print objDom.SelectSingleNode(xPath).text '/* Get Y */ xPath = "/PointN/Y" Debug.Print objDom.SelectSingleNode(xPath).text A: Often it is easier to parse without VBA, when you don't want to enable macros. This can be done with the replace function. Enter your start and end nodes into cells B1 and C1. Cell A1: {your XML here} Cell B1: <X> Cell C1: </X> Cell D1: =REPLACE(A1,1,FIND(A2,A1)+LEN(A2)-1,"") Cell E1: =REPLACE(A4,FIND(A3,A4),LEN(A4)-FIND(A3,A4)+1,"") And the result line E1 will have your parsed value: Cell A1: {your XML here} Cell B1: <X> Cell C1: </X> Cell D1: 24.365<X><Y>78.68</Y></PointN> Cell E1: 24.365
{ "language": "en", "url": "https://stackoverflow.com/questions/11305", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "85" }
Q: Formatting text in WinForm Label Is it possible to format certain text in a WinForm Label instead of breaking the text into multiple labels? Please disregard the HTML tags within the label's text; it's only used to get my point out. For example: Dim myLabel As New Label myLabel.Text = "This is <b>bold</b> text. This is <i>italicized</i> text." Which would produce the text in the label as: This is bold text. This is italicized text. A: * *Create the text as a RTF file in wordpad *Create Rich text control with no borders and editable = false *Add the RTF file to the project as a resource *In the Form1_load do myRtfControl.Rtf = Resource1.MyRtfControlText A: AutoRichLabel        I was solving this problem by building an UserControl that contains a TransparentRichTextBox that is readonly. The TransparentRichTextBox is a RichTextBox that allows to be transparent: TransparentRichTextBox.cs: public class TransparentRichTextBox : RichTextBox { [DllImport("kernel32.dll", CharSet = CharSet.Auto)] static extern IntPtr LoadLibrary(string lpFileName); protected override CreateParams CreateParams { get { CreateParams prams = base.CreateParams; if (TransparentRichTextBox.LoadLibrary("msftedit.dll") != IntPtr.Zero) { prams.ExStyle |= 0x020; // transparent prams.ClassName = "RICHEDIT50W"; } return prams; } } } The final UserControl acts as wrapper of the TransparentRichTextBox. Unfortunately, I had to limit it to AutoSize on my own way, because the AutoSize of the RichTextBox became broken. AutoRichLabel.designer.cs: partial class AutoRichLabel { /// <summary> /// Required designer variable. /// </summary> private System.ComponentModel.IContainer components = null; /// <summary> /// Clean up any resources being used. /// </summary> /// <param name="disposing">true if managed resources should be disposed; otherwise, false.</param> protected override void Dispose(bool disposing) { if (disposing && (components != null)) { components.Dispose(); } base.Dispose(disposing); } #region Component Designer generated code /// <summary> /// Required method for Designer support - do not modify /// the contents of this method with the code editor. 
/// </summary> private void InitializeComponent() { this.rtb = new TransparentRichTextBox(); this.SuspendLayout(); // // rtb // this.rtb.BorderStyle = System.Windows.Forms.BorderStyle.None; this.rtb.Dock = System.Windows.Forms.DockStyle.Fill; this.rtb.Location = new System.Drawing.Point(0, 0); this.rtb.Margin = new System.Windows.Forms.Padding(0); this.rtb.Name = "rtb"; this.rtb.ReadOnly = true; this.rtb.ScrollBars = System.Windows.Forms.RichTextBoxScrollBars.None; this.rtb.Size = new System.Drawing.Size(46, 30); this.rtb.TabIndex = 0; this.rtb.Text = ""; this.rtb.WordWrap = false; this.rtb.ContentsResized += new System.Windows.Forms.ContentsResizedEventHandler(this.rtb_ContentsResized); // // AutoRichLabel // this.AutoScaleDimensions = new System.Drawing.SizeF(6F, 13F); this.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Font; this.AutoSizeMode = System.Windows.Forms.AutoSizeMode.GrowAndShrink; this.BackColor = System.Drawing.Color.Transparent; this.Controls.Add(this.rtb); this.Name = "AutoRichLabel"; this.Size = new System.Drawing.Size(46, 30); this.ResumeLayout(false); } #endregion private TransparentRichTextBox rtb; } AutoRichLabel.cs: /// <summary> /// <para>An auto sized label with the ability to display text with formattings by using the Rich Text Format.</para> /// <para>­</para> /// <para>Short RTF syntax examples: </para> /// <para>­</para> /// <para>Paragraph: </para> /// <para>{\pard This is a paragraph!\par}</para> /// <para>­</para> /// <para>Bold / Italic / Underline: </para> /// <para>\b bold text\b0</para> /// <para>\i italic text\i0</para> /// <para>\ul underline text\ul0</para> /// <para>­</para> /// <para>Alternate color using color table: </para> /// <para>{\colortbl ;\red0\green77\blue187;}{\pard The word \cf1 fish\cf0 is blue.\par</para> /// <para>­</para> /// <para>Additional information: </para> /// <para>Always wrap every text in a paragraph. </para> /// <para>Different tags can be stacked (i.e. \pard\b\i Bold and Italic\i0\b0\par)</para> /// <para>The space behind a tag is ignored. So if you need a space behind it, insert two spaces (i.e. \pard The word \bBOLD\0 is bold.\par)</para> /// <para>Full specification: http://www.biblioscape.com/rtf15_spec.htm </para> /// </summary> public partial class AutoRichLabel : UserControl { /// <summary> /// The rich text content. /// <para>­</para> /// <para>Short RTF syntax examples: </para> /// <para>­</para> /// <para>Paragraph: </para> /// <para>{\pard This is a paragraph!\par}</para> /// <para>­</para> /// <para>Bold / Italic / Underline: </para> /// <para>\b bold text\b0</para> /// <para>\i italic text\i0</para> /// <para>\ul underline text\ul0</para> /// <para>­</para> /// <para>Alternate color using color table: </para> /// <para>{\colortbl ;\red0\green77\blue187;}{\pard The word \cf1 fish\cf0 is blue.\par</para> /// <para>­</para> /// <para>Additional information: </para> /// <para>Always wrap every text in a paragraph. </para> /// <para>Different tags can be stacked (i.e. \pard\b\i Bold and Italic\i0\b0\par)</para> /// <para>The space behind a tag is ignored. So if you need a space behind it, insert two spaces (i.e. \pard The word \bBOLD\0 is bold.\par)</para> /// <para>Full specification: http://www.biblioscape.com/rtf15_spec.htm </para> /// </summary> [Browsable(true)] public string RtfContent { get { return this.rtb.Rtf; } set { this.rtb.WordWrap = false; // to prevent any display bugs, word wrap must be off while changing the rich text content. this.rtb.Rtf = value.StartsWith(@"{\rtf1") ? 
value : @"{\rtf1" + value + "}"; // Setting the rich text content will trigger the ContentsResized event. this.Fit(); // Override width and height. this.rtb.WordWrap = this.WordWrap; // Set the word wrap back. } } /// <summary> /// Dynamic width of the control. /// </summary> [Browsable(false)] public new int Width { get { return base.Width; } } /// <summary> /// Dynamic height of the control. /// </summary> [Browsable(false)] public new int Height { get { return base.Height; } } /// <summary> /// The measured width based on the content. /// </summary> public int DesiredWidth { get; private set; } /// <summary> /// The measured height based on the content. /// </summary> public int DesiredHeight { get; private set; } /// <summary> /// Determines the text will be word wrapped. This is true, when the maximum size has been set. /// </summary> public bool WordWrap { get; private set; } /// <summary> /// Constructor. /// </summary> public AutoRichLabel() { InitializeComponent(); } /// <summary> /// Overrides the width and height with the measured width and height /// </summary> public void Fit() { base.Width = this.DesiredWidth; base.Height = this.DesiredHeight; } /// <summary> /// Will be called when the rich text content of the control changes. /// </summary> private void rtb_ContentsResized(object sender, ContentsResizedEventArgs e) { this.AutoSize = false; // Disable auto size, else it will break everything this.WordWrap = this.MaximumSize.Width > 0; // Enable word wrap when the maximum width has been set. this.DesiredWidth = this.rtb.WordWrap ? this.MaximumSize.Width : e.NewRectangle.Width; // Measure width. this.DesiredHeight = this.MaximumSize.Height > 0 && this.MaximumSize.Height < e.NewRectangle.Height ? this.MaximumSize.Height : e.NewRectangle.Height; // Measure height. this.Fit(); // Override width and height. } } The syntax of the rich text format is quite simple: Paragraph: {\pard This is a paragraph!\par} Bold / Italic / Underline text: \b bold text\b0 \i italic text\i0 \ul underline text\ul0 Alternate color using color table: {\colortbl ;\red0\green77\blue187;} {\pard The word \cf1 fish\cf0 is blue.\par But please note: Always wrap every text in a paragraph. Also, different tags can be stacked (i.e. \pard\b\i Bold and Italic\i0\b0\par) and the space character behind a tag is ignored. So if you need a space behind it, insert two spaces (i.e. \pard The word \bBOLD\0 is bold.\par). To escape \ or { or }, please use a leading \. For more information there is a full specification of the rich text format online. Using this quite simple syntax you can produce something like you can see in the first image. The rich text content that was attached to the RtfContent property of my AutoRichLabel in the first image was: {\colortbl ;\red0\green77\blue187;} {\pard\b BOLD\b0 \i ITALIC\i0 \ul UNDERLINE\ul0 \\\{\}\par} {\pard\cf1\b BOLD\b0 \i ITALIC\i0 \ul UNDERLINE\ul0\cf0 \\\{\}\par} If you want to enable word wrap, please set the maximum width to a desired size. However, this will fix the width to the maximum width, even when the text is shorter. Have fun! A: There is an excellent article from 2009 on Code Project named "A Professional HTML Renderer You Will Use" which implements something similar to what the original poster wants. I use it successfully within several projects of us. A: Very simple solution: * *Add 2 labels on the form, LabelA and LabelB *Go to properties for LabelA and dock it to left. *Go to properties for LabelB and dock it to left as well. *Set Font to bold for LabelA . 
Now the LabelB will shift depending on length of text of LabelA. That's all. A: That's not possible with a WinForms label as it is. The label has to have exactly one font, with exactly one size and one face. You have a couple of options: * *Use separate labels *Create a new Control-derived class that does its own drawing via GDI+ and use that instead of Label; this is probably your best option, as it gives you complete control over how to instruct the control to format its text *Use a third-party label control that will let you insert HTML snippets (there are a bunch - check CodeProject); this would be someone else's implementation of #2. A: Not really, but you could fake it with a read-only RichTextBox without borders. RichTextBox supports Rich Text Format (rtf). A: Another workaround, late to the party: if you don't want to use a third party control, and you're just looking to call attention to some of the text in your label, and you're ok with underlines, you can use a LinkLabel. Note that many consider this a 'usability crime', but if you're not designing something for end user consumption then it may be something you're prepared to have on your conscience. The trick is to add disabled links to the parts of your text that you want underlined, and then globally set the link colors to match the rest of the label. You can set almost all the necessary properties at design-time apart from the Links.Add() piece, but here they are in code: linkLabel1.Text = "You are accessing a government system, and all activity " + "will be logged. If you do not wish to continue, log out now."; linkLabel1.AutoSize = false; linkLabel1.Size = new Size(365, 50); linkLabel1.TextAlign = ContentAlignment.MiddleCenter; linkLabel1.Links.Clear(); linkLabel1.Links.Add(20, 17).Enabled = false; // "government system" linkLabel1.Links.Add(105, 11).Enabled = false; // "log out now" linkLabel1.LinkColor = linkLabel1.ForeColor; linkLabel1.DisabledLinkColor = linkLabel1.ForeColor; Result: A: Worked solution for me - using custom RichEditBox. With right properties it will be looked as simple label with bold support. 1) First, add your custom RichTextLabel class with disabled caret : public class RichTextLabel : RichTextBox { public RichTextLabel() { base.ReadOnly = true; base.BorderStyle = BorderStyle.None; base.TabStop = false; base.SetStyle(ControlStyles.Selectable, false); base.SetStyle(ControlStyles.UserMouse, true); base.SetStyle(ControlStyles.SupportsTransparentBackColor, true); base.MouseEnter += delegate(object sender, EventArgs e) { this.Cursor = Cursors.Default; }; } protected override void WndProc(ref Message m) { if (m.Msg == 0x204) return; // WM_RBUTTONDOWN if (m.Msg == 0x205) return; // WM_RBUTTONUP base.WndProc(ref m); } } 2) Split you sentence to words with IsSelected flag, that determine if that word should be bold or no : private void AutocompleteItemControl_Load(object sender, EventArgs e) { RichTextLabel rtl = new RichTextLabel(); rtl.Font = new Font("MS Reference Sans Serif", 15.57F); StringBuilder sb = new StringBuilder(); sb.Append(@"{\rtf1\ansi "); foreach (var wordPart in wordParts) { if (wordPart.IsSelected) { sb.Append(@"\b "); } sb.Append(ConvertString2RTF(wordPart.WordPart)); if (wordPart.IsSelected) { sb.Append(@"\b0 "); } } sb.Append(@"}"); rtl.Rtf = sb.ToString(); rtl.Width = this.Width; this.Controls.Add(rtl); } 3) Add function for convert you text to valid rtf (with unicode support!) 
: private string ConvertString2RTF(string input) { //first take care of special RTF chars StringBuilder backslashed = new StringBuilder(input); backslashed.Replace(@"\", @"\\"); backslashed.Replace(@"{", @"\{"); backslashed.Replace(@"}", @"\}"); //then convert the string char by char StringBuilder sb = new StringBuilder(); foreach (char character in backslashed.ToString()) { if (character <= 0x7f) sb.Append(character); else sb.Append("\\u" + Convert.ToUInt32(character) + "?"); } return sb.ToString(); } Works like a charm for me! Solutions compiled from : How to convert a string to RTF in C#? Format text in Rich Text Box How to hide the caret in a RichTextBox? A: I Would also be interested in finding out if it is possible. When we couldn't find a solution we resorted to Component Ones 'SuperLabel' control which allows HTML markup in a label. A: Realising this is an old question, my answer is more for those, like me, who still may be looking for such solutions and stumble upon this question. Apart from what was already mentioned, DevExpress's LabelControl is a label that supports this behaviour - demo here. Alas, it is part of a paid library. If you're looking for free solutions, I believe HTML Renderer is the next best thing. A: A FlowLayoutPanel works well for your problem. If you add labels to the flow panel and format each label's font and margin properties, then you can have different font styles. Pretty quick and easy solution to get working. A: Yeah. You can implements, using HTML Render. For you see, click on the link: https://htmlrenderer.codeplex.com/ I hope this is useful.
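To make the FlowLayoutPanel suggestion above concrete, here is a minimal, hedged sketch; the class name, the sample sentence and the margin values are only illustrative and are not taken from any of the answers:

using System.Drawing;
using System.Windows.Forms;

public class MixedStyleLabelDemo : Form
{
    public MixedStyleLabelDemo()
    {
        // One panel, several labels: each fragment carries its own font style,
        // and the panel lays them out so they read as a single sentence.
        var flow = new FlowLayoutPanel { Dock = DockStyle.Top, AutoSize = true, WrapContents = true };

        flow.Controls.Add(MakeFragment("This is ", FontStyle.Regular));
        flow.Controls.Add(MakeFragment("bold", FontStyle.Bold));
        flow.Controls.Add(MakeFragment(" text. This is ", FontStyle.Regular));
        flow.Controls.Add(MakeFragment("italicized", FontStyle.Italic));
        flow.Controls.Add(MakeFragment(" text.", FontStyle.Regular));

        Controls.Add(flow);
    }

    private Label MakeFragment(string text, FontStyle style)
    {
        return new Label
        {
            Text = text,
            AutoSize = true,
            Margin = new Padding(0),      // no gaps, so the fragments join into one line
            Font = new Font(Font, style)  // same family and size, different style
        };
    }
}

This only approximates inline formatting - spacing and wrapping happen per fragment rather than per word - which is why the RichTextBox- and HTML-renderer-based answers above remain the more faithful options.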
{ "language": "en", "url": "https://stackoverflow.com/questions/11311", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: Simple animation in WinForms Imagine you want to animate some object on a WinForm. You set up a timer to update the state or model, and override the paint event of the Form. But from there, what's the best way to continually repaint the Form for the animation? * *Invalidate the Form as soon as you are done drawing? *Setup a second timer and invalidate the form on a regular interval? *Perhaps there is a common pattern for this thing? *Are there any useful .NET classes to help out? Each time I need to do this I discover a new method with a new drawback. What are the experiences and recommendations from the SO community? A: In some situations, it's faster and more convenient to not draw using the paint event, but to get the Graphics object from the control/form and paint "on" that. This may give some troubles with opacity/anti-aliasing/text etc., but could be worth the trouble in terms of not having to repaint the whole shebang. Something along the lines of: private void AnimationTimer_Tick(object sender, EventArgs args) { // First paint background, like Clear(Control.Background), or by // painting an image you have previously buffered that was the background. animationControl.CreateGraphics().DrawImage(animationImages[animationTick++], 0, 0); } I use this in some Controls myself, and have buffered images to "clear" the background with, when the object of interest moves or needs to be removed. A: I've created a library that might help with this. It's called Transitions, and can be found here: https://github.com/UweKeim/dot-net-transitions. It is available on NuGet as the dot-net-transitions package. It uses timers running on a background thread to animate the objects. The library is open-source, so if it is any use to you, you can look at the code to see what it's doing. A: What you're doing is the only solution I've ever used in WinForms (a timer with constant redrawings). There are a bunch of techniques that you can use to make the user's experience with it smoother (such as double-buffering). You might want to give WPF a try. There are built-in facilities for doing animations in WPF, and they're much smoother (and require less code and no synchronization on your part) than a timer-based solution. Note that you do not need to use WPF throughout your whole app for that solution; it's possible to pack this functionality into a WPF control and embed the control in a WinForms application (or an unmanaged app, for that matter): http://www.codeproject.com/KB/WPF/WPF_UserControls.aspx
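As a concrete version of the timer-plus-Invalidate pattern described in the question and the last answer, here is a minimal, hedged sketch; the form name, the moving rectangle and the 15 ms interval are invented for illustration:

using System.Drawing;
using System.Windows.Forms;

public class BouncingBoxForm : Form
{
    private readonly Timer timer = new Timer { Interval = 15 }; // roughly 60 updates per second
    private int x;

    public BouncingBoxForm()
    {
        DoubleBuffered = true; // reduces flicker when the form repaints itself
        timer.Tick += delegate { x = (x + 2) % ClientSize.Width; Invalidate(); };
        timer.Start();
    }

    protected override void OnPaint(PaintEventArgs e)
    {
        base.OnPaint(e);
        e.Graphics.FillRectangle(Brushes.SteelBlue, x, 50, 40, 40); // draw the current frame
    }
}

The timer only mutates state and calls Invalidate(); all drawing stays in OnPaint, which plays nicely with double-buffering and avoids the CreateGraphics pitfalls mentioned above.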
{ "language": "en", "url": "https://stackoverflow.com/questions/11318", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: Passing more parameters in C function pointers Let's say I'm creating a chess program. I have a function void foreachMove( void (*action)(chess_move*), chess_game* game); which will call the function pointer action on each valid move. This is all well and good, but what if I need to pass more parameters to the action function? For example: chess_move getNextMove(chess_game* game, int depth){ //for each valid move, determine how good the move is foreachMove(moveHandler, game); } void moveHandler(chess_move* move){ //uh oh, now I need the variables "game" and "depth" from the above function } Redefining the function pointer is not the optimal solution. The foreachMove function is versatile and many different places in the code reference it. It doesn't make sense for each one of those references to have to update their function to include parameters that they don't need. How can I pass extra parameters to a function that I'm calling through a pointer? A: You'd probably need to redefine the function pointer to take additional arguments. void foreachMove( void (*action)(chess_move*, int), chess_game* game ) A: If you're willing to use some C++, you can use a "function object": struct MoveHandler { chess_game *game; int depth; MoveHandler(chess_game *g, int d): game(g), depth(d) {} void operator () (chess_move*) { // now you can use the game and the depth } }; and turn your foreachMove into a template: template <typename T> void foreachMove(T action, chess_game* game); and you can call it like this: chess_move getNextMove(chess_game* game, int depth){ //for each valid move, determine how good the move is foreachMove(MoveHandler(game, depth), game); } but it won't disrupt your other uses of MoveHandler. A: If I'm reading this right, what I'd suggest is to make your function take a pointer to a struct as an argument. Then, your struct can have "game" and "depth" when it needs them, and just leave them set to 0 or Null when you don't need them. What is going on in that function? Do you have a conditional that says, if (depth > -1) //some default { //do something } Does the function always REQUIRE "game" and "depth"? Then, they should always be arguments, and that can go into your prototypes. Are you indicating that the function only sometimes requires "game" and "depth"? Well, maybe make two functions and use each one when you need to. But, having a structure as the argument is probably the easiest thing. A: Ah, if only C supported closures... Antonio is right; if you need to pass extra parameters, you'll need to redefine your function pointer to accept the additional arguments. If you don't know exactly what parameters you'll need, then you have at least three choices: * *Have the last argument in your prototype be a void*. This gives you flexibility of passing in anything else that you need, but it definitely isn't type-safe. *Use variadic parameters (...). Given my lack of experience with variadic parameters in C, I'm not sure if you can use this with a function pointer, but this gives even more flexibility than the first solution, albeit still with the lack of type safety. *Upgrade to C++ and use function objects. A: I'd suggest using an array of void*, with the last entry always void. say you need 3 parameters you could do this: void MoveHandler (void** DataArray) { // data1 is always chess_move chess_move data1 = DataArray[0]? (*(chess_move*)DataArray[0]) : NULL; // data2 is always float float data1 = DataArray[1]? 
(*(float*)DataArray[1]) : NULL; // data3 is always char char data1 = DataArray[2]? (*(char*)DataArray[2]) : NULL; //etc } void foreachMove( void (*action)(void**), chess_game* game); and then chess_move getNextMove(chess_game* game, int depth){ //for each valid move, determine how good the move is void* data[4]; data[0] = &chess_move; float f1; char c1; data[1] = &f1; data[2] = &c1; data[3] = NULL; foreachMove(moveHandler, game); } If all the parameters are the same type then you can avoid the void* array and just send a NULL-terminated array of whatever type you need. A: +1 to Antonio. You need to change your function pointer declaration to accept additional parameters. Also, please don't start passing around void pointers or (especially) arrays of void pointers. That's just asking for trouble. If you start passing void pointers, you're going to also have to pass some kind of message to indicate what the pointer type is (or types are). This technique is rarely appropriate. If your parameters are always the same, just add them to your function pointer arguments (or possibly pack them into a struct and use that as the argument if there are a lot of parameters). If your parameters change, then consider using multiple function pointers for the multiple call scenarios instead of passing void pointers. A: If your parameters change, I would change the function pointer declaration to use the "..." technique to set up a variable number of arguments. It could save you in readability and also having to make a change for each parameter you want to pass to the function. It is definately a lot safer than passing void around. http://publications.gbdirect.co.uk/c_book/chapter9/stdarg.html Just an FYI, about the example code in the link: some places they have “n args” and others it is “n_args” with the underscore. They should all have the underscore. I thought the syntax looked a little funny until I realized they had dropped the underscore in some places. A: Use a typedef for the function pointer. See my answer for this question A: Another option would be to modify the chess_move structure instead of the function prototype. The structure is presumably defined in only one place already. Add the members to the structure, and fill the structure with appropriate data before any call which uses it.
{ "language": "en", "url": "https://stackoverflow.com/questions/11330", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Getting Java and TWAIN to play together nicely I'm working on building an app to scan directly from TWAIN scanner to a Java applet. I'm already aware of Morena and JTwain, but they cost money. I need free. I could re-invent the wheel with JNI, but it seems like someone has probably already done this as a FOSS tool. Is anyone familiar with a free tool that can get a Java applet to read directly from a TWAIN scanner? A: Calling the TWAIN API from anything except C/C++ is going to be a major pain, it relies entirely on complicated C structures that you have to replicate exactly in memory. If you need only fairly basic scanning, you could use something like GitHub site to call my old free 'EZTwain Classic' DLL (google for eztw32.dll) A: hm. I might have some homebrew available for it somewhere I could check, but for now: At our company, we basically gave up on this issue and implemented an (unfortunately win only) ActiveX solution: Site Link A: I've actually purchased the chestysoft activeX control. Been using it for about 3 years. Works great but as with all ActiveX you are restricted to IE. And this one is 32-bit only. I'm looking into a flash approach now. Since flash can capture from a camera why not from a scanner. If I remember I'll report back what I find.
{ "language": "en", "url": "https://stackoverflow.com/questions/11338", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Create PDFs from multipage forms in WebObjects I would like to automatically generate PDF documents from WebObjects based on mulitpage forms. Assuming I have a class which can assemble the related forms (java/wod files) is there a good way to then parse the individual forms into a PDF instead of going to the screen? A: The canonical response when asked about PDFs from WebObjects has generally been ReportMill. It's a PDF document generating framework that works a lot like WebObjects, and includes its own graphical PDF builder tool similar to WebObjects Builder and Interface Builder. You can bind elements in your generated PDFs to dynamic data in your application just as you would for a WOComponent. They have couple of tutorial videos on the ReportMill product page that should give you an idea of how the tool works. It'll probably be a lot easier than trying to work with FOP programmatically. A: I'm not familiar with WebObjects, but I see you have java listed in there. iText is a java api for building pdfs. If you can access a java api from WebObjects you should be able to build pdfs that way. A: ERPDFWrapper component in Project Wonder: Site link A: ScArcher2>> I have looked into different routes for creating PDFs on the fly including FOP and a few Java libraries. I think what I am really asking is if anyone has already done this in the WebObjects framework. My hope is that someone familiar with WebObjects might have done this already and have some insight that would save me some time. A: You can use ReportMill or Jasper Reports. Compared with ReportMill Jasper Reports is Free but requires learning huge library. You can use IReport or Jasper Assistant eclipse plugin(If you are using WOLips) for building report templates. My experiance both are good. A: Jasper Reports support have been added to Project Wonder a week ago : Site Link Video A talk about that new framework was done at WOWODC 2010, and it was recorded. Check wocommunity.org and the mailing list about that in October.
{ "language": "en", "url": "https://stackoverflow.com/questions/11341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: XPATHS and Default Namespaces What is the story behind XPath and support for namespaces? Did XPath as a specification precede namespaces? If I have a document where elements have been given a default namespace: <foo xmlns="uri" /> It appears as though some of the XPath processor libraries won't recognize //foo because of the namespace whereas others will. The option my team has thought about is to add a namespace prefix using regular expressions to the XPath (you can add a namespace prefix via XmlNameTable) but this seems brittle since XPath is such a flexible language when it comes to node tests. Is there a standard that applies to this? My approach is a bit hackish but it seems to work fine; I remove the xmlns declaration with a search/replace and then apply XPath. string readyForXpath = Regex.Replace(xmldocument, "xmlns=\".+\"", String.Empty ); Is that a fair approach or has anyone solved this differently? A: The issue is that an element without a namespace is declared to be in the NULL namespace - therefore if //foo matched against the namespace you consider to be the 'default' there would be no way to refer to an element in the null namespace. Remember as well that the prefix for a namespace is only a shorthand convention, the real element name (Qualified Name, or QName for short) consists of the full namespace and the local name. Changing the prefix for a namespace does not change the 'identity' of an element - if it is in the same namespace and same local name then it is the same kind of element, even if the prefix is different. XPath 2.0 (or rather XSLT 2.0) has the concept of the 'default xpath namespace'. You can set the xpath-default-namespace attribute on the xsl:stylesheet element. A: You need local-name(): http://www.w3.org/TR/xpath#function-local-name To crib from http://web.archive.org/web/20100810142303/http://jcooney.net:80/archive/2005/08/09/6517.aspx: <foo xmlns='urn:foo'> <bar> <asdf/> </bar> </foo> This expression will match the “bar” element: //*[local-name()='bar'] This one won't: //bar A: I tried something similar to what palehorse proposed and could not get it to work. Since I was getting data from a published service I couldn't change the xml. I ended up using XmlDocument and XmlNamespaceManager like so: XmlDocument doc = new XmlDocument(); doc.LoadXml(xmlWithBogusNamespace); XmlNamespaceManager nSpace = new XmlNamespaceManager(doc.NameTable); nSpace.AddNamespace("myNs", "http://theirUri"); XmlNodeList nodes = doc.SelectNodes("//myNs:NodesIWant",nSpace); //etc A: If you are trying to use xslt you can add the namespace in to the stylesheet declaration. If you do that, you must make sure that there is a prefix or it will not work. If the source XML does not have a prefix, that is still fine, you add your own prefix in the stylesheet. Stylesheet <xsl:stylesheet xmlns:fb="uri" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0"> <xsl:template match="fb:foo/bar"> <!-- do stuff here --> </xsl:template> </xsl:stylsheet> Or something like that. 
A: Using libxml it seems this works: http://xmlsoft.org/examples/xpath1.c int register_namespaces(xmlXPathContextPtr xpathCtx, const xmlChar* nsList) { xmlChar* nsListDup; xmlChar* prefix; xmlChar* href; xmlChar* next; assert(xpathCtx); assert(nsList); nsListDup = xmlStrdup(nsList); if(nsListDup == NULL) { fprintf(stderr, "Error: unable to strdup namespaces list\n"); return(-1); } next = nsListDup; while(next != NULL) { /* skip spaces */ while((*next) == ' ') next++; if((*next) == '\0') break; /* find prefix */ prefix = next; next = (xmlChar*)xmlStrchr(next, '='); if(next == NULL) { fprintf(stderr,"Error: invalid namespaces list format\n"); xmlFree(nsListDup); return(-1); } *(next++) = '\0'; /* find href */ href = next; next = (xmlChar*)xmlStrchr(next, ' '); if(next != NULL) { *(next++) = '\0'; } /* do register namespace */ if(xmlXPathRegisterNs(xpathCtx, prefix, href) != 0) { fprintf(stderr,"Error: unable to register NS with prefix=\"%s\" and href=\"%s\"\n", prefix, href); xmlFree(nsListDup); return(-1); } } xmlFree(nsListDup); return(0); }
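Tying the two main suggestions above together, here is a small, hedged C# sketch that runs both styles of query against the question's own example document; the prefix "d" is arbitrary, it just has to map to the same URI the document declares:

using System;
using System.Xml;

class DefaultNamespaceDemo
{
    static void Main()
    {
        XmlDocument doc = new XmlDocument();
        doc.LoadXml("<foo xmlns='uri'><bar>hello</bar></foo>");

        // Option 1: register a prefix for the default namespace and use it in the XPath.
        XmlNamespaceManager ns = new XmlNamespaceManager(doc.NameTable);
        ns.AddNamespace("d", "uri");
        Console.WriteLine(doc.SelectSingleNode("//d:bar", ns).InnerText);

        // Option 2: match on local-name() only and ignore the namespace entirely.
        Console.WriteLine(doc.SelectSingleNode("//*[local-name()='bar']").InnerText);
    }
}

Both lines print "hello"; a plain //bar finds nothing because the unprefixed name is looked up in the null namespace, exactly as described above.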
{ "language": "en", "url": "https://stackoverflow.com/questions/11345", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: What is good server performance monitoring software for Windows? I'm looking for some software to monitor a single server for performance alerts. Preferably free and with a reasonable default configuration. Edit: To clarify, I would like to run this software on a Windows machine and monitor a remote Windows server for CPU/memory/etc. usage alerts (not a single application). Edit: I suppose it's not necessary that this software be run remotely, I would also settle for something that ran on the server and emailed me if there was an alert. It seems like Windows performance logs and alerts might be used for this purpose somehow but it was not immediately obvious to me. Edit: Found a neat tool on the coding horror blog, not as useful for remote monitoring but very useful for things you would worry about as a server admin: http://www.winsupersite.com/showcase/winvista_ff_rmon.asp A: I've been experimenting with munin for monitoring around 8 Windows 2003 servers. http://munin.projects.linpro.no/ It's a free Linux-based system and the Windows agent works well & is easily extensible. Setup is simple if you have some minimal Linux knowledge. A: For performance monitor - start it on the server (Win+R and enter "perfmon"). Select "Performance Logs and Alerts" and expand. Select "Alerts". Select "Action" & then "New Alert". Give the alert a name, click "Add" to add a counter (there are hundreds of counters, for example CPU %), then give it some limits. Select the "Action" tab, and then decide what you want to do. You may need a third party program - for example Blat to send emails - but basically any script can be run. A: If you want something free, try Nagios. http://www.nagios.org/ A: You can configure your perfmon to collect specific counters to "Trace Logs" files on your hard drive. We usually keep daily logs for important counters: * *Vital signs (CPU, Memory, HDD space) *Application specific (ASP.Net counters / SQL Counters) *Custom counters if your application exposes such You can add "Alerts" for specific counters / counters groups and define actions when these alerts fire. A: A list of monitoring tools from the High Scalability blog A: MS's solutions used to be called MOM. It looks like it's been redesigned a bit since I last used it. A: I kind of like Perfmon myself. It comes with Windows out of the box and has support for a lot of different measurements.
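If you want to script the perfmon-style alerting yourself rather than install a full monitoring package, a small hedged sketch along these lines may help; the server name "SERVER01", the 90% threshold and the 5-second interval are placeholders, and reading counters from a remote machine needs the right permissions (for example membership in the remote box's Performance Monitor Users group):

using System;
using System.Diagnostics;
using System.Threading;

class CpuAlertPoller
{
    static void Main()
    {
        // Category / counter / instance / machine: the same names you would pick in perfmon.
        PerformanceCounter cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total", "SERVER01");

        cpu.NextValue(); // the first sample is always 0, so prime the counter once
        while (true)
        {
            Thread.Sleep(5000);
            float load = cpu.NextValue();
            if (load > 90f)
            {
                // Hook in whatever alerting you like here: write to the event log,
                // or send mail with System.Net.Mail.SmtpClient instead of Blat.
                Console.WriteLine("ALERT: CPU at {0:F0}% on SERVER01", load);
            }
        }
    }
}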
{ "language": "en", "url": "https://stackoverflow.com/questions/11359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Making human readable representations of an Integer Here's a coding problem for those that like this kind of thing. Let's see your implementations (in your language of choice, of course) of a function which returns a human readable String representation of a specified Integer. For example: * *humanReadable(1) returns "one". *humanReadable(53) returns "fifty-three". *humanReadable(723603) returns "seven hundred and twenty-three thousand, six hundred and three". *humanReadable(1456376562) returns "one billion, four hundred and fifty-six million, three hundred and seventy-six thousand, five hundred and sixty-two". Bonus points for particularly clever/elegant solutions! It might seem like a pointless exercise, but there are number of real world applications for this kind of algorithm (although supporting numbers as high as a billion may be overkill :-) A: There was already a question about this: Convert integers to written numbers The answer is for C#, but I think you can figure it out. A: import math def encodeOnesDigit(num): return ['', 'one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine'][num] def encodeTensDigit(num): return ['twenty', 'thirty', 'forty', 'fifty', 'sixty', 'seventy', 'eighty', 'ninety'][num-2] def encodeTeens(num): if num < 10: return encodeOnesDigit(num) else: return ['ten', 'eleven', 'twelve', 'thirteen', 'fourteen', 'fifteen', 'sixteen', 'seventeen', 'eighteen', 'nineteen'][num-10] def encodeTriplet(num): if num == 0: return '' str = '' if num >= 100: str = encodeOnesDigit(num / 100) + ' hundred' tens = num % 100 if tens >= 20: if str != '': str += ' ' str += encodeTensDigit(tens / 10) if tens % 10 > 0: str += '-' + encodeOnesDigit(tens % 10) elif tens != 0: if str != '': str += ' ' str += encodeTeens(tens) return str def zipNumbers(numList): if len(numList) == 1: return numList[0] strList = ['', ' thousand', ' million', ' billion'] # Add more as needed strList = strList[:len(numList)] strList.reverse() joinedList = zip(numList, strList) joinedList = [item for item in joinedList if item[0] != ''] return ', '.join(''.join(item) for item in joinedList) def humanReadable(num): if num == 0: return 'zero' negative = False if num < 0: num *= -1 negative = True numString = str(num) tripletCount = int(math.ceil(len(numString) / 3.0)) numString = numString.zfill(tripletCount * 3) tripletList = [int(numString[i*3:i*3+3]) for i in range(tripletCount)] readableList = [encodeTriplet(num) for num in tripletList] readableStr = zipNumbers(readableList) return 'negative ' + readableStr if negative else readableStr A: Supports up to 999 million, but no negative numbers: String humanReadable(int inputNumber) { if (inputNumber == -1) { return ""; } int remainder; int quotient; quotient = inputNumber / 1000000; remainder = inputNumber % 1000000; if (quotient > 0) { return humanReadable(quotient) + " million, " + humanReadable(remainder); } quotient = inputNumber / 1000; remainder = inputNumber % 1000; if (quotient > 0) { return humanReadable(quotient) + " thousand, " + humanReadable(remainder); } quotient = inputNumber / 100; remainder = inputNumber % 100; if (quotient > 0) { return humanReadable(quotient) + " hundred, " + humanReadable(remainder); } quotient = inputNumber / 10; remainder = inputNumber % 10; if (remainder == 0) { //hackish way to flag the algorithm to not output something like "twenty zero" remainder = -1; } if (quotient == 1) { switch(inputNumber) { case 10: return "ten"; case 11: return "eleven"; case 12: return "twelve"; case 13: return "thirteen"; 
case 14: return "fourteen"; case 15: return "fifteen"; case 16: return "sixteen"; case 17: return "seventeen"; case 18: return "eighteen"; case 19: return "nineteen"; } } switch(quotient) { case 2: return "twenty " + humanReadable(remainder); case 3: return "thirty " + humanReadable(remainder); case 4: return "forty " + humanReadable(remainder); case 5: return "fifty " + humanReadable(remainder); case 6: return "sixty " + humanReadable(remainder); case 7: return "seventy " + humanReadable(remainder); case 8: return "eighty " + humanReadable(remainder); case 9: return "ninety " + humanReadable(remainder); } switch(inputNumber) { case 0: return "zero"; case 1: return "one"; case 2: return "two"; case 3: return "three"; case 4: return "four"; case 5: return "five"; case 6: return "six"; case 7: return "seven"; case 8: return "eight"; case 9: return "nine"; } } A: using System; namespace HumanReadable { public static class HumanReadableExt { private static readonly string[] _digits = { "", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine", "eleven", "twelve", "thirteen", "fourteen", "fifteen", "sixteen", "seventeen", "eighteen", "nineteen" }; private static readonly string[] _teens = { "", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy", "eighty", "ninety" }; private static readonly string[] _illions = { "", "thousand", "million", "billion", "trillion" }; private static string Seg(int number) { var work = string.Empty; if (number >= 100) work += _digits[number / 100] + " hundred "; if ((number % 100) < 20) work += _digits[number % 100]; else work += _teens[(number % 100) / 10] + "-" + _digits[number % 10]; return work; } public static string HumanReadable(this int number) { if (number == 0) return "zero"; var work = string.Empty; var parts = new string[_illions.Length]; for (var ind = 0; ind < parts.Length; ind++) parts[ind] = Seg((int) (number % Math.Pow(1000, ind + 1) / Math.Pow(1000, ind))); for (var ind = 0; ind < parts.Length; ind++) if (!string.IsNullOrEmpty(parts[ind])) work = parts[ind] + " " + _illions[ind] + ", " + work; work = work.TrimEnd(',', ' '); var lastSpace = work.LastIndexOf(' '); if (lastSpace >= 0) work = work.Substring(0, lastSpace) + " and" + work.Substring(lastSpace); return work; } } class Program { static void Main(string[] args) { Console.WriteLine(1.HumanReadable()); Console.WriteLine(53.HumanReadable()); Console.WriteLine(723603.HumanReadable()); Console.WriteLine(1456376562.HumanReadable()); Console.ReadLine(); } } } A: There's one huge problem about this function implementation. It is it's future localization. That function, written by english native speaker, most probably wouldn't work right for any other language than english. It is nearly impossible to write general easy localizable function for any human language dialect in a world, unless you really need to keep it general. Actually in real world you do not need to operate with huge integer numbers, so you can just keep all the numbers in a big (or even not so big) string array. A: agreed that there are a number of real world applications. as such there's already a number of real world implementations. it's been part of bsdgames since pretty much forever... > man number
{ "language": "en", "url": "https://stackoverflow.com/questions/11381", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: GCC issue: using a member of a base class that depends on a template argument The following code doesn't compile with gcc, but does with Visual Studio: template <typename T> class A { public: T foo; }; template <typename T> class B: public A <T> { public: void bar() { cout << foo << endl; } }; I get the error: test.cpp: In member function ‘void B::bar()’: test.cpp:11: error: ‘foo’ was not declared in this scope But it should be! If I change bar to void bar() { cout << this->foo << endl; } then it does compile, but I don't think I have to do this. Is there something in the official specs of C++ that GCC is following here, or is it just a quirk? A: The main reason C++ cannot assume anything here is that the base template can be specialized for a type later. Continuing the original example: template<> class A<int> {}; B<int> x; x.bar();//this will fail because there is no member foo in A<int> A: David Joyner had the history, here is the reason. The problem when compiling B<T> is that its base class A<T> is unknown from the compiler, being a template class, so no way for the compiler to know any members from the base class. Earlier versions did some inference by actually parsing the base template class, but ISO C++ stated that this inference can lead to conflicts where there should not be. The solution to reference a base class member in a template is to use this (like you did) or specifically name the base class: template <typename T> class A { public: T foo; }; template <typename T> class B: public A <T> { public: void bar() { cout << A<T>::foo << endl; } }; More information in gcc manual. A: VC doesn't implemented two-phase lookup, while GCC does. So GCC parses templates before they are instantiated and thus finds more errors than VC. In your example, foo is a dependent name, since it depends on 'T'. Unless you tell the compiler where it comes from, it cannot check the validity of the template at all, before you instantiate it. That's why you have to tell the compiler where it comes from. A: Wow. C++ never ceases to surprise me with its weirdness. In a template definition, unqualified names will no longer find members of a dependent base (as specified by [temp.dep]/3 in the C++ standard). For example, template <typename T> struct B { int m; int n; int f (); int g (); }; int n; int g (); template <typename T> struct C : B<T> { void h () { m = 0; // error f (); // error n = 0; // ::n is modified g (); // ::g is called } }; You must make the names dependent, e.g. by prefixing them with this->. Here is the corrected definition of C::h, template <typename T> void C<T>::h () { this->m = 0; this->f (); this->n = 0 this->g (); } As an alternative solution (unfortunately not backwards compatible with GCC 3.3), you may use using declarations instead of this->: template <typename T> struct C : B<T> { using B<T>::m; using B<T>::f; using B<T>::n; using B<T>::g; void h () { m = 0; f (); n = 0; g (); } }; That's just all kinds of crazy. Thanks, David. Here's the "temp.dep/3" section of the standard [ISO/IEC 14882:2003] that they are referring to: In the definition of a class template or a member of a class template, if a base class of the class template depends on a template-parameter, the base class scope is not examined during unqualified name lookup either at the point of definition of the class template or member or during an instantiation of the class template or member. 
[Example: typedef double A; template<class T> class B { typedef int A; }; template<class T> struct X : B<T> { A a; // a has typedouble }; The type name A in the definition of X<T> binds to the typedef name defined in the global namespace scope, not to the typedef name defined in the base class B<T>. ] [Example: struct A { struct B { /* ... */ }; int a; int Y; }; int a; template<class T> struct Y : T { struct B { /* ... */ }; B b; //The B defined in Y void f(int i) { a = i; } // ::a Y* p; // Y<T> }; Y<A> ya; The members A::B, A::a, and A::Y of the template argument A do not affect the binding of names in Y<A>. ] A: This changed in gcc-3.4. The C++ parser got much more strict in that release -- per the spec but still kinda annoying for people with legacy or multi-platform code bases.
{ "language": "en", "url": "https://stackoverflow.com/questions/11405", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37" }
Q: How to host a WPF form in a MFC application I'm looking for any resources on hosting a WPF form within an existing MFC application. Can anyone point me in the right direction on how to do this? A: From what I understand (haven't tried myself), it's almost as simple as just giving the WPF control the parent's handle. Here's a Walkthrough: Hosting WPF Content in Win32.
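For the managed side of that walkthrough, the usual mechanism is HwndSource: you hand it the HWND of an MFC window and it creates a WPF-rendered child window inside it. The sketch below is hedged - the method name, the WS_CHILD | WS_VISIBLE style values and the TextBlock content are illustrative, and the plumbing that exposes this method to the MFC side (typically a C++/CLI wrapper) is not shown:

using System;
using System.Windows.Controls;
using System.Windows.Interop;

static class WpfHost
{
    // parentHwnd is the HWND of an MFC view or dialog, passed over from the C++ side.
    public static IntPtr AttachWpfContent(IntPtr parentHwnd, int width, int height)
    {
        HwndSourceParameters p = new HwndSourceParameters("WpfHostedContent")
        {
            ParentWindow = parentHwnd,
            WindowStyle = 0x40000000 | 0x10000000, // WS_CHILD | WS_VISIBLE
            Width = width,
            Height = height,
            PositionX = 0,
            PositionY = 0
        };

        HwndSource source = new HwndSource(p)
        {
            RootVisual = new TextBlock { Text = "Hello from WPF inside MFC" }
        };

        return source.Handle; // the MFC parent can position and resize this child HWND as usual
    }
}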
{ "language": "en", "url": "https://stackoverflow.com/questions/11423", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: What version of .Net framework ships with SQL Server 2008? Does SQL Server 2008 ship with the .NET 3.5 CLR, so that stored procedures written in CLR can use 3.5 features? A: I swear this isn't being pedantic, but is an important distinction -- I don't know what specifically you need when you say ".NET 3.5 CLR" -- probably the .NET 3.5 Framework? Possibly C# 3.0 language features? But the CLR that .NET 3.5 runs on is still CLR 2.0. (the link is to the same explanation re: .NET 3.0; I couldn't immediately find this info on 3.5. Actually, the best explanation of CLR vs. Framework vs. language version numbers I've yet found is on page 12 of Teach Yourself WPF in 24 Hours*) So, my point is that you can even use the features of .NET 3.5 and C# 3.0 on SQL 2005 CLR stored procedures -- we do, at my company -- and there's not even really any trickery to it. All you have to do is have the free 3.5 framework on your server. Obviously the SQL 2005 answer isn't that relevant for your specific question, but hopefully this will be helpful to the person who eventually comes across this page via Google. *disclosure: I'm friends with the authors A: Actually it ships with .NET 3.5 SP1. So yes, the stored procs can use 3.5 features and libraries.
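To make the distinction above concrete, here is a hedged sketch of a CLR stored procedure that uses C# 3.0 syntax; the class, procedure name and messages are invented. The point is that var and object/collection initializers are compiler features that emit ordinary CLR 2.0 IL, so they work in SQL Server's hosted CLR without referencing any 3.5-only assemblies (pulling in things like LINQ via System.Core is a separate question about which assemblies the server will load):

using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public static class DemoProcedures
{
    [SqlProcedure]
    public static void HelloVersion(SqlString name)
    {
        // C# 3.0 syntax, but nothing here needs a .NET 3.5 assembly at runtime.
        var greetings = new System.Collections.Generic.List<string>
        {
            "Hello, " + name.Value,
            "Hosted CLR version: " + System.Environment.Version
        };

        foreach (var line in greetings)
            SqlContext.Pipe.Send(line);
    }
}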
{ "language": "en", "url": "https://stackoverflow.com/questions/11430", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Visual Studio Setup Project Custom Dialog I have created a custom dialog for Visual Studio Setup Project using the steps described here Now I have a combobox in one of my dialogs. I want to populate the combobox with a list of all SQL Server instances running on the local network. It's trivial to get the server list ... but I'm completely lost on how to make them display in the combobox. I would appreciate your help and some code might also be nice as I'm beginner :). A: I've always found the custom dialogs in visual studio setup projects to be woefully limited and barely functional. By contrast, I normally create custom actions that display winforms gui's for any remotely difficult tasks during setup. Works really well and you can do just about anything you want by creating a custom action and passing a few parameters across. In the dayjob we built a collection of common custom actions for tasks like application config and database creation / script execution to get around custom dialog limitations. A: I guess you'll have to go beyond the out-of-the-box setup and deployment package and try a third party app. You may want to look at: * *Wix *Nullsoft Scriptable Install System Both are free; they might give you the customization that you need.
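Following the custom-action route suggested in the first answer (rather than the setup project's own dialog editor), a hedged sketch of an installer class that pops a WinForms dialog and fills a ComboBox with the SQL Server instances visible on the network might look like this; the class name, dialog layout and the "SqlInstance" key are all made up, and SqlDataSourceEnumerator only reports instances whose SQL Browser service is reachable:

using System.ComponentModel;
using System.Configuration.Install;
using System.Data;
using System.Data.Sql;
using System.Windows.Forms;

[RunInstaller(true)]
public class PickSqlInstanceAction : Installer
{
    public override void Install(System.Collections.IDictionary stateSaver)
    {
        base.Install(stateSaver);

        using (Form form = new Form { Text = "Select SQL Server instance", Width = 400, Height = 150 })
        using (ComboBox combo = new ComboBox { Left = 10, Top = 10, Width = 360, DropDownStyle = ComboBoxStyle.DropDownList })
        using (Button ok = new Button { Text = "OK", Left = 10, Top = 50, DialogResult = DialogResult.OK })
        {
            // Enumerate the SQL Server instances visible on the local network.
            DataTable servers = SqlDataSourceEnumerator.Instance.GetDataSources();
            foreach (DataRow row in servers.Rows)
            {
                string server = row["ServerName"].ToString();
                string instance = row["InstanceName"].ToString();
                combo.Items.Add(instance.Length == 0 ? server : server + "\\" + instance);
            }

            form.Controls.Add(combo);
            form.Controls.Add(ok);
            form.AcceptButton = ok;

            if (form.ShowDialog() == DialogResult.OK && combo.SelectedItem != null)
                stateSaver["SqlInstance"] = combo.SelectedItem.ToString(); // remembered for later install steps
        }
    }
}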
{ "language": "en", "url": "https://stackoverflow.com/questions/11439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: ASP.NET Proxy Application Let me try to explain what I need. I have a server that is visible from the internet. What I need is to create an ASP.NET application that takes the requests made to a web site, forwards them to an internal server, then takes the response and publishes the info back to the client. For the client this should be totally transparent. For various reasons I cannot redirect the port to the internal server. What I can do - but don't know how, and maybe the answer lies there - is to create a new web site that is hosted on the other server. A: Why won't any old proxy software work for this? Why does it need to be an ASP.NET application? There are TONS of tools out there (both Windows and *nix) that will get the job done quite easily. Check Squid or NetProxy for starters. If you need to integrate with IIS, IISProxy looks like it would do the trick too. A: I use apache mod_proxy and mod_proxy_balancer. Works awesome running 5 domains on a cluster of 4 web boxes.
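If it really has to be an ASP.NET application rather than one of the proxy tools suggested above, a bare-bones pass-through can be sketched with an IHttpHandler; the internal host name is a placeholder, and this deliberately ignores request bodies, headers, cookies and error handling - exactly the things a real reverse proxy takes care of for you, which is why the answers above point at dedicated tools:

using System.IO;
using System.Net;
using System.Web;

// Map this handler to a path (or to everything) in web.config on the public-facing site.
public class PassThroughProxyHandler : IHttpHandler
{
    private const string InternalBase = "http://internal-server:8080"; // placeholder

    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // Re-issue the incoming request against the internal server and stream the answer back.
        string targetUrl = InternalBase + context.Request.RawUrl;
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(targetUrl);
        request.Method = context.Request.HttpMethod;

        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        using (Stream body = response.GetResponseStream())
        {
            context.Response.ContentType = response.ContentType;
            byte[] buffer = new byte[8192];
            int read;
            while ((read = body.Read(buffer, 0, buffer.Length)) > 0)
                context.Response.OutputStream.Write(buffer, 0, read);
        }
    }
}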
{ "language": "en", "url": "https://stackoverflow.com/questions/11460", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Sharepoint Wikis Ok, I've seen a few posts that mention a few other posts about not using SP wikis because they suck. Since we are looking at doing our wiki in SP, I need to know why we shouldn't do it for a group of 6 automation-developers to document the steps in various automated processes and the changes that have to be made from time to time. A: My two cents worth as a wiki content creator and super-user, rather than an administrator or developer: I am currently editing a document in Sharepoint Wiki as I type this, and it is by far the worst editor I have ever come across. To be precise, I'm using Sharepoint Foundation 2010 (previously known as WSS), editing pages using IE 9. To sum up the problems I've faced: When creating wiki content you want to concentrate on the content and the wiki engine should be so easy to use as to be almost invisible. With Sharepoint that is not the case. I really struggle with the pseudo-WYSIWYG editor, having to fix frequent formatting problems. I estimate that I'm about 15% less productive writing wiki content with Sharepoint than I am with ScrewTurn or Wikimedia because I have to deal with the formatting issues. If I spend a day writing wiki pages I would lose about an hour trying to fix formatting issues. For background: I've created four internal wikis at our company - the first in Wikimedia, the wiki engine behind Wikipedia, the next two in ScrewTurn, and a final one in Sharepoint. In each wiki I've written about 50-100 pages. In both ScrewTurn and Wikimedia the editor looks fairly primitive - a plain text editor that uses simple wiki mark-up codes for formatting. Each has a row of buttons that can apply mark-up codes for simple things like bold and italic formatting, and to create links, so beginners do not need to learn the mark-up codes by heart. While the editors look plain they turn out to be really simple to use, especially for fixing formatting problems. Sharepoint Wiki, on the other hand, looks slick but is terrible for editing. Instead of using a plain text editor with wiki mark-up it has a WYSIWYG editor that looks much more sophisticated than other wiki editors. However it has personality, an evil one. It frequently adds blank lines or changes the colour of text. When I select text to format then go to the Markup Styles dropdown to format it, sometimes the act of selecting an item from the dropdown list deselects the selected text so the formatting applies to text at a random location. Inserting text copied from Word sometimes causes the editor to double or triple up on blank lines between paragraphs at other places on the page. There appears to be no easy way of creating a table, apart from writing HTML. The biggest problem about the editor, however, is that you can't easily see what is going on behind the scenes so it's difficult to fix it. Yes, it's possible to edit the page's HTML but that really defeats the purpose of a wiki. The overall impression I get as a user, is that this is alpha-level code that has been knocked up by a summer intern. I know Foundation is the free version so perhaps I get what we've paid for but I cannot believe a professional software company put out this product. A: For a group of 6 people that will be making "every now and then" edits, the built-in wiki will be fine. A: The Sharepoint Wiki is essentially a list of Static HTML Pages, with the only Wiki-feature being [[article]] links. No Templates, No Categories, nothing. 
We ended up having a separate MediaWiki and only use the Sharepoint wiki for text-based content that does not need much layout. A: Don't forget the Community Kit for Sharepoint - Enhanced Wiki Edition. This adds some features to the out of the box version. A: Before the rant, here is my overall experience with SharePoint as a wiki. It is a poorly implemented feature that failed becouse there was a fundemental lack of investigation into what current wiki environments provide. That is why it failed in it's editor and why it misses on points like: tagging, history comparison, and poorly generated html code. You need to skip it and get something else that does the job better and link to it from SharePoint. Having production experience with both products, I'd recommend ScrewTurn over SharePoint. see edit history for rant A: My company rolled out sharepoint recently, and I have to say my user experience was Very Bad. And I'm not just saying I was apprehensive to using it: I went in with an open mind and tried it, and many things just felt like they didn't really work right. The reasons Luke mentioned more or less cover it. Why wouldn't you consider using something else like Screwturn Wiki which Jeff donated to a short while ago? I haven't used Screwturn myself, but it is free and open source, and may be a faster lightweight solution for what you need. A: We looked at Sharepoint for a department wiki a few months ago. Even though we're primarily an MS shop, we went with DokuWiki. Open-source, so easy to keep up to date, great plugins, and a file-based back end. A: I would also temper the ratings of the OOB wiki and its lack of functionality with the technical level of the authors here. I agree that the SP wiki might qualify in name only - certainly when compared to some more robust offerings - but remember as an admin - your primary success is determined by end user adoption. In short - for every feature that a wiki like Confluence adds, it also adds user education, syntax, etc. While I would love SP wiki to have more "wiki-like" - there is a certain, undescribable satisfaction you can take when your CIO adds an entry in the company wiki - or you are recognized by a group of administrative assistants who find the new wiki "revolutionary". In short - the built in functionality may be lacking to the jaded eyes of us tech professionals, but to the technologically naive, its pretty easy to train on, and can expose them to a technology they may have heard of but could never (before this) understand or imagine using. A: Here are some caveats I came across that will vanish if you use a wiki other than Sharepoint. Sharepoint lets you create tons of separate wikis, but I'd recommend having one big wiki for everything. My company made a bunch of little wikis for each project/feature, but only admins can create the individual wikis, so if I want to write about something that isn't doesn't match one of the predefined categories, I have to find a manager to create the wiki first. Secondly, if you use Sharepoint make sure everyone on your staff only uses IE, since Firefox doesn't support the WYSIWIG editor. This is a good thing for most wikis, but makes collaborating difficult in Sharepoint. Imagine editing auto-generated HTML in a tiny little box all day. Third, try to write up your project documentation in the wiki and resist the temptation to upload Word docs to the Sharepoint library. No point in writing up all your docs twice and watching things get more and more out of sync. 
Finally, image support in Sharepoint wikis is terrible. You have to add a file to a document library somewhere and type in the URL. My images were forever getting deleted because they don't seem to make much sense out of context. A: I've played very briefly with SharePoint Wiki Plus. It's a third-party extension that adds features to the SharePoint Wiki. For serious wiki users, you probably need something more than the SharePoint-provided Wiki - either via an extension or a dedicated Wiki product. A: Maybe try http://wordtosharepoint.codeplex.com/ for migrating your Word content to SharePoint? It takes care of linking images and most other things. A: Screwturn is wicked awesome - and it is C# / .Net. Sharepoint 2010 is supposed to have better wiki features, and there is always the Community Kit for Sharepoint. If you are able to leave the Sharepoint Wiki behind - you could always head over to http://www.wikimatrix.org to find the wiki that works for you. A: I fully concur with the above (Keng). Whatever that thing is within SharePoint (currently using 2010), it is NOT a Wiki by a long shot. I am implementing an automated documenting solution, where I extract config and other info (like perldoc markup) from source code and XML config files. It inserts the info in a set of DokuWiki pages, complete with formatting markup (including tables). It comes out perfectly formatted and works with a couple of tens of lines of perl, includes internal links to manually edited static doc pages, and supports namespaces so I can have my information logically organised. There is no way I could do that in SharePoint (sigh - company direction)... The best I can do is try to make the DokuWiki template sort of resemble the SharePoint site (to keep the look and feel similar) and link out of SharePoint. :-( A: I have a much more positive view of Microsoft's Sharepoint Wiki. In many ways it reminds me of FrontPage 98 -- and that was an unfairly maligned product. The comment about using a list is misguided. Sharepoint Wikis ARE Sharepoint lists, in which each page is a list item with an HTML attachment. It's true that you can't link into a page, but if the pages are short I don't see that as a problem. SP Wiki makes it very easy to have short pages. You can manipulate the Wiki attributes from Access 2008 if you wish, and you can add attributes to the wiki list items as desired. For example -- do you want categories? Just add them by editing the list. Want specific views of list items? Create them too. There's real genius in the way Microsoft built their Wiki framework atop Sharepoint lists -- which are undeniably well done. The TRUE drawback of Sharepoint Wiki was mentioned by famerchris. The approach to image management is surprisingly awful. It's such a serious problem that you should consider other Wikis for this reason alone. There is a convoluted workaround that I use. It takes advantage of the superb Sharepoint support and image editing integrated with Windows Live Writer. * *Create an SP blog that will hold the images that will be referenced in the wiki. *Use Windows Live Writer to post to the wiki-image-blog. Drop your image into WLW, resize it as needed, etc. If you like, use WLW to write the first draft of the image's associated wiki text as well. *After you post to the Wiki, copy and paste the image and text into the Wiki editor's rich text field. This takes surprisingly little time, far less than any other option I've read of. I admit, it is convoluted. 
Other than the image problems I'm pleased and impressed with the product. If only Microsoft had thought harder about images ... if only ... A: The default wiki included with Sharepoint doesn't support common wiki features well at all. There is no way to edit a single section of a page, and no way to link directly to a particular section on another page. The backend is in HTML so you lose the ability to edit in plaintext using simple syntax. The diff feature can't span multiple versions. Poor cross-browser support of WYSIWYG editing. No way to auto-insert a table of contents... There are, however, other wiki add-ins for Sharepoint which I can't categorically dismiss; for instance, Confluence makes an add-in for Sharepoint. I haven't evaluated this software myself, and Confluence is somewhat expensive ($1,200 for a 25-user license), although if you are already on Sharepoint I sense large corporate coffers :P. There also appear to be some free add-ins like CKS Enhanced Wiki, but that appears to have a lot of the same problems mentioned above. A: We run into this topic all the time, and the first question I have taken to asking people is "Why do you need a wiki"? Almost always the answers are things like "ease of editing", "multiple contributors", and "Word is too heavyweight". Very rarely have we seen anyone ask for what I consider to be uniquely wiki-like features (special "magic" markup, fine-grained version history showing changes, etc). Also, they usually want some kind of categorization of things, not just completely free-form pages. In the SharePoint world these things should scream "list" at you if you've been working with the tool for a while. There is basically no particular reason to use a wiki for these knowledge base-style applications, especially since "ease of editing" usually directly conflicts with the idea of learning a special markup language for most users. Throw a couple of rich-text columns in there, and you're all set. If you really don't like the built-in rich-text editor (yes, the image uploading process is clunky and it doesn't work in Firefox), have someone in your organization go drop the 8 Benjamins and go get the RadEditor for SharePoint. It should pretty much handle those concerns. Generally once we've gotten over the "but it needs to be a wiki" dogma, we've had pretty good customer reception to just using lists. In some cases, where a little more of a page templating facility was required, we turned to using the WCM features of MOSS, which requires a little more up-front thought about templates, but also has a better out-of-the-box experience for things like content snippets and image handling. A: Because the default implementation is not a wiki, it is an HTML editor. If you've used a wiki before you'll know the difference. Just look at "Your answer" at the bottom of this page to see the difference. You use markup in a wiki, which is relatively easy to read and edit. Formatted HTML completely obscures what is written.
{ "language": "en", "url": "https://stackoverflow.com/questions/11462", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: String To Lower/Upper in C++ What is the best way people have found to do String to Lower case / Upper case in C++? The issue is complicated by the fact that C++ isn't an English-only programming language. Is there a good multilingual method? A: For copy-pasters hoping to use Nic Strong's answer, note the spelling error in "use_factet" and the missing third parameter to std::transform: locale loc(""); const ctype<char>& ct = use_factet<ctype<char> >(loc); transform(str.begin(), str.end(), std::bind1st(std::mem_fun(&ctype<char>::tolower), &ct)); should be locale loc(""); const ctype<char>& ct = use_facet<ctype<char> >(loc); transform(str.begin(), str.end(), str.begin(), std::bind1st(std::mem_fun(&ctype<char>::tolower), &ct)); A: You should also review this question. Basically the problem is that the standard C/C++ libraries weren't built to handle Unicode data, so you will have to look to other libraries. This may change as the C++ standard is updated. I know the next compiler from Borland (CodeGear) will have Unicode support, and I would guess Microsoft's C++ compiler will have, or already has, string libraries that support Unicode. A: As Darren told you, the easiest method is to use std::transform. But beware that in some languages, like German for instance, there isn't always a one-to-one mapping between lowercase and uppercase. The "eszett" lowercase character (ß, which looks like the Greek letter beta) is transformed to "SS" in uppercase. A: #include <algorithm> std::string data = "Abc"; std::transform(data.begin(), data.end(), data.begin(), ::toupper); http://notfaq.wordpress.com/2007/08/04/cc-convert-string-to-upperlower-case/ Also, CodeProject article for common string methods: http://www.codeproject.com/KB/stl/STL_string_util.aspx A: > std::string data = "Abc"; > std::transform(data.begin(), data.end(), data.begin(), ::toupper); This will work, but this will use the standard "C" locale. You can use facets if you need to get a tolower for another locale. The above code using facets would be: locale loc(""); const ctype<char>& ct = use_facet<ctype<char> >(loc); transform(str.begin(), str.end(), std::bind1st(std::mem_fun(&ctype<char>::tolower), &ct)); A: If you have Boost, then it has the simplest way. Have a look at to_upper()/to_lower() in Boost string algorithms. A: I have found a way to convert the case of unicode (and multilingual) characters, but you need to know/find (somehow) the locale of the character: #include <locale.h> _locale_t locale = _create_locale(LC_CTYPE, "Greek"); AfxMessageBox((CString)""+(TCHAR)_totupper_l(_T('α'), locale)); _free_locale(locale); I haven't found a way to do that yet... If someone knows how, let me know. Setting the locale to NULL doesn't work... A: The VCL has a SysUtils.hpp which has LowerCase(unicodeStringVar) and UpperCase(unicodeStringVar), which might work for you. I use this in C++ Builder 2009. A: What Steve says is right, but I guess that if your code had to support several languages, you could have a factory method that encapsulates a set of methods that do the relevant toUpper or toLower based on that language. A: Based on Kyle_the_hacker's answer, with my extras. Ubuntu In terminal List all locales locale -a Install all locales sudo apt-get install -y locales locales-all Compile main.cpp $ g++ main.cpp Run compiled program $ ./a.out Results Zoë Saldaña played in La maldición del padre Cardona. ëèñ αω óóChloë Zoë Saldaña played in La maldición del padre Cardona. ëèñ αω óóChloë ZOË SALDAÑA PLAYED IN LA MALDICIÓN DEL PADRE CARDONA. 
ËÈÑ ΑΩ ÓÓCHLOË ZOË SALDAÑA PLAYED IN LA MALDICIÓN DEL PADRE CARDONA. ËÈÑ ΑΩ ÓÓCHLOË zoë saldaña played in la maldición del padre cardona. ëèñ αω óóchloë zoë saldaña played in la maldición del padre cardona. ëèñ αω óóchloë Windows In cmd run VCVARS developer tools "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\Build\vcvars64.bat" Compile main.cpp > cl /EHa main.cpp /D "_DEBUG" /D "_CONSOLE" /D "_UNICODE" /D "UNICODE" /std:c++17 /DYNAMICBASE "kernel32.lib" "user32.lib" "gdi32.lib" "winspool.lib" "comdlg32.lib" "advapi32.lib" "shell32.lib" "ole32.lib" "oleaut32.lib" "uuid.lib" "odbc32.lib" "odbccp32.lib" /MTd Compilador de optimización de C/C++ de Microsoft (R) versión 19.27.29111 para x64 (C) Microsoft Corporation. Todos los derechos reservados. main.cpp Microsoft (R) Incremental Linker Version 14.27.29111.0 Copyright (C) Microsoft Corporation. All rights reserved. /out:main.exe main.obj kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib Run main.exe >main.exe Results Zoë Saldaña played in La maldición del padre Cardona. ëèñ αω óóChloë Zoë Saldaña played in La maldición del padre Cardona. ëèñ αω óóChloë ZOË SALDAÑA PLAYED IN LA MALDICIÓN DEL PADRE CARDONA. ËÈÑ ΑΩ ÓÓCHLOË ZOË SALDAÑA PLAYED IN LA MALDICIÓN DEL PADRE CARDONA. ËÈÑ ΑΩ ÓÓCHLOË zoë saldaña played in la maldición del padre cardona. ëèñ αω óóchloë zoë saldaña played in la maldición del padre cardona. ëèñ αω óóchloë The code - main.cpp This code was only tested on Windows x64 and Ubuntu Linux x64. /* * Filename: c:\Users\x\Cpp\main.cpp * Path: c:\Users\x\Cpp * Filename: /home/x/Cpp/main.cpp * Path: /home/x/Cpp * Created Date: Saturday, October 17th 2020, 10:43:31 pm * Author: Joma * * No Copyright 2020 */ #include <iostream> #include <set> #include <string> #include <locale> // WINDOWS #if (_WIN32) #include <Windows.h> #include <conio.h> #define WINDOWS_PLATFORM 1 #define DLLCALL STDCALL #define DLLIMPORT _declspec(dllimport) #define DLLEXPORT _declspec(dllexport) #define DLLPRIVATE #define NOMINMAX //EMSCRIPTEN #elif defined(__EMSCRIPTEN__) #include <emscripten/emscripten.h> #include <emscripten/bind.h> #include <unistd.h> #include <termios.h> #define EMSCRIPTEN_PLATFORM 1 #define DLLCALL #define DLLIMPORT #define DLLEXPORT __attribute__((visibility("default"))) #define DLLPRIVATE __attribute__((visibility("hidden"))) // LINUX - Ubuntu, Fedora, , Centos, Debian, RedHat #elif (__LINUX__ || __gnu_linux__ || __linux__ || __linux || linux) #define LINUX_PLATFORM 1 #include <unistd.h> #include <termios.h> #define DLLCALL CDECL #define DLLIMPORT #define DLLEXPORT __attribute__((visibility("default"))) #define DLLPRIVATE __attribute__((visibility("hidden"))) #define CoTaskMemAlloc(p) malloc(p) #define CoTaskMemFree(p) free(p) //ANDROID #elif (__ANDROID__ || ANDROID) #define ANDROID_PLATFORM 1 #define DLLCALL #define DLLIMPORT #define DLLEXPORT __attribute__((visibility("default"))) #define DLLPRIVATE __attribute__((visibility("hidden"))) //MACOS #elif defined(__APPLE__) #include <unistd.h> #include <termios.h> #define DLLCALL #define DLLIMPORT #define DLLEXPORT __attribute__((visibility("default"))) #define DLLPRIVATE __attribute__((visibility("hidden"))) #include "TargetConditionals.h" #if TARGET_OS_IPHONE && TARGET_IPHONE_SIMULATOR #define IOS_SIMULATOR_PLATFORM 1 #elif TARGET_OS_IPHONE #define IOS_PLATFORM 1 #elif TARGET_OS_MAC #define MACOS_PLATFORM 1 #else #endif #endif typedef std::string String; typedef std::wstring 
WString; #define EMPTY_STRING u8""s #define EMPTY_WSTRING L""s using namespace std::literals::string_literals; class Strings { public: static String WideStringToString(const WString& wstr) { if (wstr.empty()) { return String(); } size_t pos; size_t begin = 0; String ret; #if WINDOWS_PLATFORM int size; pos = wstr.find(static_cast<wchar_t>(0), begin); while (pos != WString::npos && begin < wstr.length()) { WString segment = WString(&wstr[begin], pos - begin); size = WideCharToMultiByte(CP_UTF8, WC_ERR_INVALID_CHARS, &segment[0], segment.size(), NULL, 0, NULL, NULL); String converted = String(size, 0); WideCharToMultiByte(CP_UTF8, WC_ERR_INVALID_CHARS, &segment[0], segment.size(), &converted[0], converted.size(), NULL, NULL); ret.append(converted); ret.append({ 0 }); begin = pos + 1; pos = wstr.find(static_cast<wchar_t>(0), begin); } if (begin <= wstr.length()) { WString segment = WString(&wstr[begin], wstr.length() - begin); size = WideCharToMultiByte(CP_UTF8, WC_ERR_INVALID_CHARS, &segment[0], segment.size(), NULL, 0, NULL, NULL); String converted = String(size, 0); WideCharToMultiByte(CP_UTF8, WC_ERR_INVALID_CHARS, &segment[0], segment.size(), &converted[0], converted.size(), NULL, NULL); ret.append(converted); } #elif LINUX_PLATFORM || MACOS_PLATFORM || EMSCRIPTEN_PLATFORM size_t size; pos = wstr.find(static_cast<wchar_t>(0), begin); while (pos != WString::npos && begin < wstr.length()) { WString segment = WString(&wstr[begin], pos - begin); size = wcstombs(nullptr, segment.c_str(), 0); String converted = String(size, 0); wcstombs(&converted[0], segment.c_str(), converted.size()); ret.append(converted); ret.append({ 0 }); begin = pos + 1; pos = wstr.find(static_cast<wchar_t>(0), begin); } if (begin <= wstr.length()) { WString segment = WString(&wstr[begin], wstr.length() - begin); size = wcstombs(nullptr, segment.c_str(), 0); String converted = String(size, 0); wcstombs(&converted[0], segment.c_str(), converted.size()); ret.append(converted); } #else static_assert(false, "Unknown Platform"); #endif return ret; } static WString StringToWideString(const String& str) { if (str.empty()) { return WString(); } size_t pos; size_t begin = 0; WString ret; #ifdef WINDOWS_PLATFORM int size = 0; pos = str.find(static_cast<char>(0), begin); while (pos != std::string::npos) { std::string segment = std::string(&str[begin], pos - begin); std::wstring converted = std::wstring(segment.size() + 1, 0); size = MultiByteToWideChar(CP_UTF8, MB_ERR_INVALID_CHARS, &segment[0], segment.size(), &converted[0], converted.length()); converted.resize(size); ret.append(converted); ret.append({ 0 }); begin = pos + 1; pos = str.find(static_cast<char>(0), begin); } if (begin < str.length()) { std::string segment = std::string(&str[begin], str.length() - begin); std::wstring converted = std::wstring(segment.size() + 1, 0); size = MultiByteToWideChar(CP_UTF8, MB_ERR_INVALID_CHARS, segment.c_str(), segment.size(), &converted[0], converted.length()); converted.resize(size); ret.append(converted); } #elif LINUX_PLATFORM || MACOS_PLATFORM || EMSCRIPTEN_PLATFORM size_t size; pos = str.find(static_cast<char>(0), begin); while (pos != String::npos) { String segment = String(&str[begin], pos - begin); WString converted = WString(segment.size(), 0); size = mbstowcs(&converted[0], &segment[0], converted.size()); converted.resize(size); ret.append(converted); ret.append({ 0 }); begin = pos + 1; pos = str.find(static_cast<char>(0), begin); } if (begin < str.length()) { String segment = String(&str[begin], str.length() - begin); WString 
converted = WString(segment.size(), 0); size = mbstowcs(&converted[0], &segment[0], converted.size()); converted.resize(size); ret.append(converted); } #else static_assert(false, "Unknown Platform"); #endif return ret; } static WString ToUpper(const WString& data) { WString result = data; auto& f = std::use_facet<std::ctype<wchar_t>>(std::locale()); f.toupper(&result[0], &result[0] + result.size()); return result; } static String ToUpper(const String& data) { return WideStringToString(ToUpper(StringToWideString(data))); } static WString ToLower(const WString& data) { WString result = data; auto& f = std::use_facet<std::ctype<wchar_t>>(std::locale()); f.tolower(&result[0], &result[0] + result.size()); return result; } static String ToLower(const String& data) { return WideStringToString(ToLower(StringToWideString(data))); } }; enum class ConsoleTextStyle { DEFAULT = 0, BOLD = 1, FAINT = 2, ITALIC = 3, UNDERLINE = 4, SLOW_BLINK = 5, RAPID_BLINK = 6, REVERSE = 7, }; enum class ConsoleForeground { DEFAULT = 39, BLACK = 30, DARK_RED = 31, DARK_GREEN = 32, DARK_YELLOW = 33, DARK_BLUE = 34, DARK_MAGENTA = 35, DARK_CYAN = 36, GRAY = 37, DARK_GRAY = 90, RED = 91, GREEN = 92, YELLOW = 93, BLUE = 94, MAGENTA = 95, CYAN = 96, WHITE = 97 }; enum class ConsoleBackground { DEFAULT = 49, BLACK = 40, DARK_RED = 41, DARK_GREEN = 42, DARK_YELLOW = 43, DARK_BLUE = 44, DARK_MAGENTA = 45, DARK_CYAN = 46, GRAY = 47, DARK_GRAY = 100, RED = 101, GREEN = 102, YELLOW = 103, BLUE = 104, MAGENTA = 105, CYAN = 106, WHITE = 107 }; class Console { private: static void EnableVirtualTermimalProcessing() { #if defined WINDOWS_PLATFORM HANDLE hOut = GetStdHandle(STD_OUTPUT_HANDLE); DWORD dwMode = 0; GetConsoleMode(hOut, &dwMode); if (!(dwMode & ENABLE_VIRTUAL_TERMINAL_PROCESSING)) { dwMode |= ENABLE_VIRTUAL_TERMINAL_PROCESSING; SetConsoleMode(hOut, dwMode); } #endif } static void ResetTerminalFormat() { std::cout << u8"\033[0m"; } static void SetVirtualTerminalFormat(ConsoleForeground foreground, ConsoleBackground background, std::set<ConsoleTextStyle> styles) { String format = u8"\033["; format.append(std::to_string(static_cast<int>(foreground))); format.append(u8";"); format.append(std::to_string(static_cast<int>(background))); if (styles.size() > 0) { for (auto it = styles.begin(); it != styles.end(); ++it) { format.append(u8";"); format.append(std::to_string(static_cast<int>(*it))); } } format.append(u8"m"); std::cout << format; } public: static void Clear() { #ifdef WINDOWS_PLATFORM std::system(u8"cls"); #elif LINUX_PLATFORM || defined MACOS_PLATFORM std::system(u8"clear"); #elif EMSCRIPTEN_PLATFORM emscripten::val::global()["console"].call<void>(u8"clear"); #else static_assert(false, "Unknown Platform"); #endif } static void Write(const String& s, ConsoleForeground foreground = ConsoleForeground::DEFAULT, ConsoleBackground background = ConsoleBackground::DEFAULT, std::set<ConsoleTextStyle> styles = {}) { #ifndef EMSCRIPTEN_PLATFORM EnableVirtualTermimalProcessing(); SetVirtualTerminalFormat(foreground, background, styles); #endif String str = s; #ifdef WINDOWS_PLATFORM WString unicode = Strings::StringToWideString(str); WriteConsole(GetStdHandle(STD_OUTPUT_HANDLE), unicode.c_str(), static_cast<DWORD>(unicode.length()), nullptr, nullptr); #elif defined LINUX_PLATFORM || defined MACOS_PLATFORM || EMSCRIPTEN_PLATFORM std::cout << str; #else static_assert(false, "Unknown Platform"); #endif #ifndef EMSCRIPTEN_PLATFORM ResetTerminalFormat(); #endif } static void WriteLine(const String& s, ConsoleForeground foreground = 
ConsoleForeground::DEFAULT, ConsoleBackground background = ConsoleBackground::DEFAULT, std::set<ConsoleTextStyle> styles = {}) { Write(s, foreground, background, styles); std::cout << std::endl; } static void Write(const WString& s, ConsoleForeground foreground = ConsoleForeground::DEFAULT, ConsoleBackground background = ConsoleBackground::DEFAULT, std::set<ConsoleTextStyle> styles = {}) { #ifndef EMSCRIPTEN_PLATFORM EnableVirtualTermimalProcessing(); SetVirtualTerminalFormat(foreground, background, styles); #endif WString str = s; #ifdef WINDOWS_PLATFORM WriteConsole(GetStdHandle(STD_OUTPUT_HANDLE), str.c_str(), static_cast<DWORD>(str.length()), nullptr, nullptr); #elif LINUX_PLATFORM || MACOS_PLATFORM || EMSCRIPTEN_PLATFORM std::cout << Strings::WideStringToString(str); #else static_assert(false, "Unknown Platform"); #endif #ifndef EMSCRIPTEN_PLATFORM ResetTerminalFormat(); #endif } static void WriteLine(const WString& s, ConsoleForeground foreground = ConsoleForeground::DEFAULT, ConsoleBackground background = ConsoleBackground::DEFAULT, std::set<ConsoleTextStyle> styles = {}) { Write(s, foreground, background, styles); std::cout << std::endl; } static void WriteLine() { std::cout << std::endl; } static void Pause() { char c; do { c = getchar(); std::cout << "Press Key " << std::endl; } while (c != 64); std::cout << "KeyPressed" << std::endl; } static int PauseAny(bool printWhenPressed = false, ConsoleForeground foreground = ConsoleForeground::DEFAULT, ConsoleBackground background = ConsoleBackground::DEFAULT, std::set<ConsoleTextStyle> styles = {}) { int ch; #ifdef WINDOWS_PLATFORM ch = _getch(); #elif LINUX_PLATFORM || MACOS_PLATFORM || EMSCRIPTEN_PLATFORM struct termios oldt, newt; tcgetattr(STDIN_FILENO, &oldt); newt = oldt; newt.c_lflag &= ~(ICANON | ECHO); tcsetattr(STDIN_FILENO, TCSANOW, &newt); ch = getchar(); tcsetattr(STDIN_FILENO, TCSANOW, &oldt); #else static_assert(false, "Unknown Platform"); #endif if (printWhenPressed) { Console::Write(String(1, ch), foreground, background, styles); } return ch; } }; int main() { std::locale::global(std::locale(u8"en_US.UTF-8")); String dataStr = u8"Zoë Saldaña played in La maldición del padre Cardona. ëèñ αω óóChloë"; WString dataWStr = L"Zoë Saldaña played in La maldición del padre Cardona. ëèñ αω óóChloë"; std::string locale = u8""; //std::string locale = u8"de_DE.UTF-8"; //std::string locale = u8"en_US.UTF-8"; Console::WriteLine(dataStr); Console::WriteLine(dataWStr); dataStr = Strings::ToUpper(dataStr); dataWStr = Strings::ToUpper(dataWStr); Console::WriteLine(dataStr); Console::WriteLine(dataWStr); dataStr = Strings::ToLower(dataStr); dataWStr = Strings::ToLower(dataWStr); Console::WriteLine(dataStr); Console::WriteLine(dataWStr); Console::WriteLine(u8"Press any key to exit"s, ConsoleForeground::DARK_GRAY); Console::PauseAny(); return 0; }
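A: A small correctness note on the plain std::transform approach shown in earlier answers: passing a plain (possibly signed) char straight to ::tolower or ::toupper is undefined behavior for characters outside the basic ASCII range, because those functions expect a value representable as unsigned char (or EOF). A minimal sketch of the usual workaround follows - it only handles single-byte characters in the current locale and is not a substitute for the locale-aware or Unicode approaches above:
#include <algorithm>
#include <cctype>
#include <string>
std::string data = "Abc";
// Cast through unsigned char before calling std::tolower to avoid undefined behavior on negative char values
std::transform(data.begin(), data.end(), data.begin(),
               [](unsigned char c) { return static_cast<char>(std::tolower(c)); });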
{ "language": "en", "url": "https://stackoverflow.com/questions/11491", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: Speeding up an ASP.Net Web Site or Application I have an Ajax.Net enabled ASP.Net 2.0 web site. Hosting for both the site and the database is out of my control, as is the database's schema. In testing on hardware I do control, the site performs well; however, on the client's hardware there are noticeable delays when reloading or changing pages. What I would like to do is make my application as compact and speedy as possible when I deliver it. One idea is to set expiration dates for all of the site's static resources so they aren't recalled on page loads. By resources I mean images, linked style sheets and JavaScript source files. Is there an easy way to do this? What other ways are there to optimize a .Net web site? UPDATE: I've run YSlow on the site and the areas where I am getting hit the hardest are in the number of JavaScript and Style Sheets being loaded (23 JS files and 5 style sheets). All but one (the main style sheet) have been inserted by Ajax.Net and ASP.Net. Why so many? A: If you are using Firefox to test your website, you might want to try a nifty Firefox extension from Yahoo! called YSlow. It analyzes your web pages and provides grades from A-F (A being the best and F being the worst) for each of the best practices for high-performance websites. It will help you to track down the elements of your website which you could optimize to gain speedups. UPDATE: The YSlow extension is now compatible with all modern browsers such as Firefox, Chrome, Opera, Safari and others; read more here. A: Turn viewstate off by default; it will make a night-and-day difference on even the simplest pages. A: * *Script Combining in .NET 3.5 SP1 *Best Practices for fast websites *HTTP Compression (gzip) *Compress JS / CSS (different from HTTP compression; minify JavaScript) * *YUI Compressor *.NET YUI Compressor My best advice is to check out the YUI content. They have some great articles that talk about things like CSS sprites and have some nice JavaScript libraries to help reduce the number of requests the browser is making. A: I wrote a blog post about improving ASP.NET page performance a couple of months back. Here are some quick & easy ways - * *Turn off view state *Turn off event validation *Implement HTTP gzip/deflate compression to reduce the response size (number of bytes the server has to send back to the client) *Try to optimize/minimize your database calls for each request A: I think you really need to be able to get some actual PerfMon data/telemetry from the app whilst running in production to be able to make an enlightened decision about what to optimise. As a throwaway tip, I'd make sure your app is deployed as a Release build and set debug="false" in the 'compilation' section of your web.config. A: You seem to be starting by assuming that your problem is download size - that may not necessarily be the case. You should do some experimentation with your ASP.NET site to determine if there are areas in your code which are causing undue delays. If it turns out that download size is not your problem, you'll need to find ways to cache your results (look into output caching, which is an ASP.NET feature) or optimize your code. In any case - the first step when looking at a performance issue is always to verify your assumptions first, then decide on a course of action. A: Have you tried these tips? http://weblogs.asp.net/haroonwaheed/archive/2008/06/30/ASP.NET-Performance-Tips.aspx A: You could start looking at caching strategies. 
Static files like CSS (even compressed ones) and images (even optimized ones) should only need to be downloaded once by the browser for a period of time. Script combining for AJAX has already been mentioned, but I didn't notice a reference to the ScriptReferenceProfiler MS has released on CodePlex to help figure out what to combine. Mike Ormond has a good starting point on this. Another tip, if you're doing a lot of INSERTs to your database, is to double-check that your server's disk caching is switched on. Case in point: I had a data importer doing 1.2 million inserts during a run. It took 4 hours and change without caching on. It took 16 minutes with it on. A: A general thing when using ASP.NET and Ajax (any Ajax library) together is to avoid elephanting your Page_Load and Page_Init (and their method counterparts), since these will execute on every Ajax request. That said, I would seriously ditch ASP.NET AJAX and use anything else... Anthem.NET, AjaxPRO.NET, jQuery or anything other than ASP.NET AJAX... Of course I would use Ra-Ajax myself since that's my project. But then again I am biased... A: You could turn on HTTP compression, based on your client supporting it. A: Static resources shouldn't be resent unless changed. IIS will send a response code which tells the browser to use the cached version. A: You could also look at ASP.NET output caching, which can be applied fairly granularly to different portions of your page: http://msdn.microsoft.com/en-us/library/xsbfdd8c(VS.71).aspx
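For example, a minimal sketch of the page-level form (the duration value here is only illustrative): adding <%@ OutputCache Duration="3600" VaryByParam="None" %> to the top of a page caches its rendered output for an hour. For the static resources mentioned in the question, anything served through an ASP.NET handler can set far-future client cache headers in code, along the lines of Response.Cache.SetExpires(DateTime.Now.AddDays(7)); Response.Cache.SetCacheability(HttpCacheability.Public); - for plain files served directly by IIS, the equivalent is configured through the IIS content-expiration settings rather than in code.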
{ "language": "en", "url": "https://stackoverflow.com/questions/11500", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: Is it possible to be ambikeyboardrous? I switched to the dvorak keyboard layout about a year ago. I now use dvorak full-time at work and at home. Recently, I went on vacation to Peru and found myself in quite a conundrum. Internet cafes were qwerty-only (and Spanish qwerty, at that). I was stuck with a hunt-and-peck routine that grew old fairly quickly. That said, is it possible to be "fluent" in both qwerty and dvorak at the same time? If not, are there any good solutions to the situation I found myself in? A: Very possible. Although I became sadly mono-keyboarded after learning Dvorak, my wife is equally speedy at both. She recommends learning the other layout slowly, with frequent breaks to reacquaint with the previous layout. DVAssist on a USB stick should make it easy to switch layouts on random computers. A: Yes, it is very possible. Just remember to use Qwerty every once in a while. I've been learning Dvorak myself for about 2 weeks, and I'm up to a 75wpm average. I use Qwerty every day for a bit, but most of the time I'm using Dvorak. My Qwerty speed is still averaging around 100wpm. I also learnt the Korean Dubeolshik layout several years ago, and I average about 100wpm on that too. Learning different keyboard layouts is much easier than learning multiple languages. And people still manage to remember their native tongue! So if you ever knew Qwerty, with a good bit of practice you should be able to get back up to speed on that fairly quickly. A: That said, is it possible to be "fluent" in both qwerty and dvorak at the same time? If not, are there any good solutions to the situation I found myself in? I switched to Dvorak a few years ago and I've been fluent in qwerty and dvorak most of the time since (only slightly slower in qwerty than I used to be; much faster than I used to be when I use dvorak instead). I found that the only part where switching back and forth was really hard was the beginning: I needed a few months until dvorak felt natural enough so that switching back and forth wouldn't confuse me (and hurt the learning curve). After a few months, switching back and forth was a little awkward, but it quickly became entirely natural after I got used to it (school computers and a few games didn't easily let me change to dvorak, so it was nice to be able to work either way). So I think if you practice this a bit, it should be fine :) Incidentally, once I was comfortable with both layouts, learning other layouts was extremely easy in comparison to learning dvorak. It took me only a few hours until I felt more comfortable typing Japanese characters (Hiragana direct input) than I did spelling them out, even though I had to learn it by hitting all keys and seeing what produced which character. It felt a bit like it does with languages -- once you know two well, learning other similar languages is a lot easier than it was for the second :) A: FWIW, I did finally find a solution to the situation. I had my travel buddy (who is still stuck in the qwerty stone age) type while I dictated. That was a 10x speed improvement over my hunting-and-pecking. And much easier, too. A: I've never used a public computer, but carry a keyboard, and/or (if you are good enough) just change the settings on the machine. There's a special place in hell for people that change keyboard mappings on public computers. A: I'm ambidkeyboardwhatever, but in two different languages, so that helps with the muscle memory that someone mentioned. I use a Qwerty in English and Azerty in French. 
My colleagues curse every time they try to use my computer! I looked briefly at learning Dvorak, but that would only be able to replace Qwerty, because it doesn't have the accented characters. Having said all that, whatever keyboard layout you choose, the most important thing is to learn to touch-type! A: Myself, I type 40wpm Dvorak versus 80wpm QWERTY, which roughly correlates with how often I use these layouts. It takes me about a minute of typing to fully make the switchover. My sister has managed to train herself to type QWERTY on full-size keyboards but Dvorak on the miniature keyboard on her ASUS Eee, and has no trouble switching between two keyboards at will. She does have major problems when trying to use Dvorak on a full-size keyboard or QWERTY on the Eee, so I guess it's something related to muscle memory. So, with some qualifications, I'd say that yes, it is completely possible to be "ambikeyboardrous". A: Yes, it's completely possible to be fluent in both Dvorak and Qwerty, but you have to specifically work at it to develop the dual fluency. When I began to learn Dvorak it initially crippled me in Qwerty, so I wasn't able to type easily in either layout. But I was dealing with carpal tunnel and unable to do much typing anyway, so learning Dvorak couldn't make me any slower. After several months of switching between Dvorak and Qwerty increasingly often, the switch got easier and easier every time. Now I can switch instantly. It's like there's a keyboard layout mode switch in my subconscious. I can't see it, but I can tell my brain what layout I want to use, and my fingers do the rest. But if you're interested in learning Dvorak, consider whether it's worth the effort: * *If you're interested in Dvorak to improve your typing speed, my advice is not to get your hopes up. It takes a long time to learn and I'm not convinced it improves typing speed. And it makes basic key shortcuts like those for cut/copy/paste a lot more annoying. *If you're interested in Dvorak because it seems like a cool ability or would look good on a CV, don't bother. Learning a foreign language is far more interesting. *If you're interested in Dvorak to reduce hand pain, give it a go. I'm not sure if it reduced my hand pain or not, but I can believe that it would, because it definitely reduces the distance one's fingers have to travel. A: I've been typing Dvorak for about 10 years now. I find that I can switch back to qwerty pretty fluently after a few minutes. I have to look at the keyboard at first, but it comes back. What's funny is that I can only switch back to Qwerty if I'm using a computer or keyboard that's not mine. If I switch the mode to Qwerty on my own laptop, I just struggle. It has to be in Dvorak. =) A: Web For your situation of being at a public computer where you cannot switch the keyboard layout, you can go to this website: http://www.dvzine.org/type/DVconverter.html Use this to translate your typing and then copy and paste. I found this very useful when I was out of the country and had to write a bunch of emails at public computers. USB Drive Put this Dvorak Utility on your USB drive. Run this app and it will put an icon in the system tray on Windows. This icon will switch the computer between the two keyboard layouts, and it works. (If you have tried switching back and forth from dvorak to qwerty you will know what I mean. Windows does the worst job of this one bit of functionality.) A: Yes. I type Dvorak on a Kinesis Advantage keyboard on my desktops, but type qwerty on my Macbook. 
Possibly it helps that they are so different; my muscles figure out what it is that they are typing on. A: I would say no. I have used both, and they are different for a very good reason (warning, history lesson): the Dvorak keyboard is optimal, while the qwerty layout was designed so that the pegs on a typewriter would not collide (so letters that often come next to each other are split up). Because these are so different, it's really not possible to be really good on both. You will find that even if you look at the keyboard while typing, you eventually develop muscle memory that allows you to know where the keys are. This will get ALL messed up if you start moving where the keys are. A: @Thomas Owens - the person in that cafe after you is going to be proper befuddled :-D I guess to be good on both you'd have to alternate all the time. I have enough trouble switching between my laptop and desktop keyboards :-) A: I'm using a homemade version of QWERTY (with all French letters mapped) at home and at work. I am personally stuck when I have to use the usual layout here (AZERTY). I feel your pain. From what I have witnessed, everyone gets used to a single mapping, and trying to use another layout is quite hard.
{ "language": "en", "url": "https://stackoverflow.com/questions/11514", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Variable Bindings in WPF I’m creating a UserControl for a rich TreeView (one that has context menus for renaming nodes, adding child nodes, etc.). I want to be able to use this control to manage or navigate any hierarchical data structures I will create. I currently have it working for any data structure that implements the following interface (the interface need not actually be implemented, however, only the presence of these members is required): interface ITreeItem { string Header { get; set; } IEnumerable Children { get; } } Then in my UserControl, I use templates to bind my tree to the data structure, like so: <TextBlock x:Name="HeaderTextBlock" Text="{Binding Path=Header}" /> What I would like to do is define the name of each of these members in my RichTreeView, allowing it to adapt to a range of different data structures, like so: class MyItem { string Name { get; set; } ObservableCollection<MyItem> Items; } <uc:RichTreeView ItemSource={Binding Source={StaticResource MyItemsProvider}} HeaderProperty="Name" ChildrenProperty="Items" /> Is there any way to expose the Path of a binding inside a UserControl as a public property of that UserControl? Is there some other way to go about solving this problem? A: Perhaps this might help: Create a new Binding when you set the HeaderProperty property on the Header dependency property: Header property is your normal everyday DependencyProperty: public string Header { get { return (string)GetValue(HeaderProperty); } set { SetValue(HeaderProperty, value); } } public static readonly DependencyProperty HeaderProperty = DependencyProperty.Register("Header", typeof(string), typeof(ownerclass)); and the property of your HeaderProperty works as follows: public static readonly DependencyProperty HeaderPropertyProperty = DependencyProperty.Register("HeaderProperty", typeof(string), typeof(ownerclass), new PropertyMetadata(OnHeaderPropertyChanged)); public string HeaderProperty { get { return (string)GetValue(HeaderPropertyProperty); } set { SetValue(HeaderPropertyProperty, value); } } public static void OnHeaderPropertyChanged(DependencyObject obj, DependencyPropertyChangedEventArgs args) { if (args.NewValue != null) { ownerclass c = (ownerclass) obj; Binding b = new Binding(); b.Path = new PropertyPath(args.NewValue.ToString()); c.SetBinding(ownerclass.HeaderProperty, b); } } HeaderProperty is your normal everyday DependencyProperty, with a method that is invoked as soon as the HeaderProperty changes. So when it changes , it creates a binding on the Header which will bind to the path you set in the HeaderProperty. :)
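In case it's useful, here is a rough sketch of how the same idea could extend to the children collection, by rebuilding the tree's item template in code whenever the path properties change. The member names used here (treeView, HeaderProperty, ChildrenProperty) are assumptions about the UserControl's internals, not code from the original control:
// Assumed to run inside the UserControl whenever HeaderProperty or ChildrenProperty changes
var template = new HierarchicalDataTemplate();
// Bind each item's children collection using the configured path string
template.ItemsSource = new Binding(ChildrenProperty);
// Build a simple visual tree: a TextBlock bound to the configured header path
var headerText = new FrameworkElementFactory(typeof(TextBlock));
headerText.SetBinding(TextBlock.TextProperty, new Binding(HeaderProperty));
template.VisualTree = headerText;
treeView.ItemTemplate = template;
The trade-off versus the dependency-property-changed approach above is that this regenerates the whole template rather than rebinding a single property, but it keeps both the header and the children paths configurable from XAML.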
{ "language": "en", "url": "https://stackoverflow.com/questions/11516", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }