Dataset columns:
text: string, lengths 20 to 1.01M
url: string, lengths 14 to 1.25k
dump: string, lengths 9 to 15
lang: string, 4 classes
source: string, 4 classes
Predawn theme for Sublime or Atom A dark interface and syntax theme for Sublime Text and Atom. The repo also includes new dock icons for Sublime. This appeared first on Laravel News. The Artisan Files: Yitzchok Willroth This week I’m happy to introduce you to the most interesting developer in the world, Yitzchok Willroth aka @cod… Laravel 5.0 – Directory structure and namespace Matt Stauffer has a new post outlining the new directory structure in Laravel 5.0 and why he thinks it's better.
https://laravel-news.com/predawn-theme-sublime-atom/
CC-MAIN-2018-09
en
refinedweb
Greetings! I'm getting my head around user auth in React by building a simple app with Auth0 features. So far so good. The problem starts when I try to hit the /userinfo endpoint. I'm using Superagent to make my API request.

import auth0 from 'auth0-js';
import request from 'superagent';
import { auth0Globals } from '../config.js';
import { getAccessToken } from './AuthService';

function userProfile(auth) {
  request
    .get( 'https://' + auth.CLIENT_DOMAIN + '/userinfo' )
    .set( 'authorization', 'Bearer ' + getAccessToken() )
    .end( function(err, res) {
      if ( err ) { console.log(err); }
    });
}

const user = userProfile(auth0Globals);

The server responds with a 401 error, citing Invalid Credentials as the culprit. I'm getting this in the browser as well as in Postman with Auth0's pre-built GET request for /userinfo. This leads me to believe that my client is probably set up incorrectly, but I'm also finding that I can't access user info for the example client that comes with new Auth0 accounts. Any ideas for what I might be doing wrong? I can confirm that auth.CLIENT_DOMAIN and the getAccessToken() function are returning data and that the data is correct.

Answer by pelaez89 · Sep 26, 2017 at 10:35 PM
I could replicate this problem while I had a rule that modified the scope claim of the access token of the OpenID token. When I disabled the rule, everything started working again. This was not happening yesterday, and I don't see any other feasible explanation besides a change in Auth0's code, since nothing changed in my rule code or Lock config. Maybe you were doing something similar in a rule?

Answer by jmangelo · Aug 31, 2017 at 09:11 AM
I could not reproduce this situation (the 401) unless I provided an incorrect access token. Is the access token you're using an opaque access token (around 16 characters) or a JWT access token? If it's a JWT access token, consider including the header and payload components in your question (you can redact the values of any claims you deem sensitive). In addition, it may be useful to update the question with the exact contents of the WWW-Authenticate response header you get along with the 401.
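For reference, one way to capture that WWW-Authenticate header with Superagent is to log it from the .end() callback. This is only a sketch, and it assumes Superagent still hands you the response object alongside the error on a 4xx status:

function userProfile(auth) {
  request
    .get('https://' + auth.CLIENT_DOMAIN + '/userinfo')
    .set('authorization', 'Bearer ' + getAccessToken())
    .end(function(err, res) {
      if (err && res) {
        // the 401 details the server sent back, useful when asking for help
        console.log('status:', res.status);
        console.log('www-authenticate:', res.headers['www-authenticate']);
        console.log('body:', res.body);
      }
    });
}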
https://community.auth0.com/questions/7839/invalid-credentials-when-hitting-the-userinfo-endp
CC-MAIN-2018-09
en
refinedweb
How do I print a debug message to the log file? I want to print a debug message in a method. I override the project.task.work.create method and I don't know whether the program uses my method or the original. In Java I would print a simple message to see it. How do I do the same with the OpenERP API? I use OpenERP 7 on Debian Linux.

To display a debug log, you can use the standard Python logging module. Basically, you need to get a logger, and you can then use it in your methods to output log messages. Usually, in OpenERP, a _logger is obtained at the top of the Python module, its name being the name of the Python module.

import logging
from openerp.osv import orm

_logger = logging.getLogger(__name__)

class project_task_work(orm.Model):
    _inherit = 'project.task.work'

    def create(self, cr, uid, vals, context=None):
        _logger.debug('Create a %s with vals %s', self._name, vals)
        return super(project_task_work, self).create(cr, uid, vals, context=context)

Note that you will need to start your server using the --debug option to show debug logs. Other options for logging at a higher level (which don't require the --debug option) are:

_logger.info('FYI: This is happening')
_logger.warning("WARNING: I don't think you want this to happen!")
_logger.error('ERROR: Something really bad happened!')

It doesn't work (I see nothing from my module in the log file, but it has a lot of other debug info in it). I don't know how to debug this personal add-on. I just want to override this method. Should I create a new question for this problem?

If you don't see your logs, that's probably because your module is not loaded correctly, or the module's dependencies are wrong. You can create a question focused on the loading of modules with details on your module (people will have to find out why it is not loaded, so give them useful information).

Thanks a lot. If it didn't work, that's because I hadn't restarted OpenERP. I just restarted and it's OK. Thanks again.

Hi, when I am giving _logging.info after _columns in the .py file then it is printing to the log file, but when I am calling the method then it didn't work. Please help me out.
https://www.odoo.com/forum/help-1/question/how-print-a-debug-message-on-log-file-1037
CC-MAIN-2018-09
en
refinedweb
Using multiple header and .cpp files
#1 Members - Reputation: 158 Posted 17 December 2012 - 03:54 AM
I've done a couple of projects and so far I have just put everything in one file because I keep having trouble. This time, though, I'd like to really figure this out, so I'm hoping someone here can explain it to me. I've researched it online but it's not working out and I just don't understand. The files I have are: pong.cpp, pong.h, paddle.cpp, and paddle.h. I'm using MS Visual Studio C++ 2010 Express. I've tried several ways of using #ifndef, #define, #endif and no combination has worked; I just keep getting an error "...already defined in Paddle.obj". I think I get the basic idea, that if the header file has been defined already, ignore it and skip to the bottom (#endif), otherwise define it, but again no way I have done it is working. I'd love to finally learn this and get a good project created in the next couple of days. Thank you!
#2 Members - Reputation: 307 Posted 17 December 2012 - 03:59 AM
/* header.h */
#ifndef _HEADER_H_INCLUDED_
#define _HEADER_H_INCLUDED_
/* code goes here */
#endif
#3 Crossbones+ - Reputation: 3863 Posted 17 December 2012 - 04:04 AM
#pragma once
If that doesn't fix your problem, paste the complete error/errors you're getting. Edited by Mussi, 17 December 2012 - 04:08 AM.
#4 Moderators - Reputation: 9974 Posted 17 December 2012 - 04:14 AM
#5 Members - Reputation: 508 Posted 17 December 2012 - 04:27 AM
The #pragma once and #ifndef/#define/#endif guards only make sure that a file is included at most once per .cpp file, but if you have definitions of functions or variables at global scope in your headers you will still get multiple-definition errors. Edited by PwFClockWise, 17 December 2012 - 04:40 AM.
#6 Crossbones+ - Reputation: 5551 Posted 17 December 2012 - 04:57 AM
Then you can easily get multiple symbols at link time, because the symbol will be defined once for each .cpp file that includes it. Edited by Olof Hedman, 17 December 2012 - 04:57 AM.
#7 Members - Reputation: 376 Posted 17 December 2012 - 09:23 AM Edited by landagen, 17 December 2012 - 09:24 AM.
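To make the advice in this thread concrete, here is a minimal sketch of how the Paddle code might be split so the header holds only declarations and the definitions live in exactly one .cpp file. The members shown are made up for illustration, not the original poster's code:

/* paddle.h */
#ifndef PADDLE_H_INCLUDED
#define PADDLE_H_INCLUDED

// Only declarations (and the class definition itself) belong in the header.
class Paddle {
public:
    Paddle(int x, int y);
    void moveUp();
    void moveDown();
private:
    int x_;
    int y_;
};

// Declare, but do not define, globals in a header:
extern int g_paddleSpeed;

#endif // PADDLE_H_INCLUDED

/* paddle.cpp */
#include "paddle.h"

// Definitions go in exactly one .cpp file, so the linker sees each symbol once.
int g_paddleSpeed = 5;

Paddle::Paddle(int x, int y) : x_(x), y_(y) {}
void Paddle::moveUp()   { y_ -= g_paddleSpeed; }
void Paddle::moveDown() { y_ += g_paddleSpeed; }

The include guard (or #pragma once) only prevents the header from being processed twice within one .cpp file; keeping function and variable definitions out of the header is what avoids the "...already defined in Paddle.obj" linker error when both pong.cpp and paddle.cpp include it.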
http://www.gamedev.net/topic/635931-using-multiple-header-and-cpp-files/?forceDownload=1&_k=880ea6a14ea49e853634fbdc5015a024
CC-MAIN-2016-36
en
refinedweb
In this geeky life, everyone has heard of that notorious thing known as the random number. Initially, when I had to find a random number, I used to google some cool tool or website that could give me a really cool random number. But after a while, I came to know about /dev/random and /dev/urandom. Basically, these are device files on Linux systems that provide the user with random numbers. /dev/random and /dev/urandom are two read-only files on a Linux system which, when read, give random numbers to the user. /dev/random generates higher-quality random numbers compared to /dev/urandom. The following sample code illustrates how to use these device files to generate a random number.

#include <stdio.h>
#include <fcntl.h>   /* open */
#include <unistd.h>  /* read, close */

int main(void)
{
    int fp;
    long randNo;

    fp = open("/dev/random", O_RDONLY);  /* open() takes a flag, not a stdio mode string */
    read(fp, &randNo, sizeof(randNo));
    printf("%ld\n", randNo);
    close(fp);
    return 0;
}

Basically, the /dev/random file is an interface for the user to access the kernel's random number generator. The system internally collects environmental and device-driver noise in the form of bits and gathers it in the entropy pool. This way the system generates high-quality (true) random numbers which can be used for various purposes. Also, since a random number from /dev/random is generated from the entropy pool, a read from /dev/random will block until sufficient noise is available to generate the random number. Opposed to that, /dev/urandom generates the random number with whatever amount of noise is available in the pool. Random numbers generated this way may or may not be truly random and may be vulnerable to cryptographic attack. More information on configuration and usage can be found at linux.die.net or with the command man random.
https://secalert.wordpress.com/
CC-MAIN-2016-36
en
refinedweb
Upcasting and Downcasting - 2016 Upcasting is converting a derived-class reference or pointer to a base-class reference or pointer. In other words, upcasting allows us to treat a derived type as though it were its base type. It is always allowed for public inheritance, without an explicit type cast. This is a result of the is-a relationship between the base and derived classes. Here is the code dealing with shapes. We created a Shape class, and derived Circle, Square, and Triangle classes from the Shape class. Then, we made a member function that talks to the base class:

void play(Shape& s)
{
    s.draw();
    s.move();
    s.shrink();
    ....
}

The function speaks to any Shape, so it is independent of the specific type of object that it's drawing, moving, and shrinking. If in some other part of the program we use the play() function like below:

Circle c;
Triangle t;
Square sq;
play(c);
play(t);
play(sq);

each call implicitly upcasts its argument and treats it as though it were its base type. That's how we decouple ourselves from knowing about the exact type we are dealing with. Note that it doesn't say "If you're a Triangle, do this, if you're a Circle, do that, and so on." If we write that kind of code, which checks for all the possible types of a Shape, it will soon become messy code, and we need to change it every time we add a new kind of Shape. Here, however, we just say "You're a Shape, I know you can move(), draw(), and shrink() yourself; do it, and take care of the details correctly." The compiler and runtime linker handle the details. If a member function is virtual, then when we send a message to an object, the object will do the right thing, even when upcasting is involved. Note that the most important aspect of inheritance is not that it provides member functions for the new class, however. It's the relationship expressed between the new class and the base class. This relationship can be summarized by saying, "The new class is a type of the existing class."

class Parent {
public:
    void sleep() {}
};

class Child: public Parent {
public:
    void gotoSchool(){}
};

int main()
{
    Parent parent;
    Child child;

    // upcast - implicit type cast allowed
    Parent *pParent = &child;

    // downcast - explicit type cast required
    Child *pChild = (Child *) &parent;

    pParent->sleep();
    pChild->gotoSchool();

    return 0;
}

A Child object is a Parent object in that it inherits all the data members and member functions of a Parent object. So, anything that we can do to a Parent object, we can do to a Child object. Therefore, a function designed to handle a Parent pointer (reference) can perform the same acts on a Child object without any problems. The same idea applies if we pass a pointer to an object as a function argument. The opposite process, converting a base-class pointer (reference) to a derived-class pointer (reference), is called downcasting. Downcasting is not allowed without an explicit type cast. The reason for this restriction is that the is-a relationship is not, in most cases, symmetric. A derived class could add new data members, and the class member functions that used these data members wouldn't apply to the base class. As in the example, we derived the Child class from the Parent class, adding a member function, gotoSchool(). It wouldn't make sense to apply the gotoSchool() method to a Parent object.
However, if implicit downcasting were allowed, we could accidentally assign the address of a Parent object to a pointer-to-Child:

Child *pChild = &parent;  // actually this won't compile
// error: cannot convert from 'Parent *' to 'Child *'

and use the pointer to invoke the gotoSchool() method as in the following line:

pChild->gotoSchool();

Because a Parent isn't a Child (a Parent need not have a gotoSchool() method), the downcasting in the above line can lead to an unsafe operation. C++ provides a special explicit cast called dynamic_cast that performs this conversion safely. Downcasting runs against the basic object-oriented rule, which states that objects of a derived class can always be assigned to variables of a base class.

One more thing about upcasting: because implicit upcasting makes it possible for a base-class pointer (reference) to refer to a base-class object or a derived-class object, there is a need for dynamic binding. That's why we have virtual member functions.
- Pointer (Reference) type: known at compile time.
- Object type: not known until run time.

The dynamic_cast operator answers the question of whether we can safely assign the address of an object to a pointer of a particular type. Here is a similar example to the previous one.

#include <string>

class Parent {
public:
    void sleep() { }
};

class Child: public Parent {
private:
    std::string classes[10];
public:
    void gotoSchool(){}
};

int main()
{
    Parent *pParent = new Parent;
    Parent *pChild = new Child;

    Child *p1 = (Child *) pParent;  // #1
    Parent *p2 = (Child *) pChild;  // #2

    return 0;
}

Let's look at the lines where we do the type casts.

Child *p1 = (Child *) pParent;  // #1
Parent *p2 = (Child *) pChild;  // #2

Which of the type casts is safe? The only casts guaranteed to be safe are the ones in which the pointer is the same type as the object, or else a base type for the object. Type cast #1 is not safe because it assigns the address of a base-class object (Parent) to a derived-class (Child) pointer. So, the code would expect the base-class object to have derived-class properties such as the gotoSchool() method, and that is false. Also, a Child object, for example, has a member, classes, that a Parent object is lacking. Type cast #2, however, is safe because it assigns the address of a derived-class object to a base-class pointer. In other words, public derivation promises that a Child object is also a Parent object. The question of whether a type conversion is safe is more useful than the question of what kind of object is pointed to. The usual reason for wanting to know the type is so that we can know if it's safe to invoke a particular method. Here is the syntax of dynamic_cast:

Child *p = dynamic_cast<Child *>(pParent);

This code is asking whether the pointer pParent can be type cast safely to the type Child *.
- It returns the address of the object, if it can.
- It returns 0, otherwise.

How do we use the dynamic_cast?

void f(Parent* p)
{
    Child *ptr = dynamic_cast<Child*>(p);
    if(ptr) {
        // we can safely use ptr
    }
}

In this code, if p points to an object of type Child, or to an object of a type derived directly or indirectly from Child, the dynamic_cast converts the pointer p to a pointer of type Child. Otherwise, the expression evaluates to 0, the null pointer. In other words, we want to check whether we can use the passed-in pointer p before we perform some derived-class operation, even though it's a pointer to the base class.
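Two details the discussion above takes for granted are worth spelling out: dynamic_cast only works when the base class is polymorphic (it needs at least one virtual function, typically a virtual destructor, which the toy Parent class above lacks), and when it is used with references instead of pointers there is no null value to return, so a failed cast throws std::bad_cast. A small self-contained sketch, not from the original page:

#include <iostream>
#include <typeinfo>   // std::bad_cast

class Parent {
public:
    virtual ~Parent() {}   // makes Parent polymorphic, required for dynamic_cast
    void sleep() {}
};

class Child : public Parent {
public:
    void gotoSchool() {}
};

void f(Parent& r)
{
    try {
        Child& c = dynamic_cast<Child&>(r);   // throws std::bad_cast on failure
        c.gotoSchool();
    } catch (const std::bad_cast& e) {
        std::cout << "not a Child: " << e.what() << "\n";
    }
}

int main()
{
    Parent p;
    Child c;
    f(c);   // the cast succeeds
    f(p);   // the cast fails and the exception is caught
    return 0;
}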
"The need for dynamic_cast generally arises because we want perform derived class operation on a derived class object, but we have only a pointer-or reference-to-base." -Scott Meyers Ph.D. / Golden Gate Ave, San Francisco / Seoul National Univ / Carnegie Mellon / UC Berkeley / DevOps / Deep Learning / Visualization
http://www.bogotobogo.com/cplusplus/upcasting_downcasting.php
CC-MAIN-2016-36
en
refinedweb
Reading SCORM variablesmerobertsjr Oct 20, 2009 2:36 PM I was curious if anyone has any insight on how to read SCORM variables into captivate once a project is published on an LMS. Here is my problem. I would like to use captivate to pull the current user data during runtime from the cmi.learner_name variable, and store the data in a captivate variable, so I can then display the users name (or otherwise use the data). Does anyone have experience implimenting this? I was considering using javascript to assign the value to a captivate variable, but I am a bit lost as to how to make javascript pass the variable to captivate. As well as when to have this happen, since I am sure there are some initialization issues. I would also like to be able to use captivate to make a simple debugger project to use when working with a new LMS using this same principle. Any suggestions? 1. Re: Reading SCORM variables Oct 21, 2009 12:36 AM (in response to merobertsjr) I have done this through a combination of JavaScript and Flash. You need JavaScript to get the value from the LMS and then I used a Flash file embedded in Captivate to show / process the value. Once you have the value in Flash you can assign the value to a Captivate User defined variable. The "issue" with these SCORM things is the difference between SCORM 1.2 and SCORM 2004. For some reason a lot of major LMS' are still using the old SCORM 1.2 standard while some are using the "new" SCORM 2004 standard. There are some fundamental changes in these two SCORM versions, so basically you would need to make a SCORM 1.2 file and a SCORM 2004 file. If you don't know Flash then you could probably (just guessing here) assign the value from your SCORM variable to a Captivate user defined variable through JavaScript, but I have no idea how to do that. /Michael Visit my Captivate blog with tips & tricks, tutorials and Widgets. 2. Re: Reading SCORM variablesmerobertsjr Oct 21, 2009 8:42 AM (in response to merobertsjr) Thanks, I was pretty sure I would have to use a different script for 1.2 vs 2004. Thanks for confirming. I have also noticed that many LMS deal better with 1.2. The current one we are using is setup for 1.2 for the most part, but the LMS has a web based aggrigator that only uses 2004 content??? Strange. I was hoping that the javascript to captivate would be possible, since I am not great with Flash. But, I guess I just need to get in there and figure it out. I have Flash CS4 and I know some basics, but it is not my forte. Would this need to be a full blown widget? 3. Re: Reading SCORM variables Oct 21, 2009 10:33 AM (in response to merobertsjr)1 person found this helpful The main difference between the SCORM calls in 1.2 and 2004 is that the SCORM fields have different names. I can't remember if that applies for learner_name though. To answer your question - No your wouldn't need a full blown widget to do this. That would be total overkill as the code in Flash is maximum 10 lines. I'll see if I can dig up some code in Flash to get you started once I get in the office tomorrow. /Michael Visit my Captivate blog with tips & tricks, tutorials and Widgets. 4. Re: Reading SCORM variables Oct 22, 2009 12:01 AM (in response to)1 person found this helpful Alright here is some code.. You probably need to adapt if for your own needs, but this will get you started. 
In JavaScript create a function like this: function returnName(str) { return doGetValue('cmi.learner_name'); } In Flash create code like this: // stop the Flash timeline this.stop(); // import external interface to handle JavaScript communication import flash.external.*; // The name of the JavaScript function to call var callJS:String = "returnName"; //parameter to send to the JavaScript function. This is not needed now var msg:String = "test"; // Execute the function call and store the result in the variable returnValue var returnValue:String = ExternalInterface.call(callJS, msg).toString(); // Put the value into a textfield on the Flash stage this.return_txt.text = returnValue; Also create a dynamic text field on the stage with the instance name "return_txt" That should more or less be it. Let me know if it works out for you. /Michael Visit my Captivate blog with tips & tricks, tutorials and Widgets.) Yes if you want to use the data in Captivate then you need to "transfer" it from Flash to Captivate. Create one or more user defined variables in Captivate. From Flash send the values to these variables like _root.v_myCaptivateVariable = FlashVariableYouWantToTransfer; /Michael Visit my Captivate blog with tips & tricks, tutorials and Widgets. 11. Re: Reading SCORM variablesmerobertsjr Sep 2, 2010 10:47 AM (in response to) I am still having trouble with the first part, but once I can get the value to display in flash, then I will work on "transfering" it to captivate. I have decided to test in SCORM1.2, because that is what works best on our LMS... So, I have modified the standard.js file in my captivate templates folder so that the javascript below is included in all published projects: function returnName(str) { return doGetValue('cmi.core.student_name'); } I will later create a function that works with SCORM 2004 and just name that function something like returnNameST Then I created the flash object and embeded it in my captivate project. No changes to the code you suggested. The problem is that nothing displays. I tested the animation by supplying some text to it (then inserting into captivate), and that displays, but it does not work with the javascript function. I get nothing displayed at all??? Kind of fustrating because I have to publish all the way to the LMS to test, and then see no results... I'm not sure if it is just not executing the javascript call, or if it is not pulling any data from the LMS, or if there is something else going on... Shouldn't it be giving me at least the appended "test" message as the string value if nothing is actually returned from the javascript function?) Thank you all very much for the information. FYI to get this to work in Captivate 5 and SCORM 1.2, you'll want to modify the javascript returnName function to the following: function returnName(str) { return g_objAPI.LMSGetValue("cmi.core.student_name"); //returns a string } 15. Re: Reading SCORM variablesmarkdins Dec 9, 2010 1:53 PM (in response to merobertsjr) Hello Merobertsjr! I was looking at this thread in the Adobe Forums and would like to look at your tutorial, unfortunately the link seems to have expired. Is it possible to see this anywhere else? I would like to take a look. Thank you! Mark) Thanks for putting up the tutorial, it was very helpful. Unfortunately I am still having problems getting the variables out of my LMS (moodle). When I play the Captivate 5 slide, all I am seeing is the variable name v_myUserName appearing in the slide, rather than the desired student name. 
Any ideas what I might be doing wrong? Many thanks J 19. Re: Reading SCORM variablessporschi82 May 5, 2011 10:04 PM (in response to merobertsjr) hi. i have the same problem as jumper. i followed your tutorial but for some reason all that is displayed on the slide is the variable name. Reading the information from the file works, I've tested that so the problem is with the flash getting the information from the LMS/Java Script. Any ideas? I added this function to the SCROM_support java script file in the Captivate Publish templates for SCORM. We are using SCORM 1.2: function returName(str) { return (g_objAPI.cmi.core.student_name); } Does this need to be placed in a specific position in the file? And I used the action script code you provided in your tutorial: import flash.external.*; this.stop(); var returnValue:String = ExternalInterface.call("returnName",returnValue); var myRoot:MovieClip = MovieClip(root); var mainmov:MovieClip = MovieClip(myRoot.parent.root); mainmov.v_myUserName = returnValue; Do I need to change the * ater import flash.external.* to something else? Any ideas? Thanks 20. Re: Reading SCORM variablesmerobertsjr May 5, 2011 10:27 PM (in response to sporschi82) Hello, The function does not need to be in a specific place per say, but you have a typo on the javascript function. The name of the function should be "returnName(str)" not "returName(str)". Missing an "n". Give that a try and let me know if that works. Michael 21. Re: Reading SCORM variablessporschi82 May 6, 2011 1:37 AM (in response to merobertsjr) Thanks for the quick reply. I only made the typo here in the forum. It's actually working now, but I had to use this function: function returnName(str) { return g_objAPI.LMSGetValue("cmi.core.student_name"); //returns a string } Now I'm wondering how I can get the last and first name seperated because the LMS gives it back as a string (lastname, first name). Sorry. I'm not very familiar with flash and java script. 22. Re: Reading SCORM variablesjohnswift May 26, 2011 3:36 PM (in response to merobertsjr) You can actually take it a step further, and not use any JavaScript at all... This script will get you where you need to go.... import flash.external.ExternalInterface; try{ //Assigns the LMS value to a string var UserName:String = String(ExternalInterface.call('g_objAPI.LMSGetValue','cmi.core.student_name')); //Assigns the string to a textfeild nameTxt.text = UserName; }catch(err:Error){ trace("Not in an LMS Currently"); } The nice thing about the above script is that you don't need to remember to add the JavaScript function anywhere which is nice if you're making widgets that you plan to distribute (ie people wont know to add the JavaScript snippit.) I usually wrap SCORM related content in a 'try catch' so that if flash chokes on the LMS API call, it wont kill the program. I hope this helps ~Jsswift 23. Re: Reading SCORM variablesjohnswift May 26, 2011 3:43 PM (in response to sporschi82) To get the first and last name separated. In Flash... import flash.external.ExternalInterface; try{ //Assigns the LMS value"); } ~Jsswift 24. Re: Reading SCORM variablesFrubisher Bold Sep 27, 2011 8:08 AM (in response to johnswift) Jsswift (or anyone else!), For a complete Flash novice, would you be able to expand on how to get your solution working. 
I've tried creating a flash project using your code above and then inserting it as an animation within a captivate project, adding a text caption displaying user variables student_name, firstname and lastname but they all come out blank once published. I'm working with Captivate 5.5, Flash CS4 and Moodle 2.0. Alternatively, if anyone knows an easier way to display the student name (taken from Moodle) to appear on the results page of the course, please advise. Alistair 25. Re: Reading SCORM variablesosbornsm Sep 11, 2012 9:01 AM (in response to Frubisher Bold) Hello all, I am attempting to solve this issue as well. Has there been a tutorial that is still active that we can look at to get this all solved? I am piecing together different bits of information but cannot get this to work. Any steps to complete this would be greatly useful. For example... for the Flash code... does it have to be within a widget? Or just added to captivate as an animation? Thank you so much, ~ Sean
https://forums.adobe.com/message/3522026
CC-MAIN-2016-36
en
refinedweb
oops. It works with VisualStudio 2003 but not 2005!! Mathew Yeates wrote: > Hi > I know I succeeded at this once before. But I just can't get it right. > I have an assembly Test.dll with a namespace Foo. Inside Foo is a method > called RunMe. > > What is the sequence of calls I need to do for me to call RunMe??? > Mathew > > _________________________________________________ > Python.NET mailing list - PythonDotNet at python.org > > >
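For readers landing on this thread, the usual Python.NET call sequence for this kind of thing looks roughly like the sketch below. It assumes RunMe lives on a class inside the Foo namespace (here hypothetically named Runner), since .NET methods always belong to a type; the original post does not say which class, so adjust the names accordingly.

import clr

clr.AddReference("Test")      # load Test.dll; it must be findable from sys.path

from Foo import Runner        # 'Runner' is a placeholder for whatever class holds RunMe
Runner.RunMe()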
https://mail.python.org/pipermail/pythondotnet/2006-March/000453.html
CC-MAIN-2016-36
en
refinedweb
Gearman::Worker - Worker for gearman distributed job system use Gearman::Worker; my $worker = Gearman::Worker->new; $worker->job_servers('127.0.0.1'); $worker->register_function($funcname => $subref); $worker->work while 1; Gearman::Worker is a worker class for the Gearman distributed job system, providing a framework for receiving and serving jobs from a Gearman server. Callers instantiate a Gearman::Worker object, register a list of functions and capabilities that they can handle, then enter an event loop, waiting for the server to send jobs. The worker can send a return value back to the server, which then gets sent back to the client that requested the job; or it can simply execute silently. Creates a new Gearman::Worker object, and returns the object. If %options is provided, initializes the new worker object with the settings in %options, which can contain: Calls job_servers (see below) to initialize the list of job servers. It will be ignored if this worker is running as a child process of a gearman server. Calls prefix (see below) to set the prefix / namespace. Initializes the worker $worker with the list of job servers in @servers. @servers should contain a list of IP addresses, with optional port numbers. For example: $worker->job_servers('127.0.0.1', '192.168.1.100:7003'); If the port number is not provided, 7003 is used as the default. Calling this method will do nothing in a worker that is running as a child process of a gearman server. Registers the function $funcname as being provided by the worker $worker, and advertises these capabilities to all of the job servers defined in this worker. $subref must be a subroutine reference that will be invoked when the worker receives a request for this function. It will be passed a Gearman::Job object representing the job that has been received by the worker. $timeout is an optional parameter specifying how long the jobserver will wait for your subroutine to give an answer. Exceeding this time will result in the jobserver reassigning the task and ignoring your result. This prevents a gimpy worker from ruining the 'user experience' in many situations. The subroutine reference can return a return value, which will be sent back to the job server. Sets the namespace / prefix for the function names. This is useful for sharing job servers between different applications or different instances of the same application (different development sandboxes for example). The namespace is currently implemented as a simple tab separated concatentation of the prefix and the function name. Returns the scalar argument that the client sent to the job server. Updates the status of the job (most likely, a long-running job) and sends it back to the job server. $numerator and $denominator should represent the percentage completion of the job. Do one job and returns (no value returned). You can pass "on_start" "on_complete" and "on_fail" callbacks in %opts. Gearman workers can be run run as child processes of a parent process which embeds Gearman::Server. When such a parent process fork/execs a worker, it sets the environment variable GEARMAN_WORKER_USE_STDIO to true before launching the worker. If this variable is set to true, then the jobservers function and option for new() are ignored and the unix socket bound to STDIN/OUT are used instead as the IO path to the gearman server. This is an example worker that receives a request to sum up a list of integers. 
use Gearman::Worker;
use Storable qw( thaw );
use List::Util qw( sum );

my $worker = Gearman::Worker->new;
$worker->job_servers('127.0.0.1');
$worker->register_function(sum => sub { sum @{ thaw($_[0]->arg) } });
$worker->work while 1;

See the Gearman::Client documentation for a sample client sending the sum job.
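For completeness, a matching client for the sum worker above would look roughly like this; it follows the Gearman::Client interface the previous sentence points to (do_task submits the job and returns a reference to the result), but treat it as a sketch rather than the exact example from that documentation.

use Gearman::Client;
use Storable qw( freeze );

my $client = Gearman::Client->new;
$client->job_servers('127.0.0.1');

# freeze the argument list the same way the worker expects to thaw it
my $result_ref = $client->do_task(sum => freeze([ 3, 5 ]));
print "The sum is: $$result_ref\n";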
http://search.cpan.org/~bradfitz/Gearman-1.09/lib/Gearman/Worker.pm
CC-MAIN-2013-48
en
refinedweb
#include <Xm/Xm.h>

Boolean XmInstallImage(XImage *image, char *image_name);

The image caching functions provide a set of eight preinstalled images. These names can be used within a .Xdefaults file for generating pixmaps for the resource for which they are provided. Returns True when successful; returns False if NULL image, NULL image_name, or duplicate image_name is used as a parameter value.

See also: XmUninstallImage(3), XmGetPixmap(3), and XmDestroyPixmap(3).
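A typical usage pattern, sketched from memory of the Motif image-caching API rather than taken from this page, is to install an XImage under a symbolic name and then ask XmGetPixmap for it by that name:

#include <Xm/Xm.h>

/* 'image' is an XImage you have already built, e.g. with XCreateImage() */
void cache_and_use(Widget w, XImage *image)
{
    Pixmap pix;

    if (!XmInstallImage(image, "my_logo"))
        return;   /* NULL image, NULL name, or the name is already installed */

    /* The cached image can now be looked up by name */
    pix = XmGetPixmap(XtScreen(w), "my_logo",
                      BlackPixelOfScreen(XtScreen(w)),
                      WhitePixelOfScreen(XtScreen(w)));

    /* ... use pix, then release it when done ... */
    XmDestroyPixmap(XtScreen(w), pix);
}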
http://www.makelinux.net/man/3/X/XmInstallImage
CC-MAIN-2013-48
en
refinedweb
>>. PSEUDOCODE int vop_lookup(struct vnode *dvp, struct vnode **vpp, struct componentname *cnp) { int error; int nameiop = cnp->cn_nameiop; int flags = cnp->cn_flags; int lockparent = flags & LOCKPARENT; int islastcn = flags & ISLASTCN; struct vnode *vp = NULL; /* * Check accessibility of directory. */ if (dvp->v_type != VDIR) return ENOTDIR; error = VOP_ACCESS(dvp, VEXEC, cred, cnp->cn_thread); if (error) return (error); if (islastcn && (dvp->v_mount->mnt_flag & MNT_RDONLY) && (cnp->cn_nameiop == DELETE || cnp->cn_nameiop == RENAME)) return (EROFS); /* * Check name cache for directory/name pair. This returns ENOENT * if the name is known not to exist, -1 if the name was found, or * zero if not. */ error = cache_lookup(dvp, vpp, cnp); if (error) { int vpid; if (error = ENOENT) return error; vp = *vpp; if (dvp == vp) { /* lookup on "." */ VREF(vp); error = 0; } else if (flags & ISDOTDOT) { /* * We need to unlock the directory before getting * the locked vnode for ".." to avoid deadlocks. */ VOP_UNLOCK(dvp); error = vget(vp, 1); if (!error) { if (lockparent && islastcn) error = VOP_LOCK(dvp); } } else { error = vget(vp, 1); if (error || !(lockparent && islastcn)) { VOP_UNLOCK(dvp); } } /* * Check that the capability number did not change * while we were waiting for the lock. */ if (!error) { if (vpid == vp->v_id) { /* * dvp is locked if lockparent && islastcn. * vp is locked. */ return (0); } vput(vp); if (dvp != vp && lockparent && islastcn) VOP_UNLOCK(pdp); } /* * Re-lock dvp for the directory search below. */ error = VOP_LOCK(dvp); if (error) { return (error); } *vpp = NULL; } /* * Search dvp for the component cnp->cn_nameptr. */ ...; if (!found) { if ((nameiop == CREATE || nameiop == RENAME) && islastcn && directory dvp has not been removed) { /* * Check for write access on directory. */ /* * Possibly record the position of a slot in the directory * large enough for the new component name. This can be * recorded in the vnode private data for dvp. * Set the SAVENAME flag to hold onto the pathname for use * later in VOP_CREATE or VOP_RENAME. */ cnp->cn_flags |= SAVENAME; if (!lockparent) /* * Note that the extra data recorded above is only * useful if lockparent is specified. */ VOP_UNLOCK(dvp); return EJUSTRETURN; } /* * Consider inserting name into cache. */ if ((cnp->cn_flags & MAKEENTRY) && nameiop != CREATE) cache_enter(dvp, NULL, cnp); return ENOENT; } else { /* * If deleting, and at end of pathname, return parameters * which can be used to remove file. If the wantparent flag * isn’t set, we return only the directory, otherwise we go on * and lock the inode, being careful with ".". */ if (nameiop == DELETE && islastcn) { /* * Check for write access on directory. */ error = VOP_ACCESS(dvp, VWRITE, cred, cnp->cn_thread); if (error) return (error); if (found entry is same as dvp) { VREF(dvp); *vpp = dvp; return 0; } error = VFS_VGET(dvp->v_mount, ..., &vp); if (error) return error; if (directory is sticky && cred->cr_uid != 0 && cred->cr_uid != owner of dvp && owner of vp != cred->cr_uid) { vput(vp); return EPERM; } *vpp = vp; if (!lockparent) VOP_UNLOCK(dvp); return 0; } /* * If rewriting (RENAME), return the inode and the * information required to rewrite the present directory * Must get inode of directory entry to verify it’s a * regular file, or empty directory. */ if (nameiop == RENAME && wantparent && islastcn) { error = VOP_ACCESS(dvp, VWRITE, cred, cnp->cn_thread); if (error) return (error); /* * Check for "." 
*/ if (found entry is same as dvp) return EISDIR; error = VFS_VGET(dvp->v_mount, ..., &vp); if (error) return error; *vpp = vp; /* * Save the name for use in VOP_RENAME later. */ cnp->cn_flags |= SAVENAME; if (!lockparent) VOP_UNLOCK(dvp); return 0; } /* * Step through the translation in the name. We do not ‘vput’ the * directory because we may need it again if a symbolic link * is relative to the current directory. Instead we save it * unlocked as "pdp". We must get the target inode before unlocking * the directory to insure that the inode will not be removed * before we get it. We prevent deadlock by always fetching * inodes from the root, moving down the directory tree. Thus * when following backward pointers ".." we must unlock the * parent directory before getting the requested directory. * There is a potential race condition here if both the current * and parent directories are removed before the VFS_VGET for the * inode associated with ".." returns. We hope that this occurs * infrequently since we cannot avoid this race condition without * implementing a sophisticated deadlock detection algorithm. * Note also that this simple deadlock detection scheme will not * work if the file system has any hard links other than ".." * that point backwards in the directory structure. */ if (flags & ISDOTDOT) { VOP_UNLOCK(dvp); /* race to get the inode */ error = VFS_VGET(dvp->v_mount, ..., &vp); if (error) { VOP_LOCK(dvp); return (error); } if (lockparent && islastcn) { error = VOP_LOCK(dvp); if (error) { vput(vp); return error; } } *vpp = vp; } else if (found entry is same as dvp) { VREF(dvp); /* we want ourself, ie "." */ *vpp = dvp; } else { error = VFS_VGET(dvp->v_mount, ..., &vp); if (error) return (error); if (!lockparent || !islastcn) VOP_UNLOCK(dvp); *vpp = vp; } /* * Insert name into cache if appropriate. */ if (cnp->cn_flags & MAKEENTRY) cache_enter(dvp, *vpp, cnp); return (0); } }.
http://manpages.ubuntu.com/manpages/hardy/man9/VOP_LOOKUP.9.html
CC-MAIN-2013-48
en
refinedweb
Tripta Learns Python Friday, April 27, 2012 Golden Rule of Debugging Monday, December 19, 2011 Expert Beginner This habit of "moving stuff around" in the code, or what some refer to as blind tinkering, is not useful when that's all you're doing. I know that playing with the code is a great way to learn, but it should be done methodically and with the intention of caring about what you're learning. That is, either while or after you've played around and got the desired result, draw reusable conclusions as to why that experimentation was useful. So I'm trying to build the practice in which I encourage myself to constantly ask questions like, "What do I know so far?", "Do I really understand how each line of code works?", "How else can I better organize what I've written?", and most importantly, "Am I just blindly tinkering?". Ask yourself these questions, think about the code, recognize what mistakes you made during the experimentation, and avoid blind tinkering. This type of concentration is key before declaring the job done and moving on to the next task. Doing this has not only provided me with a richer learning experience, but with more confidence in what I know. For me, the value of the final product now also lies in my complete understanding of the code itself. I suspect that if this mindset becomes habitual, I can fulfill the role of an expert beginner in no time. Tuesday, December 13, 2011 A Mantra (of sorts) -- Jae Kwon Thursday, October 13, 2011 Hippocampal Anatomy Game, Take 2 I refactored the code from the Hippocampal Anatomy game I made a little while ago. After incorporating lists, dictionaries, and throwing in module, I was able to cut down the code by 36 lines! Some major topics I learned from this part of the project: 1) Mutable Objects -- I'm still trying to wrap my head around the concept of mutable objects, i.e. objects whose value can change. I found it difficult to remember to apply the concept that I can modify a list, for example, after it's been created. And so it took some playing around with to really start understanding this. 2) Approximate string matching (aka fuzzy string searching) & the Levenshtein distance -- I asked a good friend if there was an easier way to accept variations of inputs, like for "hippocampal region CA3," "hippocapal region ca3" or "hippocampal regin ca3" and so on would be an acceptable response. He directed me toward fuzzy string searching and the Levenshtein package. The Levenshtein distance can calculate string similarities and is one application of approximate string matching, which is a way to find strings that approximately match a pattern. This was my first foray into installing and using an external library. While working on this for quite some time, I had the realization that, "Oh my god, there must be tons of packages out there." And as one would expect, my mind was blown with all the possibilities. 3) randint -- A friend suggested that I use random numbers to vary the messages passed to dead(). This ended up being a useful challenge, so I appreciate the recommendation (see line 24). Pending Question: -- I'm still a little confused about the functional differences between modules, libraries, and packages. Oh and while I was working on improving the game, my first version was on the O'Reilly Radar blog. Sweet. Here's the link & description: Hippocampus Text Adventure -- written as an exercise in learning Python, you explore the hippocampus. It's simple, but I like the idea of educational text adventures. 
(Well, educational in that you learn about more than the axe-throwing behaviour of the cave-dwelling dwarf) *Although I probably won't be updating this project much in the near future, if you have any suggestions for improvements on the code, please share. New code and the imported dictionary are below. Also, I've added the code to github. Enjoy.

from sys import exit
from random import randint
from hippodict import responses
import Levenshtein

def geta_response(response_list, tolerance=2, error_msg=responses['TryAgain']):
    next = raw_input("> ")
    for response in response_list:
        if Levenshtein.distance(next.lower(), response) <= tolerance:
            return response
    print error_msg

def go(directions):
    response = geta_response(directions.keys(), 2)
    destination = directions[response]
    destination()

def dead():
    death = ["Have a good one.",
             "Aww, why not? Ok, maybe you'll want to play another day.",
             "The hippocampus will miss you. I bet you'll miss it.",
             "You missed out on a damn good game.",
             "You just missed out on such a good adventure. Bummer.",
             "Ok, you missed out. But no problem.",
             "Aww, too bad. Guess you don't *really* want to explore the hippocampus.",
             "Maybe another time!"]
    print death[randint(0, len(death)-1)]
    exit(1)

def entorhinal_cortex():
    print responses['Layer2and3']
    response = geta_response(["2", "3"], 0, responses['TryAgain'])
    how_much = int(response)
    if how_much == 2:
        print "Welcome! Now you have the option of exploring the dentate gyrus or the hippocampal region CA3. Enter your choice."
        go({"dentate gyrus": dentate_gyrus, "hippocampal region ca3": hippocampal_region_CA3})
    elif how_much == 3:
        print "Sweet pick. Since the entorhinal cortex has 2 major projections, you have the option of exploring the hippocampal region CA1 or the subiculum. Enter your choice."
        go({"hippocampal region ca1": hippocampal_region_CA1, "subiculum": subiculum})

def dentate_gyrus():
    print responses['DentateGyrus']
    response = geta_response(["yes", "no"], 1, responses['TryAgain'])
    if response == "yes":
        print """The dentate gyrus primarily consists of granule cells, interneurons and pyramidal cells.""" """The dentate gyrus' main projection is to the CA3. So off to there you go!"""
        hippocampal_region_CA3()
    elif response == "no":
        dead()

def hippocampal_region_CA3():
    print responses['HippocampalRegionCA3']
    response = geta_response(["yes", "no"], 1, responses['TryAgain'])
    if response == "yes":
        print """The CA3 plays a role in the encoding of new spatial information within short-term memory with a duration of seconds and minutes.""" """The CA3's main projection is to the CA1. So that's where you're headed to next."""
        hippocampal_region_CA1()
    elif response == "no":
        dead()

def subiculum():
    print responses['Subiculum']
    go({"stay": dead, "leave": entorhinal_cortex_2})

def hippocampal_region_CA1():
    print responses['HippocampalRegionCA1']
    go({"subiculum": subiculum, "entorhinal cortex": entorhinal_cortex_2})

def entorhinal_cortex_2():
    print responses['EntorhinalCortex2']
    response = geta_response(["yes"], 1, responses['TryAgain'])
    if response == "yes":
        entorhinal_cortex()
    else:
        print "Thanks for playing."

def start():
    print responses['StartHere']
    go({"ready": entorhinal_cortex})

start()

responses = { 'Layer2and3' : "You are currently in between layer 2 and 3 of the entorhinal cortex. You have the option of traveling in either layer.
Which layer do you want to walk along?", 'DentateGyrus' : """Ah, the dentate gyrus. Good pick. The dentate gyrus is one of the few regions in the adult brain where neurogenesis is thought to take place. It also plays a role in depression.""" """ There are three major types of neurons found here.""" """ Would you like to learn about them?""", 'HippocampalRegionCA3' : """ Now you're in the hippocampal region CA3 or simply, the CA3.""" """ There are a few proposed behavioral functions of the CA3.""" """ Would you like to learn about one?""", 'Subiculum' : """Hey, now you're in the most inferior region of the hippcampus called the subiculum. It lies between the entorhinal cortex and the CA1.""" """ Do you want to stay or leave?""", 'HippocampalRegionCA1' : """Welcome to the hippocampal region CA1 or simply, CA1.""" """This region sends a large amount of output to the subiculum and some inputs back to the entorhinal cortex.""" """Which way would you like to travel?""", 'EntorhinalCortex2' : """You have now, more or less, made a full loop within the connections of the hippocampus. Congratulations!""" """If you want to explore a bit more, please replay the game! Is this what you want to do?""", 'StartHere' : """Hello. You're about to explore the hippocampus. """ """There will be many paths and therfore, many decisions you can make.""" """Enter 'Ready!' when prompted and you can begin your journey.""", 'TryAgain' : "That doesn't make much sense. Try typing that in again."} Wednesday, August 10, 2011 A Hippocampus Anatomy Game Problem: Friday, June 10, 2011 Lessons From A Beginner Programmer I've recently started an exciting journey into programming and thought it might be useful to share a few things that I've learned so far. 1.) Don't Be Intimidated. When I first gave serious consideration to learning how to program, a friend gave me some useful advice that stuck with me: "Whatever you do, however you decide to approach this, don't be intimated." My first thought was full of that naive yet self-confident, "Well, yeah of course." The more time I spent on it, both independently and with others, I understood why she said that. It's easy to be intimidated by the hacker culture, the concepts, the skill level of other programmers and so on. Although I admit to sometimes being discouraged by this, her advice has become a mantra in my mind. Don't be discouraged by programmers (if you're exposed to any), don't be discouraged by the concepts, and more importantly, don't be discouraged by the journey. You'll know if you truly want to learn if you can push through these obstacles. 2.) Pick A Project. I had been wanting to pick up programming since college, but not enough to put any significant energy toward it. I brushed over introductory chapters of CS books and perused through lectures on various college websites but that's about as far as I made it. This desire changed when I began thinking of a web project that I wanted to use for myself. At some point I decided, "Hey, I want to see this out there. Instead of waiting for it to occur, why not start learning and work toward it on my own?" Around the time I started learning programming, I also started learning how to read and write Devanagari (Hindi). So I got some Hindi textbooks from the library and just went at it. But the work quickly began to feel meaningless because I wasn't motivated by a project, like for example, a book in Hindi to read or a letter to write to a relative in India. So pick a project that matters to you. 
Think of what you want to accomplish and how enabled you will feel by doing it. The passion for your project will help you through the "hard stuff." It’s immensely critical that you really care about the idea and have a project to work toward. If this doesn't exist, learning may feel a little pointless at times. Figure out your intentions and dedicate yourself to it. 3.) Be Patient. Take your time working through the exercises and problem sets. The hard work you do on them really pays off in the long run. Learning the basics may feel like low-level skills, but it's worth it to master these from the get go before moving on. Along with this, keep in mind that you will always have more to learn. There's no end to your proficiency and creations, so regardless of what, take as long as it takes. Especially in the beginning, be patient with the material and yourself as you learn. 4.) Examine Yourself. Since it had been some time since I picked up anything new with such ferocity, I realized that being a beginner, at anything, really gives you key insights into yourself. How you learn, what your motivations are, what discourages you, etc. It can be hard to pick up challenging activities, because, well, they are challenging. It’s a learning process and that can be difficult if you’ve generally been doing the things you know well often. So while you're learning, think seriously about your learning techniques and strengths/weaknesses. 5.) Experiment With It. This one takes a little practice but it's been the most important one for me. When you're learning the basics, really play with each part of the code. Whenever I would ask a friend a coding question, often times the response would include the answer and encouragement to experiment with it. For instance, if an exercise asks you to solve one problem, see how many other ways you can solve it. See how other people tried to solve a similar problem. How is their coding style different than yours? Reading and playing with a lot of code will really help with your fluency in the language. All this experimentation will improve the quality of your learning and will ultimately make you a better coder. Don't forget to seek out programming events/workshops in your area, set up coding dates with friends, and of course refer to online communities.
http://triptalearnspython.blogspot.com/
CC-MAIN-2013-48
en
refinedweb
The official source of information on Managed Providers, DataSet & Entity Framework from Microsoft The information in this post is out of date. Visit msdn.com/data/ef for the latest information on current and past releases of EF.: I also want to be able to query for a product based on ID. Finally, given a customer, I would like to be able to add an Order to the database Before we get into the details, there are two things I’d like to get out of the way first: Let’s start with the work we have to do on Customer entity and look at what a repository for dealing with Customer might look like: public interface { GetCustomerById(string id); < > FindByName(string name); void AddCustomer( customer);} This repository interface seems to meet all the requirements around Customer: This sounds good to me for the moment. Defining an interface like this for your repository is a good idea, especially if you are interested in writing tests using mocks or fakes and it allows for better unit testing by keeping your database out of the equation entirely. There are blog posts coming in the future that cover testability, mocks and fakes, etc. You might take this interface definition a bit further and define a common IRepository for dealing with concerns that are common for multiple repository types. This is fine thing to do if you see that it works for you. I don’t necessarily see the need yet for this particular example and so I’ll pass on it for now. It is entirely possible that this becomes important as you add more repositories and refactor. Let’s take this repository and see how we might build an implementation of it that leverages Entity Framework to enable data access. First of all, I need an ObjectContext that I can use to query for data. You might be tempted to handle ObjectContext instantiation as a part of the repository’s constructor – but it is a good idea to leave that concern out of the repository and deal with that elsewhere. Here’s my constructor: public CustomerRepository( context){ if (context == null) throw new ("context"); _context = context;} In the above snippet, NorthwindContext is my typed ObjectContext type. Let’s now provide implementations for the methods required by our ICustomerRepository interface. GetCustomerById is trivial to implement, thanks to LINQ. Using standard LINQ operators, we can implement GetCustomerById like this: public GetCustomerById(string id){ return _context.Customers.Where(c => c.CustomerID == id).Single();} Similarly, FindByName could look like this. Once again, LINQ support makes this trivial to implement: > FindByName(string name){ return _context.Customers.Where( c => c.ContactName.StartsWith(name) ).ToList(); } Note that I chose to expose the results as IEnumerable<T> – you might choose to expose this as an IQueryable<T> instead. There are implications to doing this – in this case, I am not interested in exposing additional IQueryable based query composition over what I return from my repository. And finally, let’s see how we might implement AddCustomer: public void AddCustomer( customer){ _context.Customers.AddObject(customer);} You may be tempted to also implement the save functionality as a part of the AddCustomer method. While that may work for this simple example, it is generally a bad idea – this is exactly where the Unit of Work comes in and we’ll see in a bit how we can use the this pattern to allow us to implement and coordinate Save behavior. 
Here’s the complete implementation of CustomerRepository that uses Entity Framework for handling persistence: public class : { private _context; public CustomerRepository( context) { if (context == null) throw new ("context"); _context = context; } public GetCustomerById(string id) { return _context.Customers.Where(c => c.CustomerID == id).Single(); } public > FindByName(string name) { return _context.Customers.Where(c => c.ContactName.StartsWith(name)) .AsEnumerable< >(); } public void AddCustomer( customer) { _context.Customers.AddObject(customer); }} Here’s how we might use the repository from client code: repository = new (context); c = new ( ... );repository.AddCustomer(c);context.SaveChanges(); For dealing with my Product and Order related requirements, I could define the following interfaces (and build implementations much like CustomerRepository). I’ll leave the details out of this post for brevity. { GetProductById(int id);}public interface { void AddOrder( order);} You may have noticed this already; even though we didn’t implement any specific pattern to explicitly allow us to group related operations into a unit of work, we are already getting Unit of Work functionality for free with NorthwindContext (our typed ObjectContext). The idea is that I can use the Unit of Work to group a set of related operations – the Unit of Work keeps track of the changes that I am interested in until I am ready to save them to the database. Eventually, when I am ready to save, I can do that. I can define an interface like this to define a “Unit of Work”: { void Save();} Note that with a Unit of Work, you might also choose to implement Undo / Rollback functionality. When using Entity Framework, the recommended approach to undo is to discard your context with the changes you are interested in undoing. I already mentioned that our typed ObjectContext (NorthwindContext) supports the Unit of Work pattern for the most part. In order to make things a bit more explicit based on the contract I just defined, I can change my NorthwindContext class to implement the IUnitOfWork interface: { public void Save() { SaveChanges(); }. . . I have to make a small adjustment to our repository implementation after this change: _context; public CustomerRepository( unitOfWork) { if (unitOfWork == null) throw new ("unitOfWork"); _context = unitOfWork as ; } public GetCustomerById(string id) { return _context.Customers.Where(c => c.CustomerID == id).Single(); } public >(); } public void AddCustomer( customer) { _context.Customers.AddObject(customer); }} That’s it – we now have our IUnitOfWork friendly repository, and you can use the IUnitOfWork based context to even coordinate work across multiple repositories. Here’s an example of adding an order to the database that requires the work of multiple repositories for querying data, and ultimately saving rows back to the database: unitOfWork = new (); customerRepository = new (unitOfWork); customer = customerRepository.GetCustomerById("ALFKI"); productRepository = new product = productRepository.GetById(1); orderRepository = new order = new (customer); order.AddNewOrderDetail(product, 1); orderRepository.AddOrder(order);unitOfWork.Save();
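Several of the snippets above lost their type names when the page was captured (the generic parameters were stripped out), so here is a reconstructed sketch of the pieces the prose describes: the ICustomerRepository interface, the IUnitOfWork contract, and a CustomerRepository built on the NorthwindContext typed ObjectContext. The exact signatures are inferred from the surrounding text rather than copied from the original article, and Customer/NorthwindContext are assumed to come from the generated Entity Framework model.

public interface ICustomerRepository
{
    Customer GetCustomerById(string id);
    IEnumerable<Customer> FindByName(string name);
    void AddCustomer(Customer customer);
}

public interface IUnitOfWork
{
    void Save();
}

// NorthwindContext is the typed ObjectContext; it already tracks changes,
// so implementing IUnitOfWork is just a matter of exposing SaveChanges().
public partial class NorthwindContext : IUnitOfWork
{
    public void Save()
    {
        SaveChanges();
    }
}

public class CustomerRepository : ICustomerRepository
{
    private readonly NorthwindContext _context;

    public CustomerRepository(IUnitOfWork unitOfWork)
    {
        if (unitOfWork == null)
            throw new ArgumentNullException("unitOfWork");
        _context = unitOfWork as NorthwindContext;
    }

    public Customer GetCustomerById(string id)
    {
        return _context.Customers.Where(c => c.CustomerID == id).Single();
    }

    public IEnumerable<Customer> FindByName(string name)
    {
        return _context.Customers
                       .Where(c => c.ContactName.StartsWith(name))
                       .ToList();
    }

    public void AddCustomer(Customer customer)
    {
        _context.Customers.AddObject(customer);
    }
}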
http://blogs.msdn.com/b/adonet/archive/2009/06/16/using-repository-and-unit-of-work-patterns-with-entity-framework-4-0.aspx
CC-MAIN-2013-48
en
refinedweb
07-01-2012 05:47 AM I have been itching to use some of the C++11 features for quite some time now, and I was glad to find out that QNX SDK uses gcc 4.6.3, which supports some of its features including "auto". However compiling cascades app with it failed, because it's not on by default. I tried adding -std=c++11 flag to the .pro file, but that didn't work, because it uses qcc driver instead of gcc and it doesn't seem to have this feature. Is there a flag to switch it on? 07-04-2012 02:00 PM Try -Wc,-std=c++0x 07-06-2012 09:19 AM 07-11-2012 05:41 PM - edited 07-11-2012 05:41 PM Hi Daniel, Sorry it took me so long to get back, I have tried it and it failed on the next step, in qlist.h there is an include statement: #include <initializer_list> And it couldn't find this header. I can see the header here: <SDK>/target/qnx6/usr/include/c++/4.6.3/initialize /Developer/SDKs/bbndk-10.0.4-beta/target/qnx6/usr/ So I have to fix that by adding the following lines to the .pro file(I'm on Mac): INCLUDEPATH += /Developer/SDKs/bbndk-10.0.4-beta/target/qnx6/usr/ INCLUDEPATH += /Developer/SDKs/bbndk-10.0.4-beta/target/qnx6/usr/ Is there a way to make that not platform specific? 07-12-2012 11:05 AM Hi, the bits/c++config.h file is platform specific, that's why it is hidden in the i486-pc-nto-qnx8.0.0 folder. Normally the default compiler should find this automatically but I suspect you may need to have to either create a new QCC config setting (Project Properties->c++ build->environment->QCC_CONF_PATH) and configure all of that for proper c++11 compilation. (But that kind of defeats the purpose of having something non-platform specific I guess) For now you may have to stick with $(QNX_TARGET)/usr/include/c++/4.6.3 in your .pro file. Cheers Selom 07-12-2012 11:07 AM
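(For later readers, pulling the thread's suggestions into one place: a minimal .pro fragment might look like the following. The -Wc,-std=c++0x flag and the $(QNX_TARGET) include path come from the replies above; using QMAKE_CXXFLAGS to pass the flag is my assumption about where it belongs in a Cascades project file, and the commented target-specific path is only an example of the bits/c++config.h folder mentioned in the last reply.)

# pass the C++0x/C++11 switch through the qcc driver
QMAKE_CXXFLAGS += -Wc,-std=c++0x

# generic libstdc++ headers (initializer_list and friends)
INCLUDEPATH += $(QNX_TARGET)/usr/include/c++/4.6.3
# the target-specific folder holding bits/c++config.h may also be needed, e.g.
# INCLUDEPATH += $(QNX_TARGET)/usr/include/c++/4.6.3/i486-pc-nto-qnx8.0.0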
http://supportforums.blackberry.com/t5/Native-Development/C-11/m-p/1809457
CC-MAIN-2013-48
en
refinedweb
Threads and Processes “Well, since you last asked us to stop, this thread has moved from discussing languages suitable for professional programmers via accidental users to computer-phobic users. A few more iterations can make this thread really interesting…”eff-bot, June 1996 Overview This chapter describes the thread support modules provided with the standard Python interpreter. Note that thread support is optional, and may not be available in your Python interpreter. This chapter also covers some modules that allow you to run external processes on Unix and Windows systems. Threads When you run a Python program, execution starts at the top of the main module, and proceeds downwards. Loops can be used to repeat portions of the program, and function and method calls transfer control to a different part of the program (but only temporarily). With threads, your program can do several things at one time. Each thread has its own flow of control. While one thread might be reading data from a file, another thread can keep the screen updated. To keep two threads from accessing the same internal data structure at the same time, Python uses a global interpreter lock. Only one thread can execute Python code at the same time; Python automatically switches to the next thread after a short period of time, or when a thread does something that may take a while (like waiting for the next byte to arrive over a network socket, or reading data from a file). The global lock isn’t enough to avoid problems in your own programs, though. If multiple threads attempt to access the same data object, it may end up in an inconsistent state. Consider a simple cache: def getitem(key): item = cache.get(key) if item is None: # not in cache; create a new one item = create_new_item(key) cache[key] = item return item If two threads call the getitem function just after each other with the same missing key, they’re likely to end up calling create_new_item twice with the same argument. While this may be okay in many cases, it can cause serious problems in others. To avoid problems like this, you can use lock objects to synchronize threads. A lock object can only be owned by one thread at a time, and can thus be used to make sure that only one thread is executing the code in the getitem body at any time. Processes On most modern operating systems, each program run in its own process. You usually start a new program/process by entering a command to the shell, or by selecting it in a menu. Python also allows you to start new programs from inside a Python program. Most process-related functions are defined by the os module. See the Working with Processes section for the full story.
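Here is a minimal sketch of that locking idea using the standard threading module (the lock name and scaffolding are mine; create_new_item is the same placeholder as in the example above):

import threading

cache = {}
cache_lock = threading.Lock()

def getitem(key):
    cache_lock.acquire()
    try:
        item = cache.get(key)
        if item is None:
            # only one thread at a time can reach this point, so
            # create_new_item is called at most once per missing key
            item = create_new_item(key)
            cache[key] = item
        return item
    finally:
        cache_lock.release()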
http://www.effbot.org/librarybook/threads-and-processes-index.htm
CC-MAIN-2013-48
en
refinedweb
Hi there, I've got two simple questions. 1. Is "delete this" safe (I'm reading it is!?), assumming method will return (void) immediately after deleting. 2. Is the following code safe: ThanksThanksCode:void Layer::StrangeDelete() { shared_ptr<Layer> lockme = this_ptr(); // lockme holds shared_ptr to this now _Parent->DeleteChild(); // delete original shared_ptr // lockme goes out of scope and there is no more strong references // "delete this" will take place, but it is no longer in the method scope, so is it safe or not? }
http://cboard.cprogramming.com/cplusplus-programming/133882-object-self-destruction.html
CC-MAIN-2013-48
en
refinedweb
Man Page Manual Section... (3) - page: sem_timedwait

NAME
sem_wait, sem_timedwait, sem_trywait - lock a semaphore

SYNOPSIS
#include <semaphore.h>

int sem_wait(sem_t *sem);
int sem_trywait(sem_t *sem);
int sem_timedwait(sem_t *sem, const struct timespec *abs_timeout);

Link with -lrt or -pthread.

Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
sem_timedwait(): _POSIX_C_SOURCE >= 200112L || _XOPEN_SOURCE >= 600

DESCRIPTION
sem_wait() decrements (locks) the semaphore pointed to by sem. If the semaphore's value is greater than zero, then the decrement proceeds, and the function returns immediately. If the semaphore currently has the value zero, then the call blocks until either it becomes possible to perform the decrement (i.e., the semaphore value rises above zero), or a signal handler interrupts the call.

sem_trywait() is the same as sem_wait(), except that if the decrement cannot be immediately performed, the call returns an error (errno set to EAGAIN) instead of blocking.

sem_timedwait() is the same as sem_wait(), except that abs_timeout specifies a limit on the amount of time that the call should block if the decrement cannot be immediately performed.

RETURN VALUE
All of these functions return 0 on success; on error, the value of the semaphore is left unchanged, -1 is returned, and errno is set to indicate the error.

CONFORMING TO
POSIX.1-2001.

NOTES
A signal handler always interrupts a blocked call to one of these functions, regardless of the use of the sigaction(2) SA_RESTART flag.

SEE ALSO
clock_gettime(2), sem_getvalue(3), sem_post(3), sem_overview(7), time(7)
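(An illustrative usage sketch, not the example program from the original page: wait up to five seconds for the semaphore using an absolute timeout based on CLOCK_REALTIME. The semaphore is assumed to have been initialised elsewhere with sem_init().)

#include <semaphore.h>
#include <time.h>
#include <stdio.h>
#include <errno.h>

sem_t sem;   /* assume sem_init(&sem, 0, initial_value) has been called elsewhere */

int wait_with_timeout(void)
{
    struct timespec ts;

    if (clock_gettime(CLOCK_REALTIME, &ts) == -1)
        return -1;
    ts.tv_sec += 5;                       /* absolute deadline: now + 5 seconds */

    while (sem_timedwait(&sem, &ts) == -1) {
        if (errno == EINTR)
            continue;                     /* interrupted by a signal handler; retry */
        if (errno == ETIMEDOUT)
            fprintf(stderr, "sem_timedwait() timed out\n");
        return -1;
    }
    return 0;                             /* semaphore successfully decremented */
}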
http://linux.co.uk/documentation/man-pages/subroutines-3/man-page/?section=3&page=sem_timedwait
CC-MAIN-2013-48
en
refinedweb
Pure virtual base class for the vector space used by NOX::Epetra::Vectors. More... #include <NOX_Epetra_VectorSpace.H> Pure virtual base class for the vector space used by NOX::Epetra::Vectors. Implemented in NOX::Epetra::VectorSpaceL2, and NOX::Epetra::VectorSpaceScaledL2.
http://trilinos.sandia.gov/packages/docs/r11.0/packages/nox/doc/html/classNOX_1_1Epetra_1_1VectorSpace.html
CC-MAIN-2013-48
en
refinedweb
realmagick.com The shrine of knowledge. Brethren In Christ A selection of articles related to brethren in christ. Pdf Resources - The Brethren in Christ and Views on Jesus Christ and Salvation - home.messiah.edu - Brethren In Christ Resource Catalog - Proceeds support Brethren in Christ World Missions. 978-1-928915-34-8 ren – the Brethren in Christ, the Old Order River Brethren, and the. - - Home Frontier and Foreign Missionary Society of the United - Home Frontier and Foreign Missionary Society of the United Brethren in Christ. ( United States) v. Great Britain. - untreaty.un.org - BRETHREN IN CHRIST FOUNDATION, INC. GIFT DEPOSIT - The Brethren in Christ Foundation, Inc. - - The Massachusetts Catholic Conference; The Brethren in Christ - International Pentecostal Holiness Church; The Missionary Church; Open Bible. - Suggested News Resources - Pa. church hosts sneaker giveaway in conjunction with Samaritan's Feet - By CJ LOVELACE cj.lovelace@herald-mail.com Lines were already forming Saturday morning when Doug Shatzer arrived at Five Forks Brethren In Christ Church. - HOUSE OF FAITH PROFILE: Five Forks Brethren in Christ Church - By Anonymous Pastors Wilbur F. “Buck” Besecker, H. Ray Kipe, William A. - Antrim Brethren in Christ Church offers school supplies, clothing for free on - By SAMANTHA COSSICK Staff writer KAUFFMAN -- Members of Antrim Brethren in Christ Church will host a free clothing and school supply distribution on Saturday. - Church helps out with school supplies, clothing - By Jim Hook Senior writer Big turnout: Families receive school clothes and supplies from the Antrim Brethren in Christ Church on Saturday. - Police blotter - 8/27; Police seeking witnesses to crash - DWELLING FIRE: Cleona, Annville Union Hose, Neversink and Palmyra Citizens fire companies; fire police; and Central Medical Transport and First Aid and Safety Patrol ambulances responded to Fairland Brethren in Christ Church, 529 W. Penn Ave. Suggested Web Resources - Brethren in Christ Church - Official web site of this pacifist denomination with an emphasis on a simple lifestyle. - - Brethren in Christ Church - History - - Brethren In Christ World Missions - Brethren In Christ World Missions homepage. - - Brethren in Christ Church - Wikipedia, the free encyclopedia - The Brethren in Christ Church (BIC) is an Anabaptist Christian denomination with roots in the Mennonite church, pietism, and Wesleyan holiness. - en.wikipedia.org - Church of the United Brethren in Christ, USA -lampang province geography barium meal coleslaw cavite city the city flag aerophone dream interpretation sorcery def leppard influences Related books About - Contact - Advertising
http://www.realmagick.com/brethren-in-christ
CC-MAIN-2013-48
en
refinedweb
I am trying to check to ensure the user entered a number instead of a character for this program. I was thinking of using isalpha and saying if that isalpha is true then tell user and end. But I have reconsidered and think that what I really need to do is use isdigit, and say that if isdigit is false (i.e the user inputs anything other then a number), then tell the user and exit. But I am still stuck with isdigit like I was isalpha. I have tried various combinations on trying to get it to recognize the if statement and have had no luck. Currently, if I put in a character instead of a digit, it ignores my isdigit statement and thinks it has all the data it needs to go to the end instead of exiting. Can someone please sheed some light onto what I am doing wrong here. Or perhaps should I be trying something else entirely for what I am trying to do? My current code is: Thanks for any help on this.Thanks for any help on this.Code: #include <stdio.h> #include <stdlib.h> #include <ctype.h> #include <conio.h> int main (void) { typedef struct { int hr; int min; int sec; } TIME; TIME sTime; TIME fTime; TIME tTime; printf("Calculate total time between start and finish. Format for data is hh:mm:ss\n"); printf("\nEnter a start time:\t"); scanf("%d:%d:%d", &sTime.hr, &sTime.min, &sTime.sec); if (sTime.hr >24 || sTime.min >60 || sTime.sec >60) { printf("\aInvalid Option\n"); printf("\nPress any key to exit.\n"); getch(); return 0; } if (isdigit(sTime.hr || sTime.min || sTime.sec)) { printf("\aInvalid Option\n"); printf("\nPress any key to exit.\n"); getch(); return 0; } printf("\nEnter a finish time:\t"); scanf("%d:%d:%d", &fTime.hr, &fTime.min, &fTime.sec); if (fTime.hr >24 || fTime.min >60 || fTime.sec >60) { printf("\aInvalid Option\n"); printf("\nPress any key to exit.\n"); getch(); return 0; } tTime.hr = fTime.hr - sTime.hr; tTime.min = fTime.min - sTime.min; tTime.sec = fTime.sec - sTime.sec; if (tTime.hr < 0) tTime.hr += 12; printf("\n\n\nThe total time: %2d hr(s),%2d min(s), and%2d sec(s)\n", tTime); return 0; } DD :confused:
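(Footnote for later readers: isdigit() classifies a single character code, so calling it on the ints that scanf has already converted will not catch bad input. One common alternative, sketched here against the same hh:mm:ss format, is to test scanf's return value, which reports how many fields were successfully converted. This is only an illustration of that approach, not a full rewrite of the program above.)

#include <stdio.h>

/* returns 1 if the user typed a valid hh:mm:ss triple, 0 otherwise */
int read_time(int *hr, int *min, int *sec)
{
    if (scanf("%d:%d:%d", hr, min, sec) != 3)
        return 0;                        /* non-numeric or malformed input */
    if (*hr < 0 || *hr > 24 || *min < 0 || *min > 60 || *sec < 0 || *sec > 60)
        return 0;                        /* out of range */
    return 1;
}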
http://cboard.cprogramming.com/c-programming/23043-correct-use-isdigit-printable-thread.html
CC-MAIN-2013-48
en
refinedweb
This template inserts boilerplate information about a template, and adds the page to Category:Template. Use it on template pages. The content that it inserts is: This {{Template}} (or {{Template:Template}}, if this page is not in the "Template" namespace). For more details on how to use templates see "A Quick Guide to Templates". [End of content that it inserts]
http://techbase.kde.org/index.php?title=Template:Template&oldid=67424
CC-MAIN-2013-48
en
refinedweb
27 December 2007 10:49 [Source: ICIS news] By Ed Cox

LONDON (ICIS news)--Record high first-quarter European ethylene (C2) and propylene (C3) settlements have not offset cracker operators' fears for 2008 margins because of unprecedented upstream energy pressure. Despite first-quarter ethylene sitting at €1,023/tonne FD (free delivered) NWE (northwest Europe) [...]

With the 2008 cracker maintenance slate easier than 2007, consumers reason that supply should be less of an issue, but the real headache for the market will again stem from fears that energy will remain volatile and expensive. No-one is confident enough to make a firm prediction on where naphtha and crude oil pricing will go. Only facts speak for themselves: 2007 saw Brent crude close to $100/bbl, with naphtha price ideas breaching $850/tonne CIF (cost, insurance and freight) NWE, both record levels. Naphtha prices have doubled in just over two years.

"We did our calculations and we needed plus €130/tonne on first-quarter ethylene to reach mid-cycle margin levels. Yes, we are back in the black after seeing red numbers in December, but margins are narrow," said one cracker operator.

Finding compromise prices which satisfy both cracker margins and concerns over derivative markets will surely be difficult again. "I accept buyers have good arguments about their own cost pressures," commented another producer. "But if our margins are too low we will have to cease production. The fourth quarter was catastrophic. We cannot compromise on prices, and it's up to consumers to do their job separately."

Producers remain confident over signs of good nominated contract demand, not expecting any great improvement in supply. 2007 saw potentially more than 500,000 tonnes/year of extra ethylene brought on line in [...] Suppliers point to increases in derivative capacity, such as work due to lift BASF's ethylene oxide production and SABIC's new low density polyethylene (PE) plant in the [...] Any increase in domestic supply would not change the ethylene balance greatly, said one producer; rather it would limit opportunities for imports from the Middle East, which has looked more actively to Europe during tight periods this year. Furthermore, volatile cracker performance would mean a balanced market rather than any oversupply, added the source.

Buyers, however, will judge the situation differently given the widening gap between European and Asian ethylene prices, with one confirming it would look to take advantage of any arbitrage options. First-quarter ethylene at €1,023/tonne equates to $1,483/tonne, compared with mid-December SE Asian spot at $1,170-1,225/tonne CFR (cost and freight). A real concern is the cost advantage that cheaper ethylene gives derivative manufacturers in Asia, which could attract cheaper imports to [...]

Much will depend on the ability of polymer markets to absorb the record high olefins prices. The outlook for the first half of 2008 is good, with both PE and polypropylene (PP) markets expected to perform well. However, the increase in Middle East and Indian polymer capacity is eventually expected to have a clear effect in [...]

"If naphtha prices stay around $820/tonne CIF NWE in the first quarter, then polymer markets should enjoy greater margins than cracker operators," said one olefins consumer. Sellers have bemoaned the lack of cracker margins in November and a negative cost position in December, while working with very good nominated contract volume for the first quarter.
General economic fears will persist, especially over the health of the [...]

The nature of contract systems is always a topic, with questions persisting over what form the bi-monthly ethylene settlement will take. One of the three producers involved pulled out in 2007, with a large consumer following. This left two producers and four consumers, one of which joined late in the year. Whether praised or criticised, there was no doubt that the December-January settlement up €65/tonne was an often quoted number in the context of first-quarter discussions.

In terms of relative pricing, the gap between ethylene and propylene could widen further, with sellers more optimistic on C2 demand than C3. The loss of PP capacity in [...]

Reported 2008 European cracker shutdowns (* unconfirmed)
http://www.icis.com/Articles/2007/12/27/9088810/outlook-08-costs-key-in-europe-olefins.html
CC-MAIN-2013-48
en
refinedweb
AndyS01 I want to create a slider that has values representing colors, like the slider on the _ChooseColor() GUI in the Misc.au3 UDF, but the max value for a slider is 32767. Here is my test code: #include <GUIConstantsEx.au3> #include <WindowsConstants.au3> #include <SliderConstants.au3> #include <Misc.au3> $hGUI = GUICreate("Test", 100, 560) $BtnID = GUICtrlCreateButton("Choose", 10, 10, 50, 25) ;~ GUICtrlSetOnEvent($BtnID, "handle_choose_btn") $flags = BitOR($TBS_AUTOTICKS, $TBS_VERT) $iSliderID = GUICtrlCreateSlider(10, 50, 40, 420, $flags) $hSlider_hWnd = GUICtrlGetHandle($iSliderID) $iMaxVal = 32767 GUICtrlSetLimit($iSliderID, $iMaxVal, 0) ; change min/max value GUISetState() GUIRegisterMsg($WM_VSCROLL, "WM_V_Slider") While 1 Switch GUIGetMsg() Case $GUI_EVENT_CLOSE Exit Case $GUI_EVENT_PRIMARYUP ToolTip("") case $BtnID handle_choose_btn() EndSwitch WEnd ; React to a slider movement Func WM_V_Slider($hWnd, $iMsg, $wParam, $lParam) #forceref $hWnd, $iMsg, $wParam If $lParam = $hSlider_hWnd Then $iValue = GUICtrlRead($iSliderID) ConsoleWrite("+++: $iValue = " & $iValue & @CRLF) ToolTip(getRGB($iValue)) EndIf Return $GUI_RUNDEFMSG EndFunc ;==>WM_V_Slider ; Extract the RGB components from the color value Func getRGB($iColorVal) Local $hexstr, $b, $g, $r, $rgb $hexstr = StringFormat("%06X", $iColorVal) $r = StringMid($hexstr, 1, 2) $g = StringMid($hexstr, 3, 2) $b = StringMid($hexstr, 5, 2) $rgb = StringFormat("" & $iColorVal & " - rgb: %02s %02s %02s", $r, $g, $b) Return ($rgb) EndFunc ;==>getRGB Func handle_choose_btn() $color = _ChooseColor(2, 255, 0, $hGUI) ConsoleWrite("+++: $color = " & Hex($color, 8) & @CRLF) EndFunc ;==>handle_choose_btn - By duzers Hello, I have to write script ( as simple as possible) to read messages from other scripts (something like queue to run). How to start? Which groups of functions? Any ida? THX - By xiantez Hey!! - Recommended Posts You need to be a member in order to leave a comment Sign up for a new account in our community. It's easy!Register a new account Already have an account? Sign in here.Sign In Now
https://www.autoitscript.com/forum/topic/196921-reading-pixel-colors-from-few-windows-at-same-time/
CC-MAIN-2022-05
en
refinedweb
Hi, in this post I am keen to illustrate the usage of Entity Framework 6 with MariaDB. In my years of experience using Entity Framework it has improved a lot and come a long way, and now I like it so much that usually all my apps which interact with databases will most likely have a sprinkle of Entity Framework in them.

So, getting down to business: create a new console application and add the following NuGet packages:

Once the above packages are added to the project, create the following class:

MySqlDemoDbContext.cs

using System.Data.Entity;
using MySql.Data.Entity;   // or MySql.Data.EntityFramework, depending on the connector version

namespace ConsoleApp1
{
    [DbConfigurationType(typeof(MySqlEFConfiguration))]
    class MySqlDemoDbContext : DbContext
    {
        public MySqlDemoDbContext() : base("name=MySqlDemoDb")
        {
        }
    }
}

and the corresponding config file in the project will have the following entry:

app.config

<connectionStrings>
  <add name="MySqlDemoDb" connectionString="server=ServerName;port=3306;database=DemoDB;uid=UserName;password=****" providerName="MySql.Data.MySqlClient" />
</connectionStrings>

And voila, done: the application is now configured to leverage Entity Framework features with a MariaDB backend.

Program.cs

using System.Linq;

class Program
{
    static void Main(string[] args)
    {
        using (var db = new MySqlDemoDbContext())
        {
            var zz = db.Database.SqlQuery<string>("SELECT FirstName FROM Test;").ToList();
            db.Database.ExecuteSqlCommand("UPDATE Test SET FirstName = 'User' WHERE Id = 4;");
        }
    }
}

The reference articles for the above configuration are listed below:

Hope it helps.
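(To go beyond raw SQL, one could also map the Test table to an entity and query it with LINQ. This is only a sketch: the entity and property names are guesses based on the SELECT/UPDATE statements above, and the DbSet shown in the comment is an assumed addition to MySqlDemoDbContext, not something from the original post.)

using System;
using System.Data.Entity;
using System.Linq;

public class Test
{
    public int Id { get; set; }
    public string FirstName { get; set; }
}

// Assumed addition to MySqlDemoDbContext (if the table is literally named "Test",
// a [Table("Test")] attribute on the entity may be needed to override pluralisation):
//     public DbSet<Test> Tests { get; set; }

class LinqUsage
{
    static void Query()
    {
        using (var db = new MySqlDemoDbContext())
        {
            var names = db.Tests.Select(t => t.FirstName).ToList();  // replaces the raw SELECT
            var row = db.Tests.Find(4);                               // primary-key lookup
            if (row != null)
            {
                row.FirstName = "User";                               // replaces the raw UPDATE
                db.SaveChanges();
            }
        }
    }
}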
https://amolpandey.com/2020/12/30/entity-framework-6-with-mariadb-using-c/
CC-MAIN-2022-05
en
refinedweb
Note: Scala API for Kafka Streams have been accepted for inclusion in Apache Kafka. We have been working with the Kafka team since the last couple of months working towards meeting the standards and guidelines for this activity. Lightbend and Alexis Seigneurin have contributed this library (with some changes) to the Kafka community. This is already available on Apache Kafka trunk and will be included in the upcoming release of Kafka. Hence it does not make much sense to update this project on a regular basis. For some time however, we will continue to provide support for fixing bugs only. A Thin Scala Wrapper Around the Kafka Streams Java APIA Thin Scala Wrapper Around the Kafka Streams Java API The library wraps Java APIs in Scala thereby providing: - much better type inference in Scala - less boilerplate in application code - the usual builder-style composition that developers get with the original Java API - complete compile time type safety The design of the library was inspired by the work started by Alexis Seigneurin in this repository. Quick StartQuick Start kafka-streams-scala is published and cross-built for Scala 2.11, and 2.12, so you can just add the following to your build: val kafka_streams_scala_version = "0.2.1" libraryDependencies ++= Seq("com.lightbend" %% "kafka-streams-scala" % kafka_streams_scala_version) Note: kafka-streams-scalasupports onwards Kafka Streams 1.0.0. The API docs for kafka-streams-scala is available here for Scala 2.12 and here for Scala 2.11. Running the TestsRunning the Tests The library comes with an embedded Kafka server. To run the tests, simply run sbt testOnly and all tests will run on the local embedded server. The embedded server is started and stopped for every test and takes quite a bit of resources. Hence it's recommended that you allocate more heap space to sbtwhen running the tests. e.g. sbt -mem 2000. $ sbt -mem 2000 > +clean > +test Type Inference and CompositionType Inference and Composition Here's a sample code fragment using the Scala wrapper library. Compare this with the Scala code from the same example in Confluent's repository. // Compute the total per region by summing the individual click counts per region.(_ + _) Implicit SerdesImplicit Serdes One of the areas where the Java APIs' verbosity can be reduced is through a succinct way to pass serializers and de-serializers to the various functions. The library uses the power of Scala implicits towards this end. The library makes some decisions that help implement more succinct serdes in a type safe manner: - No use of configuration based default serdes. Java APIs allow the user to define default key and value serdes as part of the configuration. This configuration, being implemented as java.util.Propertiesis type-unsafe and hence can result in runtime errors in case the user misses any of the serdes to be specified or plugs in an incorrect serde. kafka-streams-scalamakes this completely type-safe by allowing all serdes to be specified through Scala implicits. - The library offers implicit conversions from serdes to Serialized, Produced, Consumedor Joined. Hence as a user you just have to pass in the implicit serde and all conversions to Serialized, Produced, Consumedor Joinedwill be taken care of automatically. Default SerdesDefault Serdes The library offers a module that contains all the default serdes for the primitives. Importing the object will bring in scope all such primitives and helps reduce implicit hell. 
object DefaultSerdes { implicit val stringSerde: Serde[String] = Serdes.String() implicit val longSerde: Serde[Long] = Serdes.Long().asInstanceOf[Serde[Long]] implicit val byteArraySerde: Serde[Array[Byte]] = Serdes.ByteArray() implicit val bytesSerde: Serde[org.apache.kafka.common.utils.Bytes] = Serdes.Bytes() implicit val floatSerde: Serde[Float] = Serdes.Float().asInstanceOf[Serde[Float]] implicit val doubleSerde: Serde[Double] = Serdes.Double().asInstanceOf[Serde[Double]] implicit val integerSerde: Serde[Int] = Serdes.Integer().asInstanceOf[Serde[Int]] } Compile time typesafeCompile time typesafe Not only the serdes, but DefaultSerdes also brings into scope implicit Serialized, Produced, Consumed and Joined instances. So all APIs that accept Serialized, Produced, Consumed or Joined will get these instances automatically with an import DefaultSerdes._. Just one import of DefaultSerdes._ and the following code does not need a bit of Serialized, Produced, Consumed or Joined to be specified explicitly or through the default config. And the best part is that for any missing instances of these you get a compilation error. .. import DefaultSerdes._(_ + _) // Write the (continuously updating) results to the output topic. clicksPerRegion.toStream.to(outputTopic)
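(The "clicks per region" sample above lost most of its code in extraction. As a rough stand-in only: a small topology written against this wrapper might look like the following. The package path and the S-suffixed class names are recalled from the 0.2.x API and may not be exact, the topic names are invented, and the join step from the original Confluent example is omitted.)

import com.lightbend.kafka.scala.streams.{StreamsBuilderS, KTableS}
import com.lightbend.kafka.scala.streams.DefaultSerdes._

// groupByKey/reduce/to pick up the implicit Serialized/Produced/Consumed instances
// derived from the imported default serdes, so none are spelled out explicitly.
val builder = new StreamsBuilderS()
val clicks = builder.stream[String, Long]("user-clicks")
val totalPerUser: KTableS[String, Long] = clicks.groupByKey.reduce(_ + _)
totalPerUser.toStream.to("total-clicks-per-user")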
https://index.scala-lang.org/lightbend/kafka-streams-scala/kafka-streams-scala/0.2.1?target=_2.12
CC-MAIN-2022-05
en
refinedweb
Test 4.1.1.3 Test that concatenation functionality works properly. More... #include "lte-test-rlc-am-transmitter.h" Test 4.1.1.3 Test that concatenation functionality works properly. Test check if n SDUs are correctly contactenate to single PDU. Definition at line 149 of file lte-test-rlc-am-transmitter.h. Constructor. Test 4.1.1.3 Concatenation (n SDUs => One PDU) Definition at line 193 of file lte-test-rlc-am-transmitter.cc. Definition at line 198 of file lte-test-rlc-am-transmitter.cc. Implementation to actually run this TestCase. Subclasses should override this method to conduct their tests. Reimplemented from LteRlcAmTransmitterTestCase. Definition at line 203.
https://www.nsnam.org/docs/release/3.30/doxygen/class_lte_rlc_am_transmitter_concatenation_test_case.html
CC-MAIN-2022-05
en
refinedweb
SimpleRNN¶ - class paddle.nn. SimpleRNN ( input_size, hidden_size, num_layers=1, direction='forward', time_major=False, dropout=0.0, activation='tanh', weight_ih_attr=None, weight_hh_attr=None, bias_ih_attr=None, bias_hh_attr=None, name=None ) [source] Multilayer Elman network(SimpleRNN). It takes input sequences and initial states as inputs, and returns the output sequences and the final states. Each layer inside the SimpleRNN maps the input sequences and initial states to the output sequences and final states in the following manner: at each step, it takes step inputs(\(x_{t}\)) and previous states(\(h_{t-1}\)) as inputs, and returns step outputs(\(y_{t}\)) and new states(\(h_{t}\)).\[ \begin{align}\begin{aligned}h_{t} & = act(W_{ih}x_{t} + b_{ih} + W_{hh}h_{t-1} + b_{hh})\\y_{t} & = h_{t}\end{aligned}\end{align} \] where \(act\) is for activation. Using key word arguments to construct is recommended. - Parameters input_size (int) – The input size for the first layer’s cell. hidden_size (int) – The hidden size for each layer’s cell. num_layers (int, optional) – Number of layers. Defaults to 1. direction (str, optional) – The direction of the network. It can be “forward” or “bidirect”(or “bidirectional”). When “bidirect”, the way to merge outputs of forward and backward is concatenating. Defaults to “forward”. time_major (bool, optional) – Whether the first dimension of the input means the time steps. Defaults to False. dropout (float, optional) – The droput probability. Dropout is applied to the input of each layer except for the first layer. Defaults to 0. activation (str, optional) – The activation in each SimpleRNN cell. It can be tanh or relu. Defaults to tanh. weight_ih_attr (ParamAttr, optional) – The parameter attribute for weight_ih of each cell. Defaults to None. weight_hh_attr (ParamAttr, optional) – The parameter attribute for weight_hh of each cell. Defaults to None. bias_ih_attr (ParamAttr, optional) – The parameter attribute for the bias_ih of each cells. Defaults to None. bias_hh_attr (ParamAttr, optional) – The parameter attribute for the bias_hh of each cells. Defaults to None. name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name. - Inputs: inputs (Tensor): the input sequence. If time_major is True, the shape is [time_steps, batch_size, input_size], else, the shape is [batch_size, time_steps, hidden_size]. initial_states (Tensor, optional): the initial state. The shape is [num_layers * num_directions, batch_size, hidden_size]. If initial_state is not given, zero initial states are used. sequence_length (Tensor, optional): shape [batch_size], dtype: int64 or int32. The valid lengths of input sequences. Defaults to None. If sequence_length is not None, the inputs are treated as padded sequences. In each input sequence, elements whose time step index are not less than the valid length are treated as paddings. - Returns the output sequence. If time_major is True, the shape is [time_steps, batch_size, num_directions * hidden_size], else, the shape is [batch_size, time_steps, num_directions * hidden_size]. Note that num_directions is 2 if direction is “bidirectional” else 1. final_states (Tensor): final states. The shape is [num_layers * num_directions, batch_size, hidden_size]. Note that num_directions is 2 if direction is “bidirectional” (the index of forward states are 0, 2, 4, 6… and the index of backward states are 1, 3, 5, 7…), else 1. 
- Return type outputs (Tensor) - Variables: weight_ih_l[k]: the learnable input-hidden weights of the k-th layer. If k = 0, the shape is [hidden_size, input_size]. Otherwise, the shape is [hidden_size, num_directions * hidden_size]. weight_hh_l[k]: the learnable hidden-hidden weights of the k-th layer, with shape [hidden_size, hidden_size]. bias_ih_l[k]: the learnable input-hidden bias of the k-th layer, with shape [hidden_size]. bias_hh_l[k]: the learnable hidden-hidden bias of the k-th layer, with shape [hidden_size]. Examples import paddle rnn = paddle.nn.SimpleRNN(16, 32, 2) x = paddle.randn((4, 23, 16)) prev_h = paddle.randn((2, 4, 32)) y, h = rnn(x, prev_h) print(y.shape) print(h.shape) #[4,23,32] #[2,4,32]
https://www.paddlepaddle.org.cn/documentation/docs/en/api/paddle/nn/SimpleRNN_en.html
CC-MAIN-2022-05
en
refinedweb
Reacting to Blob storage events. See the Blob storage events schema article to view the full list of the events that Blob storage supports. blob storage events, see any of these quickstart articles: To view in-depth examples of reacting to Blob storage events by using Azure functions, see these articles: - Tutorial: Use Azure Data Lake Storage Gen2 events to update a Databricks Delta table. - Tutorial: Automate resizing uploaded images using Event Grid Note Storage (general after some delay, use the etag fields to understand if your information about objects is still up-to-date. To learn how to use the etag field, see Managing concurrency in Blob storage. - As messages can arrive out of order, use the sequencer fields to understand the order of events on any particular object. The sequencer field is a string value that represents the logical sequence of events for any particular blob name. You can use standard string comparison to understand the relative sequence of two events on the same blob name. - Storage events guarantees at-least-once delivery to subscribers, which ensures that all messages are outputted. However due to retries between backend nodes and services or availability of subscriptions, duplicate messages may occur. To learn more about message delivery and retry, see Event Grid message delivery and retry. -. Feature support This table shows how this feature is supported in your account and the impact on support when you enable certain capabilities. 1 Data Lake Storage Gen2 and the Network File System (NFS) 3.0 protocol both require a storage account with a hierarchical namespace enabled. 1 Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled. Next steps Learn more about Event Grid and give Blob storage events a try:
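(A small illustration of the sequencer guidance above. This is not from the article; it assumes the sequencer strings have already been extracted from two event payloads for the same blob name.)

using System;

static class BlobEventOrdering
{
    // For events on the same blob name, the later event carries the
    // lexicographically greater sequencer string, so an ordinal string
    // comparison is enough to decide which state is newer.
    public static bool IsNewer(string incomingSequencer, string lastSeenSequencer) =>
        string.CompareOrdinal(incomingSequencer, lastSeenSequencer) > 0;
}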
https://docs.microsoft.com/en-au/azure/storage/blobs/storage-blob-event-overview
CC-MAIN-2022-05
en
refinedweb
Context Menu Data Binding Basics This article explains the different ways to provide data to a Context Menu component, the properties related to data binding and their results. For details on Value Binding and Data Binding, and the differences between them, see the Value Binding vs Data Binding article. First, review: - The available (bindable) features of a context menu item. - How to match fields in the model with the menu item data bindings. There are two modes of providing data to a menu, and they all use the items' features. Once you are familiar with the current article, choose the data binding more you wish to use: - Hierarchical data - separate collections of items and their child items. - Flat data - a single collection of items with defined parent-child relationships. Context Menu Item Features The menu items provide the following features that you control through the corresponding fields in their data binding: Id- a unique identifier for the item. Required for binding to flat data. ParentId- identifies the parent to whom the item belongs. Required only when binding to flat data. All items with the same ParentIdwill be rendered at the same level. For a root level item, this must be null. There should be at least one root-level item. HasChildren- can hide child items. The menu will fetch its children from the data source based on the Id- ParentIdrelationships (for flat data) or on the presence of the Itemscollection (for hierarchical data). If you set HasChildrento false, child items will not be rendered even if they are present in the data. If there are no child items in the data, an expand icon will not be rendered regardless of its value. Items- the collection of child items that will be rendered under the current item. Required only when binding to hierarchical data. Text- the text that will be shown on the item. ImageUrl/ Icon/ ImageClass- the URL to a raster image, the Telerik icon, or a class for a custom font icon that will be rendered in the item. They have the listed order of precedence in case more than one is present in the data (that is, an ImageUrlwill have the highest importance). Url- the view the item will navigate to by generating a link. Separator- when set to true, the item will be just a line that makes a distinction between its neighbors clearly visible. Thus, you can place logically grouped items between two separators to distinguish them. A separator item does not render text, icons, children or a navigable link. Disabled- You can disable items by setting this field to true. Such items will keep rendering but will not be clickable. Data Bindings The properties of a menu item match directly to a field of the model the menu is bound to. You provide that relationship by providing the name of the field from which the corresponding information is present. To do this, use the properties in the main TelerikMenu tag: - IdField => Id - ParentIdField => ParentId - TextField => Text - IconClassField => IconClass - IconField => Icon - ImageUrlField => ImageUrl - UrlField => Url - HasChildrenField => HasChildren - ItemsField => Items - DisabledField => DisabledField - SeparatorField => Separator There are default values for the field names. If your model names match the defaults, you don't have to define them in the bindings settings. If your model field names match any of the default names, the component will try to use them. For example, a field called Icon will try to produce a Telerik icon out of those values and that may not be what you want. 
If you want to override such behaviors, you can set IconField="someNonExistingField". You can read more about this here. This also applies to other fields too. Another example would be a field called Url - in case you want to perform navigation yourself through templates, you may want to set UrlField="someFakeField" so that the component does not navigate on its own. Default field names for menu item bindings. If you use these, you don't have to specify them in the TelerikMenu tag explicitly. public class ContextMenuItem { public int Id { get; set; } public string Text { get; set; } public int? ParentId { get; set; } public bool HasChildren { get; set; } public string Icon { get; set; } public string Url { get; set; } public bool Disabled { get; set; } public bool Separator { get; set; } } Data bind the context menu to a model with custom field names @* This example shows flat data binding with custom fields, and two separator items around a disabled item at the root level and in the nested menu *@ <div class="menuTarget"> right click this context menu target </div> <TelerikContextMenu Data="@ContextMenuItems" Selector=".menuTarget" ParentIdField="@nameof(ContextMenuItem.SectionId)" IdField="@nameof(ContextMenuItem.Id)" TextField="@nameof(ContextMenuItem.Section)" UrlField="@nameof(ContextMenuItem.Page)" DisabledField="@nameof(ContextMenuItem.IsDisabled)" SeparatorField="@nameof(ContextMenuItem.IsItemSeparator)"> </TelerikContextMenu> @code { public List<ContextMenuItem> ContextMenuItems { get; set; } public class ContextMenuItem { public int Id { get; set; } public int? SectionId { get; set; } public string Section { get; set; } public string Page { get; set; } public bool IsDisabled { get; set; } public bool IsItemSeparator { get; set; } } protected override void OnInitialized() { ContextMenuItems = new List<ContextMenuItem>() { // sample URLs for SPA navigation new ContextMenuItem() { Id = 1, Section = "Overview", Page = "contextmenu/overview" }, new ContextMenuItem() { Id = 2, Section = "Demos", Page = "contextmenu/demos" }, new ContextMenuItem() // separator item { Id = 3, IsItemSeparator = true }, new ContextMenuItem() // disabled item { Id = 4, Section = "Disbled Item", IsDisabled = true }, new ContextMenuItem() { Id = 5, IsItemSeparator = true }, new ContextMenuItem() { Id = 6, Section = "Roadmap" }, // sample URLs for external navigation new ContextMenuItem() { Id = 7, SectionId = 6, Section = "What's new", Page = "" }, new ContextMenuItem() { Id = 9, SectionId = 6, Section = "Release History", Page = "" }, new ContextMenuItem() { Id = 10, IsItemSeparator = true, SectionId = 6 }, new ContextMenuItem() { Id = 11, SectionId = 6, Section = "Roadmap", Page = "" } }; base.OnInitialized(); } } <style> .menuTarget { width: 100px; background: yellow; margin: 50px; } </style> The result from the snippet above
https://docs.telerik.com/blazor-ui/components/contextmenu/data-binding/overview
CC-MAIN-2022-05
en
refinedweb
- 01 Feb, 2008 40 commits - Joe Perches authored Signed-off-by: Joe Perches <joe@perches.com> Acked-by: David Brownell <david-b@pacbell.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> - Yoshihiro Shimoda authored Add support for SuperH SH7722 USB Function. M66592 is similar to SH7722 USBF. It can support SH7722 USBF by changing several M66592 code. Signed-off-by: Yoshihiro Shimoda <shimoda.yoshihiro@renesas.com> Acked-by: David Brownell <david-b@pacbell.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> - Matthias Kaehlcke authored TI 3410/5052 USB Serial: convert semaphore td_open_close_lock to the mutex API. Signed-off-by: Matthias Kaehlcke <matthias.kaehlcke@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> - Kyungmin Park authored The current omap udc dosen't support the DMA mode> Cc: David Brownell <david-b@pacbell.net> Cc: Tony Lindgren <tony@atomide.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> - Adrian Bunk authored - make the needlessly global struct mon_fops_binary static - #if 0 the unused mon_bin_mmap() and related code Signed-off-by: Adrian Bunk <bunk@kernel.org> Cc: Alan Stern <stern@rowland.harvard.edu> Cc: Greg KH <greg@kroah.com> Cc: Pete Zaitcev <zaitcev@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> - Jeff Garzik authored - 'irq' argument is merely used in place of a constant; replace its usage with that constant. Signed-off-by: Jeff Garzik <jgarzik@redhat.com> Cc: David Brownell <david-b@pacbell.net> Cc: Greg KH <greg@kroah.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>> Cc: Thomas Winischhofer <thomas@winischhofer.net> Cc: Greg KH <greg@kroah.com> Cc: "Antonino A. Daplas" <adaplas@pol.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> - Alain Degreffe authored Signed-off-by: Alain Degreffe <eczema@ecze.com> Cc: Alan Cox <alan@lxorguk.ukuu.org.uk> Cc: Oliver Neukum <oliver@neukum.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> - Bartlomiej Zolnierkiewicz authored Now that commit 3794ade5 removed incorrect dependency on CONFIG_IDE we can fix the driver to not include <linux/ide.h>: * add ATA_REG_{ERROR,LCYL,HCYL,STATUS}_OFFSET defines and use them instead of IDE_{ERROR,LCYL,HCYL,STATUS}_OFFSET from <linux/ide.h> * remove no longer needed <linux/ide.h> include * remove incorrect comment added by the last commit: - isd200.c is not the only user of struct hd_driveid besides IDE (see drivers/block/xsysace.c and arch/um/drivers/ubd_kern.c) Cc: Alan Cox <alan@redhat.com> Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> - Oliver Neukum authored Here we go. This patch implements suspend/resume and autosuspend for the CDC ACM driver. Signed-off-by: Oliver Neukum <oneukum@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> - Pete Zaitcev authored These zeroings were taken from usb-storage long time ago. I examined the submission paths and usb_fill_bulk_urb and found them unnecessary. Signed-off-by: Pete Zaitcev <zaitcev@yahoo.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> - Tony Jones authored Convert from class_device to device for drivers/usb/core. 
Signed-off-by: Tony Jones <tonyj@suse.de> Cc: Kay Sievers <kay.sievers@vrfy.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> - Greg Kroah-Hartman authored Some crazy devices in the wild have a vendor id of 0x0000. If we try to add a module alias with this id, we just can't do it due to a check in the file2alias.c file. Change the test to verify that both the vendor and product ids are 0x0000 to show a real "blank" module alias. Note, the module-init-tools package also needs to be changed to properly generate the depmod tables. Cc: Janusz <janumix@poczta.fm> Cc: stable <stable@kernel.org> Cc: Jon Masters <jcm@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> - Jan Andersson authored usbtest did not swap the received status information when checking for a non-zero value and failed to discover halted endpoints on big endian systems. Cc: stable <stable@kernel.org> Signed-off-by: Jan Andersson <jan@gaisler.com> Acked-by: David Brownell <david-b@pacbell.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> - Grant Grundler authored Add "FIX_CAPACITY" entry for HP Photosmart r707 Camera in "Disk" mode. Camera will wedge when /lib/udev/vol_id attempts to access the last sector, EIO gets reported to dmesg, and block device is marked "offline" (it is). Reproduced vol_id behavior with: "dd if=/dev/sda of=/dev/null skip=60800 count=1" Cc: stable <stable@kernel.org> Signed-off-by: Grant Grundler <grundler@parisc-linux.org> Signed-off-by: Phil Dibowitz <phil@ipom.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> - Nate Carlson authored I've got a Dell wireless 5520 card with a different USB ID - specifically, 8136 instead of 8137. Attached a small patch to add support, and the output of an 'ati3'. If we could get this in, that'd be sweet. ;) Thanks! nc@knight:~/tmp/linux-2.6.24-rc8/drivers/usb/serial$ lsusb | grep 8136 Bus 001 Device 005: ID 413c:8136 Dell Computer Corp. nc@knight:~/tmp/linux-source-2.6.23/drivers/usb/serial$ cu -l ttyUSB0 -s 115200 Connected. ati3 Manufacturer: Novatel Wireless Incorporated Model: Expedite EU860D MiniCard Revision: 10.10.04.01-01 [2007-04-11 14:07:19] IMEI: 011186000228043 +GCAP: +CGSM,+DS,+ES From: Nate Carlson <natecars@natecarlson.com> Cc: stable <stable@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> - Oliver Neukum authored this function will run in the context of the scsi error handler thread. It must use GFP_NOIO instead of GFP_KERNEL to avoid a possible deadlock. Cc: stable <stable@kernel.org> Signed-off-by: Oliver Neukum <oneukum@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> - Ed Beroset authored Added support for the Elster Unicom III Optical Probe. The device ID has already been added to the usb.ids file. Cc: stable <stable@kernel.org> Signed-off-by: Ed Beroset <beroset@mindspring.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> - Akira Tsukamoto authored pl2303: add support for RATOC REX-USB60F This patch adds support for RATOC REX-USB60F Serial Adapters, which is widely used in Japan recently. 
Cc: stable <stable@kernel.org> Signed-off-by: Akira Tsukamoto <akirat@rd.scei.sony.co.jp> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> - Daniel Kozák authored Remove entry for Huawei E620 UMTS/HSDPA card (ID: 12d1:1001) in pl2303 driver Option driver is use instead Cc: stable <stable@kernel.org> Signed-off-by: Daniel Kozák <kozzi11@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> - Bruno Redondi authored Added support for Onda H600/Zte MF330 GPRS/UMTS/HSDPA datacard Cc: stable <stable@kernel.org> Signed-off-by: Bruno Redondi <bruno.redondi@altarisoluzione.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> - Franco Lanza authored little patches only to add vendor/device id of ATK_16IC CCD cam for astronomy. From: Franco Lanza <nextime@nexlab.it> Cc: stable <stable@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> - Peter Stark authored I work with a group of people on a free home automation tool called FHEM. Some of the users own more than one USB-serial device by ELV. The ftdi_sio driver has most of the ELV devices disabled by default and needs to be re-enabled every time you get a new kernel. Additionally a new device (EM 1010 PC - enegry monitor) is missing in the list. Currently our users have to follow the instructions we provide at ... However, to some users it is too complicated to compile their own kernel module. We are aware that you can specify one additional device using the vendor/product option of the module. But lot's of users own more than one device. Cc: stable <stable@kernel.org> Signed-off-by: Peter Stark <peter.stark@t-online.de> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> - Manish Katiyar authored This patch corrects the wrong function name mentioned in the comments of usb_unregister_notify function. Signed-off-by: Manish Katiyar <mkatiyar@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> - Kevin Lloyd authored The following improvements were made: - Added new product support: MC5725, AC 880 U, MP 3G (UMTS & CDMA) Cc: stable <stable@kernel.org> Signed-off-by: Kevin Lloyd <linux@sierrawireless.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> - Damien Stuart authored This simply adds the "YC Cable" as a vendor and its pl2303-based USB<->Serial adapter as a product. This particular adapter is sold by Radio Shack. I've done limited testing on a few different systems with no issues. Cc: stable <stable@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> - Jessica L. Blank authored Adds the appropriate vendor and device IDs for the AirCard 881U to sierra.c. (This device is often rebadged by AT&T as the USBConnect 881). Cc: stable <stable@kernel.org> Signed-off-by: Jessica L Blank <j@twu.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> - Piotr Roszatycki authored add support for: 4348:5523 WinChipHead USB->RS 232 adapter with Prolifec PL 2303 chipset [ mingo@elte.hu: merged it and nursed it upstream ] Cc: stable <stable@kernel.org> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> - Craig Shelley authored Six new device IDs for CP2101 driver. Cc: stable <stable@kernel.org> Signed-off-by: Craig Shelley <craig@microtron.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> For fsl_usb2_udc driver, ep0 also has a descriptor. Current code is misleading and contains a logical mistake. Here is the patch to fix it. 
Cc: stable <stable@kernel.org> Signed-off-by: David Brownell <dbrownell@users.sourceforge.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> If we get a data URB back from the hardware after we have put the tty to bed we go kaboom. Fortunately all we need to do is process the URB without trying to ram its contents down the throat of an ex-tty. Cc: stable <stable@kernel.org> Signed-off-by: Alan Cox <alan@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> - git://git.kernel.dk/linux-2.6-block * 'for-linus' of git://git.kernel.dk/linux-2.6-block: block: kill swap_io_context() as-iosched: fix inconsistent ioc->lock context ide-cd: fix leftover data BUG block: make elevator lib checkpatch compliant cfq-iosched: make checkpatch compliant block: make core bits checkpatch compliant block: new end request handling interface should take unsigned byte counts unexport add_disk_randomness block/sunvdc.c:print_version() must be __devinit splice: always updated atime in direct splice - Jens Axboe authored> - Randy Dunlap authored Fix docbook fatal error (files were renamed): docproc: linux-2.6.24-git9/arch/ppc/kernel/rio.c: No such file or directory Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> - Steven Rostedt authored Doing a make randconfig I came across this error in the Makefile. This patch makes a directory out of arch/x86/mach-default for CONFIG_X86_RDC321X Signed-off-by: Steven Rostedt <srostedt@redhat.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> - git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-2.6 * git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-2.6: [SPARC64]: Fix inconsistent .section usage in lib/ [SPARC/SPARC64]: Fix usage of .section .sched.text in assembler code. - git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-2.6 * git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-2.6: (173 commits) [NETNS]: Lookup in FIB semantic hashes taking into account the namespace. [NETNS]: Add a namespace mark to fib_info. [IPV4]: fib_sync_down rework. [NETNS]: Process interface address manipulation routines in the namespace. [IPV4]: Small style cleanup of the error path in rtm_to_ifaddr. [IPV4]: Fix memory leak on error path during FIB initialization. [NETFILTER]: Ipv6-related xt_hashlimit compilation fix. [NET_SCHED]: Add flow classifier [NET_SCHED]: sch_sfq: make internal queues visible as classes [NET_SCHED]: sch_sfq: add support for external classifiers [NET_SCHED]: Constify struct tcf_ext_map [BLUETOOTH]: Fix bugs in previous conn add/del workqueue changes. [TCP]: Unexport sysctl_tcp_tso_win_divisor [IPV4]: Make struct ipv4_devconf static. [TR] net/802/tr.c: sysctl_tr_rif_timeout static [XFRM]: Fix statistics. [XFRM]: Remove unused exports. [PKT_SCHED] sch_teql.c: Duplicate IFF_BROADCAST in FMASK, remove 2nd. [BNX2]: Fix ASYM PAUSE advertisement for remote PHY. [IPV4] route cache: Introduce rt_genid for smooth cache invalidation ... - Olof Johansson authored [POWERPC] pasemi: Fix thinko in dma_direct_ops setup The first patch will just fall through and still set dma_data to a bad value, make it return directly instead. Signed-off-by: Olof Johansson <olof@lixom.net> Acked-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> - Greg Ungerer authored Remove all the dead timer interrupt checking functions for the ColdFire CPU "timers" hardware that are not used after switching to GENERIC_TIME. 
Signed-off-by: Greg Ungerer <gerg@uclinux.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
https://gitlab.flux.utah.edu/xcap/xcap-capability-linux/-/commits/fec8de3aada6338a4069ee1df4726dd7bbbdf476
CC-MAIN-2022-05
en
refinedweb
On Wed, 2013-05-08 at 03:33 +0100, Julien Grall wrote: > +/** > + * Dump device tree message with printk > + * TODO: Find another way to switch between early_printk and printk > + * int the device tree code > + */ > +void __init dt_switch_to_printk(void); The issue here is that there is code which wants to log which can be called either via dt_unflatten_host_device_tree or later on? There seems to be at least some calls to dt_dprintk which are only called via dt_unfla..., I think these can and should just use early_printk (or a macro to make them a debug thing). Likewise if there are functions which are only called later then they should just use printk direct (or a macro..) Which only leaves ones which are both? How many are these? I'm inclined towards suggesting that if they are debug prints which are disabled by default and require a recompile to enable then the person doing the debugging can select whether they care about early or late messages by #define-ing DEBUG or EARLY_DEBUG or both as required. > +/** > + * Host device tree > + * DO NOT modify it! Can it be const? > + */ > +extern struct dt_device_node *dt_host; > + > +#define dt_node_cmp(s1, s2) strcmp((s1), (s2)) > +#define dt_compat_cmp(s1, s2, l) strnicmp((s1), (s2), l) > + > +#define for_each_property_of_node(dn, pp) \ > + for ( pp = dn->properties; pp != NULL; pp = pp->next ) > + > +#define for_each_device_node(dt, dn) \ > + for ( dn = dt; dn != NULL; dn = dn->allnext ) > + > +static inline const char *dt_node_full_name(const struct dt_device_node *np) > +{ > + return (np && np->full_name) ? np->full_name : "<no-node>"; > +} > + > +/** > + * Find a property with a given name for a given node > + * and return the value. > + */ > +const void *dt_get_property(const struct dt_device_node *np, > + const char *name, u32 *lenp); > + > +/** > + * dt_find_node_by_path - Find a node matching a full DT path > + * @path: The full path to match > + * > + * Returns a node pointer. > + */ > +struct dt_device_node *dt_find_node_by_path(const char *path); > #endif > -- >.
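(To make the suggestion concrete, here is a sketch of the macro arrangement being discussed. It is purely illustrative and not code from the patch: the EARLY_DEBUG/DEBUG option names are invented for this example, and XENLOG_DEBUG is assumed to be the usual log-level prefix.)

/* dt_dprintk() resolves to early_printk() or printk() depending on which
 * phase the person debugging cares about; it compiles away by default. */
#ifdef EARLY_DEBUG
#define dt_dprintk(fmt, args...) early_printk(fmt, ## args)
#elif defined(DEBUG)
#define dt_dprintk(fmt, args...) printk(XENLOG_DEBUG fmt, ## args)
#else
#define dt_dprintk(fmt, args...) do {} while (0)
#endif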
https://old-list-archives.xen.org/archives/html/xen-devel/2013-05/msg00851.html
CC-MAIN-2022-05
en
refinedweb
Integrating SyncFusion UI with ABP Framework Blazor UI Hi, in this step by step article, I will show you how to integrate Syncfusion, a blazor UI components into ABP Framework-based applications. (A screenshot from the example application developed in this article) Create the Project First thing is to the development, we will create a new solution named SyncfusionSample(or whatever you want). We will create a new startup template with EF Core as a database provider and Blazor for UI framework by using ABP CLI: md SyncfusionSample cd SyncfusionSample After we have navigated inside of SyncfusionSample directory, we can create a new project. Note that I am creating Blazor Server ( blazor-server) project. If you want to create a regular Blazor WebAssembly project just use blazor keyword instead. abp new SyncfusionSample -u blazor-server and dotnet restore - Our project boilerplate will be ready after the download is finished. Then, we can open the solution in the Visual Studio (or any other IDE) and run the SyncfusionSample.DbMigratorto create the database and seed initial data (which creates the admin user, admin role, permissions etc.) - After database and initial data created, - Run the SyncfusionSample.Blazorto see our UI working properly. Default login credentials for admin: username is admin and password is 1q2w3E* Install Syncfusion You can follow this documentation to install Syncfusion packages into your computer. Adding Syncfusion NuGet Packages SyncfusionSample.Blazor as the default project and then install NuGet packages. Install-Package Syncfusion.Blazor.Grid Register Syncfusion Resources Add the following line to the HEAD section of the _Host.cshtmlfile within the SyncfusionSample.Blazorproject: <head> <!--...--> <link href="_content/Syncfusion.Blazor.Themes/fabric.css" rel="stylesheet" /> </head> In the SyncfusionSampleBlazorModuleclass, call the AddSyncfusionBlazor()method from your project's ConfigureServices()method: public override void ConfigureServices(ServiceConfigurationContext context) { var hostingEnvironment = context.Services.GetHostingEnvironment(); var configuration = context.Services.GetConfiguration(); // ... context.Services.AddSyncfusionBlazor(); Syncfusion.Licensing.SyncfusionLicenseProvider.RegisterLicense("YOUR LICENSE KEY"); } To get the LICENSE KEYyou can login into your Syncfusion account and request the key. Trial keys are also available. Register the SyncfusionSample.Blazor namespace(s) in the _Imports.razorfile: @using Syncfusion.Blazor @using Syncfusion.Blazor.Buttons @using Syncfusion.Blazor.Inputs @using Syncfusion.Blazor.Calendars @using Syncfusion.Blazor.Popups @using Syncfusion.Blazor.Grids The Sample Application We have created a sample application with SfGrid example. The Source Code You can download the source code from here. The related files for this example are marked in the following screenshots. Conclusion In this article, I've explained how to use Syncfusion components in your application. ABP Framework is designed so that it can work with any UI library/framework. QiMing Tan 5/3/2021 2:00:02 AM In blazor webassembly or angular project,How to implement server side filtering, paging and sorting in SfGird; Instead of returning all the data. Mladen Macanovic 5/3/2021 7:08:50 AM Hi, in this guide we only provided a startup solution to integrate Syncfusion into the ABP framework. For advanced scenarios, you should contact Syncfusion support or go through their documentation as that is the best way to learn. 
https://community.abp.io/articles/using-syncfusion-components-with-the-abp-framework-5ccvi8kc
CC-MAIN-2022-05
en
refinedweb
This post is the second of a series; click here for the previous post. Naming and Scoping Naming Variables and Tensors As we discussed in Part 1, every time you call tf.get_variable(), you need to assign the variable a new, unique name. Actually, it goes deeper than that: every tensor in the graph gets a unique name too. The name can be accessed explicitly with the .name property of tensors, operations, and variables. For the vast majority of cases, the name will be created automatically for you; for example, a constant node will have the name Const, and as you create more of them, they will become Const_1, Const_2, etc.1 You can also explicitly set the name of a node via the name= property, and the enumerative suffix will still be added automatically: Code: import tensorflow as tf a = tf.constant(0.) b = tf.constant(1.) c = tf.constant(2., name="cool_const") d = tf.constant(3., name="cool_const") print a.name, b.name, c.name, d.name Output Const:0 Const_1:0 cool_const:0 cool_const_1:0 Explicitly naming nodes is nonessential, but can be very useful when debugging. Oftentimes, when your Tensorflow code crashes, the error trace will refer to a specific operation. If you have many operations of the same type, it can be tough to figure out which one is problematic. By explicitly naming each of your nodes, you can get much more informative error traces, and identify the issue more quickly. Using Scopes As your graph gets more complex, it becomes difficult to name everything by hand. Tensorflow provides the tf.variable_scope object, which makes it easier to organize your graphs by subdividing them into smaller chunks. By simply wrapping a segment of your graph creation code in a with tf.variable_scope(scope_name): statement, all nodes created will have their names automatically prefixed with the scope_name string. Additionally, these scopes stack; creating a scope within another will simply chain the prefixes together, delimited by a forward-slash. Code: import tensorflow as tf a = tf.constant(0.) b = tf.constant(1.) with tf.variable_scope("first_scope"): c = a + b d = tf.constant(2., name="cool_const") coef1 = tf.get_variable("coef", [], initializer=tf.constant_initializer(2.)) with tf.variable_scope("second_scope"): e = coef1 * d coef2 = tf.get_variable("coef", [], initializer=tf.constant_initializer(3.)) f = tf.constant(1.) g = coef2 * f print a.name, b.name print c.name, d.name print e.name, f.name, g.name print coef1.name print coef2.name Output Const:0 Const_1:0 first_scope/add:0 first_scope/cool_const:0 first_scope/second_scope/mul:0 first_scope/second_scope/Const:0 first_scope/second_scope/mul_1:0 first_scope/coef:0 first_scope/second_scope/coef:0 Notice that we were able to create two variables with the same name - coef - without any issues! This is because the scoping transformed the names into first_scope/coef:0 and first_scope/second_scope/coef:0, which are distinct. Saving and Loading At its core, a trained neural network consists of two essential components: - The weights of the network, which have been learned to optimize for some task - The network graph, which specifies how to actually use the weights to get results Tensorflow separates these two components, but it’s clear that they need to be very tightly paired. Weights are useless without a graph structure describing how to use them, and a graph with random weights is no good either. In fact, even something as small as swapping two weight matrices is likely to totally break your model. 
This often leads to frustration among beginner Tensorflow users; using a pre-trained model as a component of a neural network is a great way to speed up training, but can break things in a myriad of ways. Saving A Model When working with only a single model, Tensorflow’s built-in tools for saving and loading are straightforward to use: simply create a tf.train.Saver(). Similarly to the tf.train.Optimizer family, a tf.train.Saver is not itself a node, but instead a higher-level class that performs useful functions on top of pre-existing graphs. And, as you may have anticipated, the ‘useful function’ of a tf.train.Saver is saving and loading the model. Let’s see it in action! Code: import tensorflow as tf a = tf.get_variable('a', []) b = tf.get_variable('b', []) init = tf.global_variables_initializer() saver = tf.train.Saver() sess = tf.Session() sess.run(init) saver.save(sess, './tftcp.model') Output Four new files: checkpoint tftcp.model.data-00000-of-00001 tftcp.model.index tftcp.model.meta There’s a lot of stuff to break down here. First of all: Why does it output four files, when we only saved one model? The information needed to recreate the model is divided among them. If you want to copy or back up a model, make sure you bring all three of the files (the three prefixed by your filename). Here’s a quick description of each: tftcp.model.data-00000-of-00001contains the weights of your model (the first bullet point from above). It’s most likely the largest file here. tftcp.model.metais the network structure of your model (the second bullet point from above). It contains all the information needed to re-create your graph. tftcp.model.indexis an indexing structure linking the first two things. It says “where in the data file do I find the parameters corresponding to this node?” checkpointis not actually needed to reconstruct your model, but if you save multiple versions of your model throughout a training run, it keeps track of everything. Secondly, why did I go through all the trouble of creating a tf.Session and tf.global_variables_initializer for this example? Well, if we’re going to save a model, we need to have something to save. Recall that computations live in the graph, but values live in the session. The tf.train.Saver can access the structure of the network through a global pointer to the graph. But when we go to save the values of the variables (i.e. the weights of the network), we need to access a tf.Session to see what those values are; that’s why sess is passed in as the first argument of the save function. Additionally, attempting to save uninitialized variables will throw an error, because attempting to access the value of an uninitialized variable always throws an error. So, we needed both a session and an initializer (or equivalent, e.g. tf.assign). Now that we’ve saved our model, let’s load it back in. The first step is to recreate the variables: we want variables with all the same names, shapes, and dtypes as we had when we saved it. The second step is to create a tf.train.Saver just as before, and call the restore function. Code: import tensorflow as tf a = tf.get_variable('a', []) b = tf.get_variable('b', []) saver = tf.train.Saver() sess = tf.Session() saver.restore(sess, './tftcp.model') sess.run([a,b]) Output [1.3106428, 0.6413864] Note that we didn’t need to initialize a or b before running them! This is because the restore operation moves the values from our files into the session’s variables. 
Since the session no longer contains any null-valued variables, initialization is no longer needed. (This can backfire if we aren’t careful: running an init after a restore will override the loaded values with randomly-initialized ones.) Choosing Your Variables When a tf.train.Saver is initialized, it looks at the current graph and gets the list of variables; this is permanently stored as the list of variables that that saver “cares about”. We can inspect it with the ._var_list property: Code: import tensorflow as tf a = tf.get_variable('a', []) b = tf.get_variable('b', []) saver = tf.train.Saver() c = tf.get_variable('c', []) print saver._var_list Output [<tf.Variable 'a:0' shape=() dtype=float32_ref>, <tf.Variable 'b:0' shape=() dtype=float32_ref>] Since c wasn’t around at the time of our saver’s creation, it does not get to be a part of the fun. So in general, make sure that you already have all your variables created before creating a saver. Of course, there are also some specific circumstances where you may actually want to only save a subset of your variables! tf.train.Saver lets you pass the var_list when you create it to specify which subset of available variables you want it to keep track of. Code: import tensorflow as tf a = tf.get_variable('a', []) b = tf.get_variable('b', []) c = tf.get_variable('c', []) saver = tf.train.Saver(var_list=[a,b]) print saver._var_list Output [<tf.Variable 'a:0' shape=() dtype=float32_ref>, <tf.Variable 'b:0' shape=() dtype=float32_ref>] The examples above cover the ‘perfect sphere in frictionless vacuum’ scenario of model-loading. As long as you are saving and loading your own models, using your own code, without changing things in between, saving and loading is a breeze. But in many cases, things are not so clean. And in those cases, we need to get a little fancier. Let’s take a look at a couple of scenarios to illustrate the issues. First, something that works without a problem. What if we want to save a whole model, but we only want to load part of it? (In the following code example, I run the two scripts in order.) Code: import tensorflow as tf a = tf.get_variable('a', []) b = tf.get_variable('b', []) init = tf.global_variables_initializer() saver = tf.train.Saver() sess = tf.Session() sess.run(init) saver.save(sess, './tftcp.model') import tensorflow as tf a = tf.get_variable('a', []) init = tf.global_variables_initializer() saver = tf.train.Saver() sess = tf.Session() sess.run(init) saver.restore(sess, './tftcp.model') sess.run(a) Output 1.1700551 Good, easy enough! And yet, a failure case emerges when we have the reverse scenario: we want to load one model as a component of a larger model. Code: import tensorflow as tf a = tf.get_variable('a', []) init = tf.global_variables_initializer() saver = tf.train.Saver() sess = tf.Session() sess.run(init) saver.save(sess, './tftcp.model') import tensorflow as tf a = tf.get_variable('a', []) d = tf.get_variable('d', []) init = tf.global_variables_initializer() saver = tf.train.Saver() sess = tf.Session() sess.run(init) saver.restore(sess, './tftcp.model') Output Key d not found in checkpoint [[ = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]] We just wanted to load a, while ignoring the new variable d. And yet, we got an error, complaining that d was not present in the checkpoint! 
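One way out of this second scenario follows directly from the var_list parameter introduced above: build the saver over only the variables that actually exist in the checkpoint, and initialize the rest separately. This is my own sketch, not code from the original post; it assumes the ./tftcp.model checkpoint saved earlier.

Code:

import tensorflow as tf
a = tf.get_variable('a', [])
d = tf.get_variable('d', [])
saver = tf.train.Saver(var_list=[a])    # only 'a' is tracked by this saver
init_d = tf.variables_initializer([d])  # 'd' is initialized the normal way
sess = tf.Session()
sess.run(init_d)
saver.restore(sess, './tftcp.model')
sess.run([a, d])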
A third scenario is where you want to load one model’s parameters into a different model’s computation graph. This throws an error too, for obvious reasons: Tensorflow cannot possibly know where to put all those parameters you just loaded. Luckily, there’s a way to give it a hint. Remember var_list from one section-header ago? Well, it turns out to be a bit of a misnomer. A better name might be “var_list_or_dictionary_mapping_names_to_vars”, but that’s a mouthful, so I can sort of see why they stuck with the first bit. Saving models is one of the key reasons that Tensorflow mandates globally-unique variable names. In a saved-model-file, each saved variable’s name is associated with its shape and value. Loading it into a new computational graph is as easy as mapping the original-names of the variables you want to load to variables in your current model. Here’s an example: Code: import tensorflow as tf a = tf.get_variable('a', []) init = tf.global_variables_initializer() saver = tf.train.Saver() sess = tf.Session() sess.run(init) saver.save(sess, './tftcp.model') import tensorflow as tf d = tf.get_variable('d', []) init = tf.global_variables_initializer() saver = tf.train.Saver(var_list={'a': d}) sess = tf.Session() sess.run(init) saver.restore(sess, './tftcp.model') sess.run(d) Output -0.9303965 This is the key mechanism by which you can combine models that do not have the exact same computational graph. For example, perhaps you got a pre-trained language model off of the internet, and want to re-use the word embeddings. Or, perhaps you changed the parameterization of your model in between training runs, and you want this new version to pick up where the old one left off; you don’t want to have to re-train the whole thing from scratch. In both of these cases, you would simply need to hand-make a dictionary mapping from the old variable names to the new variables. A word of caution: it’s very important to know exactly how the parameters you are loading are meant to be used. If possible, you should use the exact code the original authors used to build their model, to ensure that that component of your computational graph is identical to how it looked during training. If you need to re-implement, keep in mind that basically any change, no matter how minor, is likely to severely damage the performance of your pre-trained net. Always benchmark your reimplementation against the original! Inspecting Models If the model you want to load came from the internet - or from yourself, >2 months ago - there’s a good chance you won’t know how the original variables were named. To inspect saved models, use these tools, which come from the official Tensorflow repository. For example: Code: import tensorflow as tf a = tf.get_variable('a', []) b = tf.get_variable('b', [10,20]) c = tf.get_variable('c', []) init = tf.global_variables_initializer() saver = tf.train.Saver() sess = tf.Session() sess.run(init) saver.save(sess, './tftcp.model') print tf.contrib.framework.list_variables('./tftcp.model') Output [('a', []), ('b', [10, 20]), ('c', [])] With a little effort and a lot of head-scratching, it’s usually possible to use these tools (in conjunction with the original codebase) to find the names of the variables you want. Conclusion Hopefully this post helped clear up the basics behind saving and loading Tensorflow models. 
There are a few other advanced tricks, like automatic checkpointing and saving/restoring meta-graphs, that I may touch on in a future post; but in my experience, those use-cases are rare, especially for beginners. As always, please let me know in the comments or via email if I got anything wrong, or there is anything important I missed. Thanks for reading! There will also be a suffix :output_numadded to the tensor names. For now, that’s always :0, since we are only using operations with a single output. See this StackOverflow question for more info. Thanks Su Tang for pointing this out! ↩
https://jacobbuckman.com/2018-09-17-tensorflow-the-confusing-parts-2/
CC-MAIN-2022-21
en
refinedweb
cgroups — Linux control groups Control groups,−v2.txt.) Because of the problems with the initial cgroups implementation (cgroups version 1), starting in Linux 3.10, work began on a new, orthogonal implementation to remedy these problems. Initially marked experimental, and hidden behind the −o _. Under cgroups v1, each controller may be mounted against a separate cgroup filesystem that provides its own hierarchical organization of the processes on the system. It is also possible to.). −t cgroup −o cpu none /sys/fs/cgroup/cpu It is possible to comount multiple controllers against the same hierarchy. For example, here the cpu and cpuacct controllers are comounted against a single hierarchy: mount −t cgroup −o −t cgroup −o all cgroup /sys/fs/cgroup (One can achieve the same result by omitting −o. A mounted cgroup filesystem can be unmounted using the umount(8) command,.−bwc.txt. cpuacct(since Linux 2.6.24; )) CONFIG_CGROUP_CPUACCT This provides accounting for CPU usage by groups of processes. Further information can be found in the kernel source file Documentation/cgroup−v−v1/cpusets.txt. memory(since Linux 2.6.25; )) CONFIG_MEMCG The memory controller supports reporting and limiting of process memory, kernel memory, and swap used by cgroups. Further information can be found in the kernel source file Documentation/cgroup−v1/memory.txt. devices(since Linux 2.6.26; )) CONFIG_CGROUP_DEVICE This supports controlling which processes may create (mknod) devices as well as open them for reading or writing. The policies may be specified as allow-lists and deny-lists.. rdma(since Linux 4.11; )) CONFIG_CGROUP_RDMA The RDMA controller permits limiting the use of RDMA/IB-specific resources per cgroup. Further information can be found in the kernel source file Documentation/cgroup-v1/rdma.txt. it with a process. The default value of the release_agent file is empty, meaning that no release agent is invoked. The content of the release_agent file can also be specified via a mount option when the cgroup filesystem is mounted: mount -o release_agent=pathname .... specifying cgroup_no_v1=named.. Cgroups v2 provides a unified hierarchy against which all controllers are mounted. "Internal" processes are not permitted. With the exception of the root cgroup, processes may reside only in leaf nodes (cgroups that do not themselves contain child cgroups). The details are somewhat more subtle than this, and are described below. Active cgroups must be specified via the files cgroup.controllers and cgroup.subtree_control. The tasks file has been removed. In addition, the cgroup.clone_children file that is employed by the cpuset controller has been removed.). In cgroups v1, the ability to mount different controllers against different hierarchies was intended to allow great flexibility for application design. In practice, though, the flexibility turned out to. disabled. To do this, specify the cgroup_no_v1=list option on the kernel boot command line; list is a comma-separated list of the names of the controllers to disable, or the word all to disable all v1 controllers. (This situation is correctly handled by systemd(1), which falls back to operating without the specified controllers.) Note that on many modern systems, systemd(1) automatically mounts the cgroup2 filesystem at /sys/fs/cgroup/unified during the boot process. The following controllers, documented in the kernel source file Documentation/cgroup-v2.txt, are supported in cgroups version 2: io(since Linux 4.5) This is the successor of the version 1 blkio controller. 
memory(since Linux 4.5) This is the successor of the version 1 memory controller. pids(since Linux 4.5) This is the same as the version 1 pids controller. perf_event(since Linux 4.11) This is the same as the version 1 perf_event controller. rdma(since Linux 4.11) This is the same as the version 1 rdma controller. cpu(since Linux 4.15) This is the successor to the version 1 cpu and cpuacct controllers. '−' controllers that are exercised in the child cgroups. When a controller (e.g., pids) is present in the cgroup.subtree_control file of a parent cgroup, then the corresponding controller-interface files (e.g., pids.max) are automatically created in the children of that cgroup and can be used to exert resource control in the child cgroups. Cgroups v2 enforces a so-called "no internal processes" rule. Roughly speaking, this rule means that,.. With cgroups v2, a new mechanism is provided to obtain notification about when a cgroup becomes empty. The cgroups v1 release_agent and notify_on_release files are removed, and replaced by a new, more general-purpose file, cgroup.events. This read-only cause the bits POLLPRI and POLLERR to be returned in the revents field. The cgroups v2 release-notification mechanism provided by the populated field of the cgroup.events underneath descendant cgroups. A value of 0 in this file means that no descendant cgroups can be created. An attempt to create a descendant namespace: delegate a subhierarchy under the existing delegated hierarchy. (For example, the delegated hierarchy might be associated with an unprivileged container run by cecilia.) Even if a cgroup namespace was employed, because both hierarchies are owned by the unprivileged user cecilia, the following illegitimate actions could be performed: A process in the inferior hierarchy could change the resource controller possibilities. The nsdelegate mount option only has an effect when performed in the initial mount namespace; in other mount namespaces, the option is silently ignored. cgroup_no_v1=all systemd.legacy_systemd_cgroup_controller These options cause the kernel to boot with the cgroups v1 controllers disabled (meaning that the controllers are available in the v2 hierarchy), and tells systemd(1) not to mount and use the cgroup v2 hierarchy, so that the v2 hierarchy can be manually mounted with the desired options after boot-up.arch delegatee) matches the real user ID or the saved set-user-ID of the target process. Before Linux 4.11, this requirement also applied in cgroups v2 (This was a historical requirement inherited from cgroups v1 that was later deemed unnecessary, since the other rules suffice for containment in cgroups v2.). This is a domain cgroup that serves as the root of a threaded subtree. This cgroup type is also known as "threaded root".. There are two pathways that lead to the creation of a threaded subtree. The first pathway proceeds as follows: We write the string "threaded" to the cgroup.type file of a cgroup y/z that currently has the type domain. This has the following effects: - The type of the cgroup y/zbecomes threaded. - The type of the parent cgroup, y, becomes domain threaded. The parent cgroup is the root of a threaded subtree (also known as the "threaded root"). - All other cgroups under ythat were not already of type threaded(because they were inside already existing threaded subtrees under the new threaded root) are converted to type domain invalid. 
Any subsequently created cgroups under ywill also have the type domain invalid.: In an existing cgroup, z, that currently has the type domain, we (1) enable one or more threaded controllers and (2) make a process a member of z. (These two steps can be done in either order.) This has the following consequences: - The type of zbecomes domain threaded. - All of the descendant cgroups of xthat were not already of type threadedare converted to type domain invalid.: - domainor domain threaded: start the creation of a threaded subtree (whose root is the parent of this cgroup) via the first of the pathways described above; - domain invalid: convert this cgroup (which is inside a threaded subtree) to a usable (i.e., threaded) state; - name of the controller. - The unique ID of the cgroup hierarchy on which this controller is mounted. If multiple cgroups v1 controllers are bound to the same hierarchy, then each will show the same hierarchy ID in this field. The value in this field will be 0 if: - the controller is not mounted on a cgroups v1 hierarchy; - the controller is bound to the cgroups v2 single unified hierarchy; or - the controller is disabled (see below). - The number of control groups in this hierarchy using this controller. - This field contains the value 1 if this controller is enabled, or 0 if it has been disabled (via the cgroup_disablekernel: hierarchy-ID:controller-list:cgroup-path For example: 5:cpuacct,cpu,cpuset:/daemons The colon-separated fields are, from left to right: - For cgroups version 1 hierarchies, this field contains a unique hierarchy ID number that can be matched to a hierarchy ID in /proc/cgroups. For the cgroups version 2 hierarchy, this field contains the value 0. - For cgroups version 1 hierarchies, this field contains a comma-separated list of the controllers bound to the hierarchy. For the cgroups version 2 hierarchy, this field is empty. - This field contains the pathname of the control group in the hierarchy to which the process belongs. This pathname is relative to the mount point of the hierarchy. applications supports and has enabled. Features are listed one per line: $ cat /sys/kernel/cgroup/features nsdelegate The entries that can appear in this file are: - nsdelegate(since Linux 4.15) - The kernel supports the nsdelegatemount option. prlimit(1), systemd(1), systemd-cgls(1), systemd-cgtop(1), clone(2), ioprio_set(2), perf_event_open(2), setrlimit(2), cgroup_namespaces(7), cpuset(7), namespaces(7), sched(7), user_namespaces(7)
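A short worked example, not part of the original manual page, tying the cgroups v2 pieces described above together: mounting the unified hierarchy, enabling a controller for child cgroups via cgroup.subtree_control, and applying a pids limit. The mount point and the chosen limit are arbitrary.

mount -t cgroup2 none /sys/fs/cgroup/unified   # many distributions already do this at boot
cd /sys/fs/cgroup/unified
cat cgroup.controllers                         # controllers available at the root
echo "+pids" > cgroup.subtree_control          # make the pids controller available to children
mkdir myjob
echo 50 > myjob/pids.max                       # limit the child cgroup to 50 processes/threads
echo $$ > myjob/cgroup.procs                   # move the current shell into the new cgroup
cat myjob/cgroup.events                        # shows "populated 1" while the shell is a member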
https://manpages.net/htmlman7/cgroups.7.html
CC-MAIN-2022-21
en
refinedweb
Application Services Application Services define the APIs that a CAP application exposes to its clients, for example through OData. This section describes how to add business logic to these services, by extending CRUD events and implementing actions and functions. Content Handling CRUD Events Application Services provide a CQN query API. When running a CQN query on an Application Service CRUD events are triggered. The processing of these events is usually extended when adding business logic to the Application Service. The following table lists the static event name constants that exist for these event names on the CqnService interface and their corresponding event-specific Event Context interfaces. These constants and interfaces should be used, when registering and implementing event handlers: The following example shows how these constants and Event Context interfaces can be leveraged, when adding an event handler to be run when new books are created: @Before(event = CqnService.EVENT_CREATE, entity = Books_.CDS_NAME) public void createBooks(CdsCreateEventContext context, List<Books> books) { } To learn more about the entity data argument List<Books> books of the event handler method, have a look at this section. OData Requests Application Services are used by OData protocol adapters to expose the Application Service’s API as an OData API on a path with the following pattern: Learn more about how OData URLs are configured. The OData protocol adapters use the CQN query APIs to retrieve a response for the requests they receive. They transform OData-specific requests into a CQN query, which is run on the Application Service. The following table shows which CRUD events are triggered by which kind of OData request: In CAP Java versions < 1.9.0, the UPSERTevent was used to implement OData V4 PUTrequests. This has been changed, as the semantics of UPSERTdidn’t really match the semantics of the OData V4 PUT. Deeply Structured Documents Events on deeply structured documents, are only triggered on the target entity of the CRUD event’s CQN statement. This means, that if a document is created or updated, events aren’t automatically triggered on composition entities. Also when reading a deep document, leveraging expand capabilities, READ events aren’t triggered on the expanded entities. The same applies to a deletion of a document, which doesn’t automatically trigger DELETE events on composition entities to which the delete is cascaded. When implementing validation logic, this can be handled like shown in the following example: @Before(event = CqnService.EVENT_CREATE, entity = Orders_.CDS_NAME) public void validateOrders(List<Orders> orders) { for(Orders order : orders) { if (order.getItems() != null) { validateItems(order.getItems()); } } } @Before(event = CqnService.EVENT_CREATE, entity = OrderItems_.CDS_NAME) public void validateItems(List<OrderItems> items) { for(OrderItems item : items) { if (item.getQuantity() <= 0) { throw new ServiceException(ErrorStatuses.BAD_REQUEST, "Invalid quantity"); } } }. Result Handling @On handlers for READ, UPDATE, and DELETE events must set a result, either by returning the result, or using the event context’s setResult method. READ Result READ event handlers must return the data that was read, either as an Iterable<Map> or Result object created via the ResultBuilder. For queries with inline count, a Result object must be used as the inline count is obtained from the Result interface. 
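To make the READ contract above concrete, here is a minimal sketch of a custom @On handler that produces its result in code and sets it via the ResultBuilder. It reuses the Books entity from the earlier example; the handler class scaffolding (an EventHandler component annotated with @ServiceName) and imports are omitted, and the hard-coded row is obviously just an illustration.

@On(event = CqnService.EVENT_READ, entity = Books_.CDS_NAME)
public void readBooks(CdsReadEventContext context) {
    // Produce the rows from a custom source instead of the database.
    Map<String, Object> book = new HashMap<>();
    book.put("ID", UUID.randomUUID().toString());
    book.put("title", "Hand-crafted result");
    // Set the result (per the Result Handling rules above) using the ResultBuilder.
    context.setResult(ResultBuilder.selectedRows(Arrays.asList(book)).result());
}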
UPDATE and DELETE Results UPDATE and DELETE statements have an optional filter condition (where clause) which determines the entities to be updated/deleted. Handlers must return a Result object with the number of entities that match this filter condition and have been updated/deleted. Use the ResultBuilder to create the Result object. ❗ Warning If an event handler for an UPDATE or DELETE event does not specify a result the number of updated/deleted rows is automatically set to 0 and the OData protocol adapter will translate this into an HTTP response with status code 404 (Not Found). INSERT and UPSERT Results Event handlers for INSERT and UPSERT events can return a result representing the data that was inserted/upserted. A failed insert is indicated by throwing an exception, for example, a UniqueConstraintException or a CdsServiceException with error status CONFLICT. Result Builder When implementing custom @On handlers for CRUD events, a Result object can be constructed with the ResultBuilder. The semantics of the constructed Result differ between the CRUD events. Clients of Application Services, for example the OData protocol adapters, rely on these specific semantics for each event. It is therefore important that custom ON handlers fulfill these semantics as well, when returning or setting a Result using the setResult() method of the respective event context. The following table lists the events and the expected Result: Use the selectedRows or insertedRows method for query and insert results, with the data given as Map or list of maps: import static java.util.Arrays.asList; import static com.sap.cds.ResultBuilder.selectedRows; Map<String, Object> row = new HashMap<>(); row.put("title", "Capire"); Result res = selectedRows(asList(row)).result(); context.setResult(res); // CdsReadEventContext For query results, the inline count can be set through the inlineCount method: Result r = selectedRows(asList(row)).inlineCount(inlineCount).result(); For update results, use the updatedRows method with the update count and the update data: import static com.sap.cds.ResultBuilder.updatedRows; int updateCount = 1; // number of updated rows Map<String, Object> data = new HashMap<>(); data.put("title", "CAP Java"); Result r = updatedRows(updateCount, data).result(); For delete results, use the deletedRows method and provide the number of deleted rows: import static com.sap.cds.ResultBuilder.deletedRows; int deleteCount = 7; Result r = deletedRows(deleteCount).result(); Actions and Functions Actions and Functions enhance the API provided by an Application Service with custom operations. They have well-defined input parameters and a return value, that are modelled in CDS. Actions or functions are handled - just like CRUD events - using event handlers. To trigger an action or function on an Application Service an event with the action’s or function’s name is emitted on it. Actions and functions are therefore implemented through event handlers. For each action or function an event handler of the On phase should be defined, which implements the business logic and provides the return value of the operation, if applicable. The event handler needs to take care of completing the event processing. The CAP Java SDK Maven Plugin is capable of generating event-specific Event Context interfaces for the action or function, based on its CDS model definition. These Event Context interfaces give direct access to the parameters and the return value of the action or function. 
If an action or function is bound to an entity, the entity needs to be specified while registering the event handler. For bound actions or functions the Event Context interface provides a CqnSelect statement, which targets the entity the action or function was triggered on. The following example shows how all of this plays together to implement an event handler for an action: CDS Model: service CatalogService { entity Books { key ID: UUID; title: String; } actions { action review(stars: Integer) returns Reviews; }; entity Reviews { book : Association to Books; stars: Integer; } } Event-specific Event Context, generated by the CAP Java SDK Maven Plugin: on the service. Best Practices and FAQs This section summarizes some best practices for implementing event handlers and provides answers to frequently asked questions. On which service should I register my event handler? Event handlers implementing business or domain logic should be registered on an Application Service. When implementing rather technical requirements, like triggering some code whenever an entity is written to the database, you can register event handlers on the Persistence Service. Which services should my event handlers usually interact with? The CAP Java SDK provides APIs that can be used in event handlers to interact with other services. These other services can be used to request data, that is required by the event handler implementation. If you’re implementing an event handler of an Application Service, and require additional data of other entities part of that service for validation purposes, it’s a good practice to read this data from the database using the Persistence Service. When using the Persistence Service, no user authentication checks are performed. If you’re mashing up your service with another Application do not require this decoupling, you can also access the service’s entities directly from the database. In case you’re working with draft-enabled entities and your event handler requires access to draft states, you should use the Draft Service. Serve Configuration Configure how application services are served. You can define per service which ones are served by which protocol adapters. In addition, you configure on which path they are available. Finally, the combined path an application service is served on, is composed of the base path of a protocol adapter and the relative path of the application service. Configure Base Path Each protocol adapter has its own and unique base path. By default, the CAP Java SDK provides protocol adapters for OData V4 and V2 and the base paths of both can be configured with CDS Properties in the application.yaml: The following example shows, how to deviate from the defaults: cds: odataV4.endpoint.path: '/api' odataV2.endpoint.path: '/api-v2' Configure Path and Protocol With the annotation @path, you can configure the relative path of a service under which it’s served by protocol adapters. The path is appended to the protocol adapter’s base path. With the annotation @protocols, you can configure a list of protocol adapters a service should be served by. By default, a service is served by all protocol adapters. If you explicitly define a protocol, the service is only served by that protocol adapter. In the following example, the service CatalogService is available on the combined paths /odata/v4/browse with OData V4 and /odata/v2/browse with OData V2: @path : 'browse' @protocols: [ 'odata-v4', 'odata-v2' ] service CatalogService { ... 
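(The code that originally followed this sentence appears to have been lost in extraction. The sketch below is a reconstruction of roughly what the Maven plugin generates for the review action and how a handler for it can look; exact interface and method names, and whether setResult alone completes the event, may differ between SDK versions.)

@EventName("review")
public interface ReviewContext extends EventContext {
    Integer getStars();
    void setStars(Integer stars);
    CqnSelect getCqn();            // the Books entity the bound action was invoked on
    void setResult(Reviews result);
    Reviews getResult();
}

A matching @On handler implements the action and provides its return value:

@On(event = "review", entity = Books_.CDS_NAME)
public void review(ReviewContext context) {
    CqnSelect selectedBook = context.getCqn();
    Integer stars = context.getStars();
    Reviews review = Reviews.create();
    review.setStars(stars);
    // ... persist the review and associate it with the selected book ...
    context.setResult(review);     // provides the return value; add context.setCompleted() if your SDK version requires it
}

The action is then triggered by emitting the corresponding event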
} The same can also be configured in the application.yaml in the cds.application.services.<key>.serve section. Replace <key> with the service name to configure path and protocols: cds.application.services.CatalogService.serve: path: 'browse' protocols: - 'odata-v4' - 'odata-v2' Learn more about the available CDS Properties. Configure Endpoints With the annotations @endpoints.path and @endpoints.protocol, you can provide more complex service endpoint configurations. Use them to serve an application service on different paths for different protocols. The value of @endpoints.path is appended to the protocol adapter’s base path. In the following example, the service CatalogService is available on different paths for the different OData protocols: @endpoints: [ {path : 'browse', protocol: 'odata-v4'}, {path : 'list', protocol: 'odata-v2'} ] service CatalogService { ... } The CatalogService is accessible on the combined path /odata/v4/browse with the OData V4 protocol and on /odata/v2/list with the OData V2 protocol. The same can also be configured in the application.yaml in the cds.application.services.<key>.serve.endpoints section. Replace <key> with the service name to configure the endpoints: cds.application.services.CatalogService.serve.endpoints: - path: 'browse' protocol: 'odata-v4' - path: 'list' protocol: 'odata-v2' Learn more about the available CDS Properties.
https://cap.cloud.sap/docs/java/application-services
CC-MAIN-2022-21
en
refinedweb
Migration to store project full path in repository In we added a new configuration item to GitLab-managed git repositories, storing the project's full path as part of the repository configuration. In the hashed storage case, this allows us to determine the namespace and project a repository should be imported as, as part of a last-ditch "restore from this backup of /var/lib/git/repositories that I have" scenario. However, existing installations won't get the new configuration item written unless their repositories are migrated or transferred at some point. To be sure we can rely on it being present, we should consider a rake task or background migration (the latter preferred) that writes this configuration item once, for all repositories. I don't consider it very high-priority - it's only an issue if we're importing hashed storage repos, and when repos are migrated to hashed storage the configuration is written anyway. So this only affects repos migrated between %10.0 and %10.3. Still, it's a bit of technical debt that I think is worth clearing up at some stage.
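For illustration only, a rough sketch of what a one-off rake task doing this could look like (a background migration is still the preferred shape). It writes the gitlab.fullpath key with Rugged directly; helper names like project.repository.path_to_repo are assumptions about the codebase at the time, and the real implementation would likely go through the existing repository-service plumbing instead.

namespace :gitlab do
  desc 'Write gitlab.fullpath into the git config of every project repository'
  task write_repository_fullpath: :environment do
    Project.find_each do |project|
      begin
        path = project.repository.path_to_repo # assumed helper; exact API may differ
        next unless File.directory?(path)
        Rugged::Repository.new(path).config['gitlab.fullpath'] = project.full_path
      rescue => e
        puts "Skipping #{project.full_path}: #{e.message}"
      end
    end
  end
end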
https://gitlab.com/gitlab-org/gitlab-foss/-/issues/41776
CC-MAIN-2022-21
en
refinedweb
Using Secrets in a Task¶ Flyte supports running a wide variety of tasks, from containers to SQL queries and service calls. In order for Flyte-run containers to request and access secrets, Flyte provides a native Secret construct. For a simple task that launches a Pod, the flow will look something like this: Where: Flyte invokes a plugin to create the K8s object. This can be a Pod or a more complex CRD (e.g. Spark, PyTorch, etc.) Tip The plugin will ensure that labels and annotations are passed through to any Pod that will be spawned due to the creation of the CRD. Flyte will apply labels and annotations that are referenced to all secrets the task is requesting access to. Flyte will send a POST request to ApiServer to create the object. Before persisting the Pod, ApiServer will invoke all registered Pod Webhooks. Flyte’s Pod Webhook will be called. Flyte Pod Webhook will then, using the labels and annotiations attached in step 2, lookup globally mounted secrets for each of the requested secrets. If found, Pod Webhook will mount them directly in the Pod. If not found, it will inject the appropriate annotations to load the secrets for K8s (or Vault or Confidant or any other secret management system plugin configured) into the task pod. Once the secret is injected into the task pod, Flytekit can read it using the secret manager (see examples below). The webhook is included in all overlays in the Flytekit repo. The deployment file creates (mainly) two things; a Job and a Deployment. flyte-pod-webhook-secrets Job: This job runs flytepropeller webhook init-certscommand that issues self-signed CA Certificate as well as a derived TLS certificate and its private key. It stores them into a new secret flyte-pod-webhook-secret. flyte-pod-webhook Deployment: This deployment creates the Webhook pod which creates a MutatingWebhookConfiguration on startup. This serves as the registration contract with the ApiServer to know about the Webhook before it starts serving traffic. Secret Discovery¶ Flyte identifies secrets using a secret group and a secret key. In a task decorator you request a secret like this: @task(secret_requests=[Secret(group=SECRET_GROUP, key=SECRET_NAME)]) Flytekit provides a shorthand for loading the requested secret inside a task: secret = flytekit.current_context().secrets.get(SECRET_GROUP, SECRET_NAME) See the python examples further down for more details on how to request and use secrets in a task. Flytekit relies on the following environment variables to load secrets (defined here). When running tasks and workflows locally you should make sure to store your secrets accordingly or to modify these: - FLYTE_SECRETS_DEFAULT_DIR - The directory Flytekit searches for secret files, default: “/etc/secrets” - FLYTE_SECRETS_FILE_PREFIX - a common file prefix for Flyte secrets, default: “” - FLYTE_SECRETS_ENV_PREFIX - a common env var prefix for Flyte secrets, default: “_FSEC_” When running a workflow on a Flyte cluster, the configured secret manager will use the secret Group and Key to try and retrieve a secret. If successful, it will make the secret available as either file or environment variable and will if necessary modify the above variables automatically so that the task can load and use the secrets. Configuring a secret management system plugin into use¶ When a task requests a secret Flytepropeller will try to retrieve secrets in the following order: 1.) checking for global secrets (secrets mounted as files or environment variables on the flyte-pod-webhook pod) and 2.) 
checking with an additional configurable secret manager. Note that the global secrets take precedence over any secret discoverable by the secret manager plugins. The following additional secret managers are available at the time of writing: - K8s secrets (default) - flyte-pod-webhook will try to look for a K8s secret named after the secret Group and retrieve the value for the secret Key. - AWS Secret Manager - flyte-pod-webhook will add the AWS Secret Manager sidecar container to a task Pod which will mount the secret. - Vault Agent Injector - flyte-pod-webhook will annotate the task Pod with the respective Vault annotations that trigger an existing Vault Agent Injector to retrieve the specified secret Key from a vault path defined as secret Group. You can configure the additional secret manager by defining secretManagerType to be either ‘K8s’, ‘AWS’ or ‘Vault’ in the core config < of the Flytepropeller. When using the K8s secret manager plugin (enabled by default), the secrets need to be available in the same namespace as the task execution (for example flytesnacks-development). K8s secrets can be mounted as either files or injected as environment variables into the task pod, so if you need to make larger files available to the task, then this might be the better option. Furthermore, this method also allows you to have separate credentials for different domains but still using the same name for the secret. The group of the secret request corresponds to the K8s secret name, while the name of the request corresponds to the key of the specific entry in the secret. When using the Vault secret manager, make sure you have Vault Agent deployed on your cluster (step-by-step tutorial). Vault secrets can only be mounted as files and will become available under “/etc/flyte/secrets/SECRET_GROUP/SECRET_NAME”. Vault comes with two versions of the key-value secret store. By default the Vault secret manager will try to retrieve Version 2 secrets. You can specify the KV version by setting webhook.vaultSecretManager.kvVersion in the configmap. Note that the version number needs to be an explicit string (e.g. “1”). You can also configure the Vault role under which Flyte will try to read the secret by setting webhook.vaultSecretManager.role (default: “flyte”). How to Use Secrets Injection in a Task¶ This feature is available in Flytekit v0.17.0+. This example explains how a secret can be accessed in a Flyte Task. Flyte provides different types of Secrets, as part of SecurityContext. But, for users writing python tasks, you can only access secure secrets either as environment variable or injected into a file. Flytekit exposes a type/class called Secrets. It can be imported as follows. Secrets consists of a name and an enum that indicates how the secrets will be accessed. If the mounting_requirement is not specified then the secret will be injected as an environment variable is possible. Ideally, you need not worry about the mounting requirement, just specify the Secret.name that matches the declared secret in Flyte backend Let us declare a secret named user_secret in a secret group user-info. A secret group can have multiple secret associated with the group. Optionally it may also have a group_version. The version helps in rotating secrets. If not specified the task will always retrieve the latest version. Though not recommended some users may want the task version to be bound to a secret version. SECRET_NAME = "user_secret" SECRET_GROUP = "user-info" Now declare the secret in the requests. 
The request tells Flyte to make the secret available to the task. The secret can then be accessed inside the task using the flytekit.ExecutionParameters, through the global flytekit context as shown below. At runtime, flytekit looks inside the task pod for an environment variable or a mounted file with a predefined name/path and loads the value. @task(secret_requests=[Secret(group=SECRET_GROUP, key=SECRET_NAME)]) def secret_task() -> str: secret_val = flytekit.current_context().secrets.get(SECRET_GROUP, SECRET_NAME) # Please do not print the secret value, we are doing so just as a demonstration print(secret_val) return secret_val Note In case of failure to access the secret (it is not found at execution time) an error is raised. Secrets group and key are required parameters during declaration and usage. Failure to specify will cause a ValueError In some cases you may have multiple secrets and sometimes, they maybe grouped as one secret in the SecretStore. For example, In Kubernetes secrets, it is possible to nest multiple keys under the same secret. In this case, the name would be the actual name of the nested secret, and the group would be the identifier for the kubernetes secret. As an example, define 2 secrets username and password, defined in the group user_info USERNAME_SECRET = "username" PASSWORD_SECRET = "password" The Secret structure allows passing two fields, matching the key and the group, as previously described: @task( secret_requests=[ Secret(key=USERNAME_SECRET, group=SECRET_GROUP), Secret(key=PASSWORD_SECRET, group=SECRET_GROUP), ] ) def user_info_task() -> Tuple[str, str]: secret_username = flytekit.current_context().secrets.get( SECRET_GROUP, USERNAME_SECRET ) secret_pwd = flytekit.current_context().secrets.get(SECRET_GROUP, PASSWORD_SECRET) # Please do not print the secret value, this is just a demonstration. print(f"{secret_username}={secret_pwd}") return secret_username, secret_pwd It is also possible to enforce Flyte to mount the secret as a file or an environment variable. The File type is useful for large secrets that do not fit in environment variables - typically asymmetric keys (certs etc). Another reason may be that a dependent library necessitates that the secret be available as a file. In these scenarios you can specify the mount_requirement. In the following example we force the mounting to be an Env variable @task( secret_requests=[ Secret( group=SECRET_GROUP, key=SECRET_NAME, mount_requirement=Secret.MountType.ENV_VAR, ) ] ) def secret_file_task() -> Tuple[str, str]: # SM here is a handle to the secrets manager sm = flytekit.current_context().secrets f = sm.get_secrets_file(SECRET_GROUP, SECRET_NAME) secret_val = sm.get(SECRET_GROUP, SECRET_NAME) # returning the filename and the secret_val return f, secret_val These tasks can be used in your workflow as usual The simplest way to test Secret accessibility is to export the secret as an environment variable. 
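One gap worth flagging before the test snippet: it calls my_secret_workflow(), whose definition is not shown in this copy of the example. A plausible reconstruction, assuming the workflow simply chains the three tasks defined above (and that workflow has been imported from flytekit alongside task and Secret):

@workflow
def my_secret_workflow() -> Tuple[str, str, str, str, str]:
    x = secret_task()
    y, z = user_info_task()
    f, s = secret_file_task()
    return x, y, z, f, s

With the workflow defined, the snippet below runs it locally; as noted above, the secret values first have to be exported as environment variables.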
There are some helper methods available to do so if __name__ == "__main__": sec = SecretsManager() os.environ[sec.get_secrets_env_var(SECRET_GROUP, SECRET_NAME)] = "value" os.environ[ sec.get_secrets_env_var(SECRET_GROUP, USERNAME_SECRET) ] = "username_value" os.environ[ sec.get_secrets_env_var(SECRET_GROUP, PASSWORD_SECRET) ] = "password_value" x, y, z, f, s = my_secret_workflow() assert x == "value" assert y == "username_value" assert z == "password_value" assert f == sec.get_secrets_file(SECRET_GROUP, SECRET_NAME) assert s == "value" Scaling the Webhook¶ Vertical Scaling¶ To scale the Webhook to be able to process the number/rate of pods you need, you may need to configure a vertical pod autoscaler. Horizontal Scaling¶ The Webhook does not make any external API Requests in response to Pod mutation requests. It should be able to handle traffic quickly. For horizontal scaling, adding additional replicas for the Pod in the deployment should be sufficient. A single MutatingWebhookConfiguration object will be used, the same TLS certificate will be shared across the pods and the Service created will automatically load balance traffic across the available pods. Total running time of the script: ( 0 minutes 0.000 seconds) Gallery generated by Sphinx-Gallery
https://docs.flyte.org/projects/cookbook/en/latest/auto/core/containerization/use_secrets.html
CC-MAIN-2022-21
en
refinedweb
Depending on how you count it, Python has about a half-dozen flow control mechanisms, which is much simpler than most programming languages. Fortunately, Python's collection of mechanisms is well chosen, with a high?but not obsessively high?degree of orthogonality between them. From the point of view of this appendix, exception handling is mostly one of Python's flow control techniques. In a language like Java, an application is probably considered "happy" if it does not throw any exceptions at all, but Python programmers find exceptions less "exceptional"?a perfectly good design might exit a block of code only when an exception is raised. Two additional aspects of the Python language are not usually introduced in terms of flow control, but nonetheless amount to such when considered abstractly. Both functional programming style operations on lists and Boolean shortcutting are, at the heart, flow control constructs. Choice between alternate code paths is generally performed with the if statement and its optional elif and else components. An if block is followed by zero or more elif blocks; at the end of the compound statement, zero or one else blocks occur. An if statement is followed by a Boolean expression and a colon. Each elif is likewise followed by a Boolean expression and colon. The else statement, if it occurs, has no Boolean expression after it, just a colon. Each statement introduces a block containing one or more statements (indented on the following lines or on the same line, after the colon). Every expression in Python has a Boolean value, including every bare object name or literal. Any empty container (list, dict, tuple) is considered false; an empty string or Unicode string is false; the number 0 (of any numeric type) is false. As well, an instance whose class defines a .__nonzero__() or .__len__() method is false if these methods return a false value. Without these special methods, every instance is true. Much of the time, Boolean expressions consist of comparisons between objects, where comparisons actually evaluate to the canonical objects "0" or "1". Comparisons are <, >, ==, >=, <=, <>, !=, is, is not, in, and not in. Sometimes the unary operator not precedes such an expression. Only one block in an "if/elif/else" compound statement is executed during any pass?if multiple conditions hold, the first one that evaluates as true is followed. For example: >>> if 2+2 <= 4: ... print "Happy math" ... Happy math >>> x = 3 >>> if x > 4: print "More than 4" ... elif x > 3: print "More than 3" ... elif x > 2: print "More than 2" ... else: print "2 or less" ... More than 2 >>> if isinstance(2, int): ... print "2 is an int" # 2.2+ test ... else: ... print "2 is not an int" Python has no "switch" statement to compare one value with multiple candidate matches. Occasionally, the repetition of an expression being compared on multiple elif lines looks awkward. A "trick" in such a case is to use a dict as a pseudo-switch. The following are equivalent, for example: >>> if var.upper() == 'ONE': val = 1 ... elif var.upper() == 'TWO': val = 2 ... elif var.upper() == 'THREE': val = 3 ... elif var.upper() == 'FOUR': val = 4 ... else: val = 0 ... >>> switch = {'ONE':1, 'TWO':2, 'THREE':3, 'FOUR':4} >>> val = switch.get(var.upper(), 0) The Boolean operators or and and are "lazy." That is, an expression containing or or and evaluates only as far as it needs to determine the overall value. 
Specifically, if the first disjoin of an or is true, the value of that disjoin becomes the value of the expression, without evaluating the rest; if the first conjoin of an and is false, its value likewise becomes the value of the whole expression. Shortcutting is formally sufficient for switching and is sometimes more readable and concise than "if/elif/else" blocks. For example: >>> if this: # 'if' compound statement ... result = this ... elif that: ... result = that ... else: ... result = 0 ... >>> result = this or that or 0 # boolean shortcutting Compound shortcutting is also possible, but not necessarily easy to read; for example: >>> (cond1 and func1()) or (cond2 and func2()) or func3() The for statement loops over the elements of a sequence. In Python 2.2+, looping utilizes an iterator object (which may not have a predetermined length)?but standard sequences like lists, tuples, and strings are automatically transformed to iterators in for statements. In earlier Python versions, a few special functions like xreadlines() and xrange() also act as iterators. Each time a for statement loops, a sequence/iterator element is bound to the loop variable. The loop variable may be a tuple with named items, thereby creating bindings for multiple names in each loop. For example: >>> for x,y,z in [(1,2,3),(4,5,6),(7,8,9)]: print x, y, z, '*', ... 1 2 3 * 4 5 6 * 7 8 9 * A particularly common idiom for operating on each item in a dictionary is: >>> for key,val in dct.items(): ... print key, val, '*', ... 1 2 * 3 4 * 5 6 * When you wish to loop through a block a certain number of times, a common idiom is to use the range() or xrange() built-in functions to create ad hoc sequences of the needed length. For example: >>> for _ in range(10): ... print "X", # '_' is not used in body ... X X X X X X X X X X However, if you find yourself binding over a range just to repeat a block, this often indicates that you have not properly understood the loop. Usually repetition is a way of operating on a collection of related things that could instead be explicitly bound in the loop, not just a need to do exactly the same thing multiple times. If the continue statement occurs in a for loop, the next loop iteration proceeds without executing later lines in the block. If the break statement occurs in a for loop, control passes past the loop without executing later lines (except the finally block if the break occurs in a try). Much like the for statement, the built-in functions map(), filter(), and reduce() perform actions based on a sequence of items. Unlike a for loop, these functions explicitly return a value resulting from this application to each item. Each of these three functional programming style functions accepts a function object as a first argument and sequence(s) as a subsequent argument(s). The map() function returns a list of items of the same length as the input sequence, where each item in the result is a "transformation" of one item in the input. Where you explicitly want such transformed items, use of map() is often both more concise and clearer than an equivalent for loop; for example: >>> nums = (1,2,3,4) >>> str_nums = [] >>> for n in nums: ... str_nums.append(str(n)) ... >>> str_nums ['1', '2', '3', '4'] >>> str_nums = map(str, nums) >>> str_nums ['1', '2', '3', '4'] If the function argument of map() accepts (or can accept) multiple arguments, multiple sequences can be given as later arguments. If such multiple sequences are of different lengths, the shorter ones are padded with None values. 
The special value None may be given as the function argument, producing a sequence of tuples of elements from the argument sequences. >>> nums = (1,2,3,4) >>> def add(x, y): ... if x is None: x=0 ... if y is None: y=0 ... return x+y ... >>> map(add, nums, [5,5,5]) [6, 7, 8, 4] >>> map(None, (1,2,3,4), [5,5,5]) [(1, 5), (2, 5), (3, 5), (4, None)] The filter() function returns a list of those items in the input sequence that satisfy a condition given by the function argument. The function argument must accept one parameter, and its return value is interpreted as a Boolean (in the usual manner). For example: >>> nums = (1,2,3,4) >>> odds = filter(lambda n: n%2, nums) >>> odds (1, 3) Both map() and filter() can use function arguments that have side effects, thereby making it possible?but not usually desirable?to replace every for loop with a map() or filter() function. For example: >>> for x in seq: ... # bunch of actions ... pass ... >>> def actions(x): ... # same bunch of actions ... return 0 ... >>> filter(actions, seq) [] Some epicycles are needed for the scoping of block variables and for break and continue statements. But as a general picture, it is worth being aware of the formal equivalence between these very different-seeming techniques. The reduce() function takes as a function argument a function with two parameters. In addition to a sequence second argument, reduce() optionally accepts a third argument as an initializer. For each item in the input sequence, reduce() combines the previous aggregate result with the item, until the sequence is exhausted. While reduce()?like map() and filter()?has a loop-like effect of operating on every item in a sequence, its main purpose is to create some sort of aggregation, tally, or selection across indefinitely many items. For example: >>> from operator import add >>> sum = lambda seq: reduce(add, seq) >>> sum([4,5,23,12]) 44 >>> def tastes_better(x, y): ... # some complex comparison of x, y ... # either return x, or return y ... # ... ... >>> foods = [spam, eggs, bacon, toast] >>> favorite = reduce(tastes_better, foods) List comprehensions (listcomps) are a syntactic form that was introduced with Python 2.0. It is easiest to think of list comprehensions as a sort of cross between for loops and the map() or filter() functions. That is, like the functions, listcomps are expressions that produce lists of items, based on "input" sequences. But listcomps also use the keywords for and if that are familiar from statements. Moreover, it is typically much easier to read a compound list comprehension expression than it is to read corresponding nested map() and filter() functions. For example, consider the following small problem: You have a list of numbers and a string of characters; you would like to construct a list of all pairs that consist of a number from the list and a character from the string, but only if the ASCII ordinal is larger than the number. In traditional imperative style, you might write: >>> bigord_pairs = [] >>> for n in (95,100,105): ... for c in 'aei': ... if ord(c) > n: ... bigord_pairs.append((n,c)) ... >>> bigord_pairs [(95, 'a'), (95, 'e'), (95, 'i'), (100, 'e'), (100, 'i')] In a functional programming style you might write the nearly unreadable: >>> dupelms=lambda lst,n: reduce(lambda s,t:s+t, ... map(lambda l,n=n: [l]*n, 1st)) >>> combine=lambda xs,ys: map(None,xs*len(ys), dupelms(ys,len(xs))) >>> bigord_pairs=lambda ns,cs: filter(lambda (n,c):ord(c)>n, ... 
combine(ns,cs)) >>> bigord_pairs((95,100,105),'aei') [(95, 'a'), (95, 'e'), (100, 'e'), (95, 'i'), (100, 'i')] In defense of this FP approach, it has not only accomplished the task at hand, but also provided the general combinatorial function combine() along the way. But the code is still rather obfuscated. List comprehensions let you write something that is both concise and clear: >>> [(n,c) for n in (95,100,105) for c in 'aei' if ord(c)>n] [(95, 'a'), (95, 'e'), (95, 'i'), (100, 'e'), (100, 'i')] As long as you have listcomps available, you hardly need a general combine() function, since it just amounts to repeating the for clause in a listcomp. Slightly more formally, a list comprehension consists of the following: (1) Surrounding square brackets (like a list constructor, which it is). (2) An expression that usually, but not by requirement, contains some names that get bound in the for clauses. (3) One or more for clauses that bind a name repeatedly (just like a for loop). (4) Zero or more if clauses that limit the results. Generally, but not by requirement, the if clauses contain some names that were bound by the for clauses. List comprehensions may nest inside each other freely. Sometimes a for clause in a listcomp loops over a list that is defined by another listcomp; once in a while a nested listcomp is even used inside a listcomp's expression or if clauses. However, it is almost as easy to produce difficult-to-read code by excessively nesting listcomps as it is by nesting map() and filter() functions. Use caution and common sense about such nesting. It is worth noting that list comprehensions are not as referentially transparent as functional programming style calls. Specifically, any names bound in for clauses remain bound in the enclosing scope (or global if the name is so declared). These side effects put a minor extra burden on you to choose distinctive or throwaway names for use in listcomps. The while statement loops over a block as long as the expression after the while remains true. If an else block is used within a compound while statement, as soon as the expression becomes false, the else block is executed. The else block is chosen even if the while expression is initially false. If the continue statement occurs in a while loop, the next loop iteration proceeds without executing later lines in the block. If the break statement occurs in a while loop, control passes past the loop without executing later lines (except the finally block if the break occurs in a try). If a break occurs in a while block, the else block is not executed. If a while statement's expression is to go from being true to being false, typically some name in the expression will be re-bound within the while block. At times an expression will depend on an external condition, such as a file handle or a socket, or it may involve a call to a function whose Boolean value changes over invocations. However, probably the most common Python idiom for while statements is to rely on a break to terminate a block. Some examples: >>>>> while command != 'exit': ... command = raw_input('Command > ') ... # if/elif block to dispatch on various commands ... Command > someaction Command > exit >>> while socket.ready(): ... socket.getdata() # do something with the socket ... else: ... socket.close() # cleanup (e.g. close socket) ... >>> while 1: ... command = raw_input('Command > ') ... if command == 'exit': break ... # elif's for other commands ... 
Command > someaction Command > exit Both functions and object methods allow a kind of nonlocality in terms of program flow, but one that is quite restrictive. A function or method is called from another context, enters at its top, executes any statements encountered, then returns to the calling context as soon as a return statement is reached (or the function body ends). The invocation of a function or method is basically a strictly linear nonlocal flow. Python 2.2 introduced a flow control construct, called generators, that enables a new style of nonlocal branching. If a function or method body contains the statement yield, then it becomes a generator function, and invoking the function returns a generator iterator instead of a simple value. A generator iterator is an object that has a .next() method that returns values. Any instance object can have a .next() method, but a generator iterator's method is special in having "resumable execution." In a standard function, once a return statement is encountered, the Python interpreter discards all information about the function's flow state and local name bindings. The returned value might contain some information about local values, but the flow state is always gone. A generator iterator, in contrast, "remembers" the entire flow state, and all local bindings, between each invocation of its .next() method. A value is returned to a calling context each place a yield statement is encountered in the generator function body, but the calling context (or any context with access to the generator iterator) is able to jump back to the flow point where this last yield occurred. In the abstract, generators seem complex, but in practice they prove quite simple. For example: >>> from __future__ import generators # not needed in 2.3+ >>> def generator_func(): ... for n in [1,2]: ... yield n ... print "Two yields in for loop" ... yield 3 ... >>> generator_iter = generator_func() >>> generator_iter.next() 1 >>> generator_iter.next() 2 >>> generator_iter.next() Two yields in for loop 3 >>> generator_iter.next() Traceback (most recent call last): File "<stdin>", line 1, in ? StopIteration The object generator_iter in the example can be bound in different scopes, and passed to and returned from functions, just like any other object. Any context invoking generator_iter.next() jumps back into the last flow point where the generator function body yielded. In a sense, a generator iterator allows you to perform jumps similar to the "GOTO" statements of some (older) languages, but still retains the advantages of structured programming. The most common usage for generators, however, is simpler than this. Most of the time, generators are used as "iterators" in a loop context; for example: >>> for n in generator_func(): ... print n ... 1 2 Two yields in for loop 3 In recent Python versions, the StopIteration exception is used to signal the end of a for loop. The generator iterator's .next() method is implicitly called as many times as possible by the for statement. The name indicated in the for statement is repeatedly re-bound to the values the yield statement(s) return. Python uses exceptions quite broadly and probably more naturally than any other programming language. In fact there are certain flow control constructs that are awkward to express by means other than raising and catching exceptions. There are two general purposes for exceptions in Python. On the one hand, Python actions can be invalid or disallowed in various ways. 
You are not allowed to divide by zero; you cannot open (for reading) a filename that does not exist; some functions require arguments of specific types; you cannot use an unbound name on the right side of an assignment; and so on. The exceptions raised by these types of occurrences have names of the form [A?Z].*Error. Catching error exceptions is often a useful way to recover from a problem condition and restore an application to a "happy" state. Even if such error exceptions are not caught in an application, their occurrence provides debugging clues since they appear in tracebacks. The second purpose for exceptions is for circumstances a programmer wishes to flag as "exceptional." But understand "exceptional" in a weak sense?not as something that indicates a programming or computer error, but simply as something unusual or "not the norm." For example, Python 2.2+ iterators raise a StopIteration exception when no more items can be generated. Most such implied sequences are not infinite length, however; it is merely the case that they contain a (large) number of items, and they run out only once at the end. It's not "the norm" for an iterator to run out of items, but it is often expected that this will happen eventually. In a sense, raising an exception can be similar to executing a break statement?both cause control flow to leave a block. For example, compare: >>> n = 0 >>> while 1: ... n = n+1 ... if n > 10: break ... >>> print n 11 >>> n = 0 >>> try: ... while 1: ... n = n+1 ... if n > 10: raise "ExitLoop" ... except: ... print n ... 11 In two closely related ways, exceptions behave differently than do break statements. In the first place, exceptions could be described as having "dynamic scope," which in most contexts is considered a sin akin to "GOTO," but here is quite useful. That is, you never know at compile time exactly where an exception might get caught (if not anywhere else, it is caught by the Python interpreter). It might be caught in the exception's block, or a containing block, and so on; or it might be in the local function, or something that called it, or something that called the caller, and so on. An exception is a fact that winds its way through execution contexts until it finds a place to settle. The upward propagation of exceptions is quite opposite to the downward propagation of lexically scoped bindings (or even to the earlier "three-scope rule"). The corollary of exceptions' dynamic scope is that, unlike break, they can be used to exit gracefully from deeply nested loops. The "Zen of Python" offers a caveat here: "Flat is better than nested." And indeed it is so, if you find yourself nesting loops too deeply, you should probably refactor (e.g., break loops into utility functions). But if you are nesting just deeply enough, dynamically scoped exceptions are just the thing for you. Consider the following small problem: A "Fermat triple" is here defined as a triple of integers (i,j,k) such that "i**2 + j**2 == k**2". Suppose that you wish to determine if any Fermat triples exist with all three integers inside a given numeric range. An obvious (but entirely nonoptimal) solution is: >>> def fermat_triple(beg, end): ... class EndLoop(Exception): pass ... range_ = range(beg, end) ... try: ... for i in range_: ... for j in range_: ... for k in range_: ... if i**2 + j**2 == k**2: ... raise EndLoop, (i,j,k) ... except EndLoop, triple: ... # do something with 'triple' ... return i,j,k ... 
>>> fermat_triple(1,10) (3, 4, 5) >>> fermat_triple(120,150) >>> fermat_triple(100,150) (100, 105, 145) By raising the EndLoop exception in the middle of the nested loops, it is possible to catch it again outside of all the loops. A simple break in the inner loop would only break out of the most deeply nested block, which is pointless. One might devise some system for setting a "satisfied" flag and testing for this at every level, but the exception approach is much simpler. Since the except block does not actually do anything extra with the triple, it could have just been returned inside the loops; but in the general case, other actions can be required before a return. It is not uncommon to want to leave nested loops when something has "gone wrong" in the sense of an "*Error" exception. Sometimes you might only be in a position to discover a problem condition within nested blocks, but recovery still makes better sense outside the nesting. Some typical examples are problems in I/O, calculation overflows, missing dictionary keys or list indices, and so on. Moreover, it is useful to assign except statements to the calling position that really needs to handle the problems, then write support functions as if nothing can go wrong. For example: >>> try: ... result = complex_file_operation(filename) ... except IOError: ... print "Cannot open file", filename The function complex_file_operation() should not be burdened with trying to figure out what to do if a bad filename is given to it?there is really nothing to be done in that context. Instead, such support functions can simply propagate their exceptions upwards, until some caller takes responsibility for the problem. The try statement has two forms. The try/except/else form is more commonly used, but the try/finally form is useful for "cleanup handlers." In the first form, a try block must be followed by one or more except blocks. Each except may specify an exception or tuple of exceptions to catch; the last except block may omit an exception (tuple), in which case it catches every exception that is not caught by an earlier except block. After the except blocks, you may optionally specify an else block. The else block is run only if no exception occurred in the try block. For example: >>> def except_test(n): ... try: x = 1/n ... except IOError: print "IO Error" ... except ZeroDivisionError: print "Zero Division" ... except: print "Some Other Error" ... else: print "All is Happy" ... >>> except_test(l) All is Happy >>> except_test(0) Zero Division >>> except_test('x') Some Other Error An except test will match either the exception actually listed or any descendent of that exception. It tends to make sense, therefore, in defining your own exceptions to inherit from related ones in the exceptions module. For example: >>> class MyException(IOError): pass >>> try: ... raise MyException ... except IOError: ... print "got it" ... got it In the try/finally form of the try statement, the finally statement acts as general cleanup code. If no exception occurs in the try block, the finally block runs, and that is that. If an exception was raised in the try block, the finally block still runs, but the original exception is re-raised at the end of the block. However, if a return or break statement is executed in a finally block?or if a new exception is raised in the block (including with the raise statement)?the finally block never reaches its end, and the original exception disappears. 
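For example, the following minimal sketch shows a pending exception being silently replaced because a return is executed in the finally block:

>>> def swallow():
...     try:
...         raise ValueError("lost")
...     finally:
...         return "cleaned up"   # the ValueError never propagates
...
>>> swallow()
'cleaned up'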
A finally statement acts as a cleanup block even when its corresponding try block contains a return, break, or continue statement. That is, even though a try block might not run all the way through, finally is still entered to clean up whatever the try did accomplish. A typical use of this compound statement opens a file or other external resource at the very start of the try block, then performs several actions that may or may not succeed in the rest of the block; the finally is responsible for making sure the file gets closed, whether or not all the actions on it prove possible. The try/finally form is never strictly needed since a bare raise statement will reraise the last exception. It is possible, therefore, to have an except block end with the raise statement to propagate an error upward after taking some action. However, when a cleanup action is desired whether or not exceptions were encountered, the try/finally form can save a few lines and express your intent more clearly. For example: >>> def finally_test(x): ... try: ... y = 1/x ... if x > 10: ... return x ... finally: ... print "Cleaning up..." ... return y ... >>> finally_test(0) Cleaning up... Traceback (most recent call last): File "<stdin>", line 1, in ? File "<stdin>", line 3, in finally_test ZeroDivisionError: integer division or modulo by zero >>> finally_test(3) Cleaning up... 0 >>> finally_test(100) Cleaning up... 100 Unlike in languages in the Lisp family, it is usually not a good idea to create Python programs that execute data values. It is possible, however, to create and run Python strings during program runtime using several built-in functions. The modules code, codeop, imp, and new provide additional capabilities in this direction. In fact, the Python interactive shell itself is an example of a program that dynamically reads strings as user input, then executes them. So clearly, this approach is occasionally useful. Other than in providing an interactive environment for advanced users (who themselves know Python), a possible use for the "data as code" model is with applications that themselves generate Python code, either to run later or to communicate with another application. At a simple level, it is not difficult to write compilable Python programs based on templatized functionality; for this to be useful, of course, you would want a program to contain some customization that was determinable only at runtime. Evaluate the expression in string s and return the result of that evaluation. You may specify optional arguments globals and locals to specify the namespaces to use for name lookup. By default, use the regular global and local namespace dictionaries. Note that only an expression can be evaluated, not a statement suite. Most of the time when a (novice) programmer thinks of using eval() it is to compute some value?often numeric?based on data encoded in texts. For example, suppose that a line in a report file contains a list of dollar amounts, and you would like the sum of these numbers. A naive approach to the problem uses eval() : >>>>> eval("+".join([d.replace('$', '') for d in line.split()])) 207 While this approach is generally slow, that is not an important problem. A more significant issue is that eval() runs code that is not known until runtime; potentially line could contain Python code that causes harm to the system it runs on or merely causes an application to malfunction. Imagine that instead of a dollar figure, your data file contained os.rmdir("/"). 
A better approach is to use the safe type coercion functions int(), float(), and so on. >>> nums = [int(d.replace('$', '')) for d in line.split()] >>> from operator import add >>> reduce(add, nums) 207 The exec statement is a more powerful sibling of the eval() function. Any valid Python code may be run if passed to the exec statement. The format of the exec statement allows optional namespace specification, as with eval() : exec code [in globals [,locals]] For example: >>>>> exec s in globals(), locals() 0 1 2 3 4 5 6 7 8 9 The argument code may be either a string, a code object, or an open file object. As with eval(), the security dangers and speed penalties of exec usually outweigh any convenience provided. However, where code is clearly under application control, there are occasionally uses for this statement. Import the module named s, using namespace dictionaries globals and locals. The argument fromlist may be omitted, but if specified as a nonempty list of strings?e.g., [""]?the fully qualified subpackage will be imported. For normal cases, the import statement is the way you import modules, but in the special circumstance that the value of s is not determined until runtime, use __import__(). >>> op = __import__('os.path',globals(),locals(),['']) >>> op.basename('/this/that/other') 'other' Equivalent to eval(raw_input (prompt)), along with all the dangers associated with eval() generally. Best practice is to always use raw_input(), but you might see input() in existing programs. Return a string from user input at the terminal. Used to obtain values interactive in console-based applications. >>> s = raw_input('Last Name: ') Last Name: Mertz >>> s 'Mertz'
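When a numeric value is wanted from the user, best practice is to combine raw_input() with an explicit conversion rather than resorting to input(); a small illustrative sketch:

>>> age = int(raw_input('Age: '))
Age: 42
>>> age
42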
https://etutorials.org/Programming/Python.+Text+processing/Appendix+A.+A+Selective+and+Impressionistic+Short+Review+of+Python/A.4+Flow+Control/
CC-MAIN-2022-21
en
refinedweb
In this tutorial, we'll learn how to integrate TailwindCSS with Next.js. What is Next.js? Next.js is a React framework which provides Server-Side Rendering out of box. It's a very popular Node.js framework with over 43k stars on Github. It provides a several features like Server-Side rendering, Static Exporting, CSS-in-JS, etc. I've worked with Next.js for more than a year and I liked the overall Developer Experience. It's pretty easy to create new pages, add new plugin, routing, etc. On top of all these, they've a lot of starter templates (or examples). Deploying Next.js applications to production using Zeit is also pretty simple. What is TailwindCSS? I've already covered about TailwindCSS in one of my previous posts. In short, TailwindCSS is a utility-first CSS framework which aims to provide us with a set of utlity classes (like flex, block, inline-block, etc.). In addition to that, it also provides us utility classes to create CSS grids, responsive designs as well as style hover, focus and active pseudo-classes. Getting started Let's start by bootstrapping a Next.js application. To do so, we need to run the following command: npx create-next-app If everything works fine, you should get an output like the following: Once the installation is complete, we'll get a new frontend directory as that's the name we gave while creating the application. Let's go inside the frontend and start the server: cd frontend && yarn dev Now,if we visit we'll see the following page: Installing TailwindCSS I've written about how to integrate TailwindCSS with React in one of my previous posts. The process of integrating TailwindCSS with Next.js will be a similar one. First, we need to install TailwindCSS: yarn add tailwindcss If you prefer npm, you can run the following command instead of the above one: npm install tailwindcss Next, we need to use the @tailwind directive to inject Tailwind's base, components, and utilities styles into our CSS. To do that we need to create a new file at public/assets/styles/vendors.css add the following code to it: @tailwind base; @tailwind components; @tailwind utilities; Next, we need to add the build:style script to our package.json file: "scripts": { "dev": "next dev", "build": "next build", "start": "next start", "build:style": "tailwind build public/assets/styles/vendors.css -o public/assets/styles/tailwind.css" }, Now, the build:style command will generate a new public/assets/styles/tailwind.css file whenever we run it: Next, we need to add the generated file to our pages/index.js file: import React from "react"; import Head from "next/head"; import Nav from "../components/nav"; import "../public/assets/styles/tailwind.css"; However, if we visit we'll get the following error: This happens because Next.js doesn't know how to process this file as there is no Webpack loader installed in our application which can help in processing the .css file. To resolve this issue, we'll have to install the next-css plugin: yarn add @zeit/next-css Next, we need to configure Next.js to use that loader. To do that, we need to create a next.config.js in the root of our project and add the following code to it: const withCSS = require("@zeit/next-css"); module.exports = withCSS({}); That's all we need to do to make TailwindCSS work with Next.js. To verify whether TailwindCSS is working or not, we can add a TailwindCSS class. 
We can add the bg-blue-900 py-8 class to our pages/index.js file: const Home = () => ( <div> <Head> <title>Home</title> <link rel="icon" href="/favicon.ico" /> </Head> <Nav /> <div className="hero bg-blue-100 py-8"> <h1 className="title">Welcome to Next.js!</h1> <p className="description"> To get started, edit <code>pages/index.js</code> and save to reload. </p> Now, if we restart our server and visit we'll see that TailwindCSS is working as expected: If you want to know more about configuring Tailwind, you can read it here. Conclusion In this tutorial, we've learnt how to use TailwindCSS with a Next.js application. In the future, we'll build a ProductHunt clone using React and GraphQL. I hope that this tutorial helps you in your future projects.
https://nirmalyaghosh.com/articles/integrate-tailwindcss-js
CC-MAIN-2022-21
en
refinedweb
Most people who have used both Object Role Modeling (ORM) and Entity Relationship (ER) modeling prefer to use ORM for conceptual analysis, because of its expressiveness , completeness, and flexibility. An added benefit of ORM is the automatic normalization that occurs during the mapping process. However, if you prefer working with entity relationship models, VEA allows you to bypass ORM and create data models using ER techniques. VEA provides two ways to create a logical database model using ER techniques. You can create an ER (Entity Relationship) source model, or you can directly draw a logical database diagram using the database model diagram solution. We'll cover the database model diagram in sections 10.3 to 10.7 of the chapter, and discuss ER source models at the end. Building a database model diagram and building an ER source model are very similar, so nearly all the material in sections 10.3 to 10.7 of this chapter applies to ER source models as well. Unlike ORM and ER source models, database model diagrams cannot be part of larger projects. The database model diagram is well suited for simple or one time tasks . If you intend to model a complex or large universe of discourse that could benefit from being developed as a set of sub models, then you should consider using source models instead of just the database model diagram solution. In pure ER modeling, entity types and attributes are conceptual constructs, and relationships between entity types may be expressed directly. In the VEA database model diagram and ER source model solutions, the correspondence between entity types and tables is 1:1, as is the correspondence between attributes and columns . VEA supports separate conceptual and physical naming for these constructs, which allows the modeler to present users with familiar business names for objects, while maintaining physical naming standards at the database level. For ease of reference, the remainder of this chapter will use the term "entity" loosely to mean either a conceptual entity type or a relational table scheme, and the term "attribute" to mean either a conceptual attribute or relational column. A logical data model can be expressed in different notations. VEA supports a generic relational notation, IDEF1X notation, and some other variations. The choice of notations and display options does not affect the underlying model. Before discussing how to create models using the database model diagram solution, let's take a quick look at some the different logical notations available in VEA. With relational notation, each entity is shown as rectangular box. The name of the entity is in the gray shaded portion of each box. The entities represented by the model in Figure 10-1 are Country, Patient, BloodPressureTest and PapSmear . Figure 10-1: Data Model expressed in relational notation. Each entity's primary key is underlined and is also identified by the letters PK on the left hand side of the entity. The primary key is shown just below the name of the entity, and is separated from the other attributes by a horizontal line. For instance, the primary key of Patient is PatientNr . All of the non primary key attributes are listed below the horizontal line. Required attributes are shown in boldface type, and migrated attributes (i.e., attributes that are part of a foreign key) are marked with the letters FK=ordinal> on the left hand side of the entity. 
Attributes like Country.CountryName that represent an alternate identifier for an entity are marked with the uniqueness symbol U=ordinal> on the left hand side of the entity. Relationships (foreign key references) are shown as solid arrows. With VEA's relational notation, all relationship lines look the same, as do all entity outlines. As you will discover in the next section, this is not the case with IDEF1X notation. The arrowhead always points to the "parent" (e.g., the "one" side of a l:m) entity in the relationship. For instance, in the relationship between Country and Patient, many patients may be born in one country, but each patient is born in at most one country. If you are used to notations that reverse the arrowheads, it can be confusing to remember which way the arrow points in VEA. Thinking in terms of logical implication may help. In the Country/Patient example, the constraint symbolized by the relationship line can be verbalized thus: " If a country code is listed in the Patient entity, then that country code MUST be listed in the Country entity." Substituting the variable P for the antecedent ("a country code is listed in the Patient entity") and the variable C for the consequent ("that country code must be listed in the Country entity") yields the argument from If P then Q . By longstanding convention, such argument forms are graphically denoted with an arrow pointing from the antecedent to the consequent like this: P ’ Q . If one thinks of the entities as the variables in the statement, the direction of the arrow in VEA is consistent with the conventional direction of the arrow in logical statements. As an option, VEA's relational notation can display cardinality indicators next to each relationship line. Cardinality is the number of instances of each entity that can participate in the relationship. The cardinality display option is turned off by default, and is not enabled for Figure 10-1, so the symbols are not visible in the figure. However, Table 10-1 shows the full set of cardinality indicators that could be displayed if you enabled the cardinality display option. In addition to this UML-style notation, you can choose the popular crowsfoot notation used in approaches such as Information Engineering (see the Glossary for examples). This notation has become quite popular in certain sectors, especially for military contractors, because of its adoption as a FIPS (Federal Information Processing Standard), as outlined in FIPS 184. With IDEF1X notation, each entity is shown as a rectangular box, but the shape of the rectangle depends on the kind of relationships in which the entity participates. An entity whose primary key does not contain any foreign keys is said to be "independent." An entity whose primary key contains at least one foreign key is termed "dependent." IDEF1X uses rectangles with square corners to represent independent entities, and rectangles with rounded corners to depict dependent entities. In Figure 10-2, the Country, Patient, and PapSmear entities are independent, while BloodPressureTest is a dependent entity. Figure 10-2: Data model expressed in IDEF1X notation. Relationships appear as lines between that terminate in a solid dot on the "child" entity of the relationship. The line itself is either dashed or solid, depending on the type of relationship depicted. If the migrated attribute becomes part of the primary key of the "child" entity the relationship is called "identifying." 
Relationships where the migrated attribute does not become part of the child's primary key are termed "non-identifying." IDEF1X shows identifying relationships as solid lines, and non-identifying relationships as dashed lines. In Figure 10-2, the relationship between "Patient" and "BloodPressureTest" is identifying. The other relationships in the diagram are non-identifying. In contrast to relational notation, IDEF1X uses special symbols to indicate if a relationship is optional. A hollow diamond on the "parent" side (e.g., the side opposite the dot) of the relationship indicates that the relationship, and thus the associated migrated attribute, is optional. In Figure 10-2, the hollow diamond on the relationship line touching the Country entity means that a given instance of Patient (the entity at the other end of the line) may or may not be related to an instance of Country . On a relational level, the optional nature of this relationship causes the attribute Patient.CountryCode to be optional. Contrast this with the relationship between "PapSmear" and "Patient" where the lack of a hollow diamond shows that the relationship is mandatory. Thus, it is impossible to record an instance of PapSmear that is not related to some instance of Patient . As in relational notation, IDEF1X separates the primary key of the entity from the non key attributes with a horizontal line. However, the primary key itself is not underlined, nor are there any additional letters denoting the key. Boldface type is not used for any attribute, regardless of whether the attribute is optional or mandatory. A display option (which has been enabled for Figure 10-2) allows VEA to use the standard IDEF1X method for indicating optional attributes ”letter ˜O enclosed in parentheses after the attribute name. Migrated attributes are indicated by the letters FK enclosed in parentheses after the attribute name. As with relational notation, IDEF1X can optionally display relationship cardinality. Table 10-2 shows the full set of cardinality indicators that that VEA can display if you enable the cardinality option when using IDEF1X notation. VEA's use of notation and other display options are controlled through a number of menus and dialog boxes. It is easier to see the effect of these display options in a simple diagram. In the next few sections, you will learn how create a simple diagram. Chapter 12 explains how to control the appearance of your diagram through the use of document display options. From the main menu, choose File > New > Database > Database Model Diagram to create a blank database model diagram. Your screen should look similar to Figure 10-3. Figure 10-3: Creating a New Database Model Diagram. Because of VEA's anchored windows and dockable toolbars , your screen may look somewhat different. However, you should see at least three major areas: the shape stencils (shown here on the left side of the screen), the drawing pane (upper portion of screen), and the Tables and Views anchored window in the bottom portion of the screen. If you cannot see the Tables and Views window, choose Database > View > Tables and Views from the menu. You may or may not have an Output tab . Anytime you perform an action that causes the tool to generate status messages, the Output window will automatically be displayed. Because the tab will automatically show up when you need it, there is no need to specifically enable the Output tab from the Database > View menu, though you can if you wish. 
It will be easier to follow the examples in this chapter if you ensure that your display options are set to the original factory defaults. From the main menu, choose Database > Options > Document . A tabbed dialog box like the one in Figure 10-4 will appear. Click on the Defaults button and choose Restore Original from the sub menu. Click the OK button to return to the diagram. Figure 10-4: Database Document Options, General pane. In the next chapter, you will learn how to set up the database drivers that come with VEA. Database drivers translate your model into platform specific DDL (data definition language) code with platform specific data types. You will find it easier to follow the examples in this chapter if you set the Microsoft SQL Server driver as your default driver. To set the default driver, choose Database > Options > Drivers from the main menu. Your screen should look like Figure 10-5. Highlight Microsoft SQL Server if it is not already selected and click the OK button. The other tabs in the dialog box and the Setup button are described in the next chapter, so don't worry about them for now. Figure 10-5: Selecting a default database driver. In contrast to the ORM solution, the database model diagram does not provide a sentence driven tool like the Fact Editor to help you build the data model. Instead, you build the model by dragging and dropping shapes . Before going onto the next section, make sure that the Entity Relationship stencil is open. If you cannot see the Entity Relationship stencil, open it with the command File > Stencils > Database > Entity relationship. VEA also provides an Object-Relational stencil that you can use to create models for servers that support object-relational constructs. The Object-Relational stencil is described in Chapter 13. To add an entity to the diagram, drag the entity shape from upper left portion of the stencil, and drop it on the drawing surface. While you are dragging the shape, you will notice that the object is named Table . As soon as you drop the entity on the drawing surface, it will receive a default physical name of Table1 . The ordinal appended to the word Table will increase with each new entity added, so the next entity added would be called Table2, the third would be called Table3 and so on The entity will show up as box on the drawing surface, and it will be listed in the Tables and Views window, shown at the bottom of Figure 10-6. Figure 10-6: Just added a new entity. To give the entity a meaningful name, double click on the new entity to bring up the Database Properties window in the anchored window portion of the screen. Click on the Definition category if it is not already selected. Your screen should now look like Figure 10-7. The default physical name for the newly created entity is Table1, and the default conceptual name is Entity1 . You can change either the physical or conceptual name of the entity by typing into the appropriate field. For this example, type the word "Patient" into the Physical name field. You will notice that the conceptual name automatically changes to match the Physical name. Because of the default display options, only the physical name of the entity will show on the diagram. In this case, the physical and conceptual names are the same, so the display option chosen doesn't make much difference. Chapter 12 will discuss controlling the display of physical or conceptual names on the diagram. Figure 10-7: The Database Properties Window; Definition category. 
The Name Space property should rarely, be used. Name spaces are designed for differentiating entities that are actually different in nature, but share the same name (e.g. the "homonym problem"). If you find homonyms in the Universe of Discourse that you are modeling, it is vastly preferable to facilitate a change of terms among users than to perpetuate the confusion in a data model. However, in very large environments, it may be impossible to resolve all homonyms, so the namespace option provides a way for the model to accommodate the issue. The Owner and Source database properties are specific to reverse engineering and will be discussed in the chapter on reverse engineering. The Defining type property is only used in Object-Relational models, which are discussed in chapter 13. To add attributes to the Patient entity, double click on the entity and select the Columns category of the Database Properties window to view the Columns pane. The fastest way to add attributes is to enter text directly into the various fields of the Columns table as shown in Figure 10-8. Figure 10-8: Adding an attributes to an entity. As you fill out the fields of the table in the Database Properties window, VEA creates the attributes for the selected entity. This window makes it easy to add multiple attributes to an entity very quickly. Table 10-3 explains the purpose of each field in the Columns table of the Database Properties window. Adding attributes through in-place editing of the table in the Database Properties window is very fast, but it does not address every property that an attribute can have. The Conceptual name of an attribute can only be set through the Column Properties dialog which is invoked via the Edit button. To set the conceptual name of an attribute, highlight the attribute in the Database Properties table and click the Edit button on the right side of the window. Performing these actions on the screen in Figure 10-8 would invoke the dialog box shown in Figure 10-9. Figure 10-9: Column Properties Dialog. By default, the conceptual name of the attribute will be the same as the physical name. To make the conceptual name different from the physical name, clear the Sync names when typing checkbox, and type "Patient Number" in the Conceptual Name field . Many organizations have strict physical naming conventions that dictate the use of class words (for instance, "Nr" for all numbers ) and forbid embedded spaces in the database. By setting physical and conceptual names independently, the modeler conforms to physical naming standards while retaining the more readable conceptual name for user reviews. The Allow NULL values checkbox in Figure 10-9 is in effect a "mirror image" of the Req'd checkbox in Figure 10-8. Checking the Allow NULL values checkbox in Figure 10-9 has the effect of making the attribute optional, while checking the Req'd checkbox in the Database Properties (Figure 10-8) makes the attribute mandatory. The Data Type pane of the Column Properties dialog allows the user to edit the data type for an attribute. Data Types are dealt with extensively in section 4.10 of chapter four and thus only receive a short explanation here. The radio button at the bottom of the pane switches the mode of the pane between portable and physical data types. In Figure 10-10, the radio button has been set for portable data types. Figure 10-10: Data Type Pane (Portable Data Types). Portable data types are generic, while physical data types are specific to a particular DBMS product. 
The mapping from portable to physical data types is determined by your choice of database driver. For instance, the Numeric, Signed integer, Small portable data type shown in Figure 10-10 will generate a SMALLINT column when using the Microsoft SQL Server driver. The same portable data type will generate a NUMBER column when using the Oracle driver. Many modelers know their target database, and prefer to work directly with physical data types. Figure 10-11 illustrates the results of changing the radio button selection for the Data Type pane of the Column Properties window. Figure 10-11: Data Type Pane (Physical Data Types). You cannot use this pane to edit a physical data type. If you want to change a data type, click on the Edit button and use the pop up dialog box, as shown in Figure 10-12. Figure 10-12: Editing physical data types. The drop down list is populated with all the data types that are supported by the DBMS driver you have chosen . This window also allows one to select the Identity and Rowguidcol properties for data types that support these features. Constraints are rules that restrict the population of the schema to allowable sets of data. Previous chapters covered the ORM constraints, which are more comprehensive than the constraints supported by Entity Relationship diagrams and source models. This section addresses only four constraints, the Primary Key, Alternate Uniqueness, the Mandatory (also called Not Null ) constraint, and the Foreign Key constraint. The most fundamental constraint is the Primary Key constraint, which ensures that each row in an entity is uniquely identifiable. Section 10.4 showed how to apply a primary key constraint by checking the PK checkbox in the Database Properties window (see Figure 10-8). You can also add (or edit) the primary key of an entity in the Primary ID pane of the Database Properties window, as shown in Figure 10-13 Figure 10-13: Database Properties window, Primary ID pane. The Primary ID pane allows you to choose from the available attributes to construct a Primary key. VEA has special generation and physical naming options that are described in chapter 12. Sometimes there is more than one way to uniquely identify a particular row of data in an entity. The Country entity is a good example. The ISO (international standards organization) assigns a two letter code to each country. Because these codes are unique, and Country_code is the most common way to refer to a country (at least for the purposes of this model), the Country_code attribute is the primary key of the Country entity. However, it is important to keep Country Names from being repeated within the entity. Table 10-4 shows an example of the Country entity improperly populated with repeating country names. Rows one and two cannot both be true, because two countries with different codes should not use the same name . The data modeler should apply a rule so that the value of the attribute CountryName cannot repeat within the entity Country, even though CountryName is not the primary key of the entity. The Alternate unique constraint is designed for these exact situations. The alternate unique constraint is enforced through a unique database index. VEA's database model diagram does not actually use the words "Alternate Unique Constraint." To apply the constraint in VEA, you must apply a unique index. Follow these steps to apply a unique index: Figure 10-14: Creating a new index. Figure 10-15: Making the index unique. 
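With the Microsoft SQL Server driver chosen earlier, the effect of such a unique index corresponds to DDL along the following lines (a hand-written sketch for illustration, not the exact code VEA generates; the index name is arbitrary):

CREATE UNIQUE INDEX IX_Country_CountryName
    ON Country (CountryName);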
The not null constraint roughly corresponds to the ORM simple mandatory constraint that is discussed in section 5.3, and has the effect of requiring a value to be supplied for the attribute to which it is applied. To create a not null constraint, use either the Req'd field of the Database Properties window, shown in Figure 10-8, or the Allow NULL values checkbox shown in Figure 10-9. Section 5.3 also discussed disjunctive mandatory constraints, which involve multiple roles and cannot be enforced by declaring an individual attribute to be required (not null). Enforcement of a mandatory disjunctive constraint involves the creation of database code. If you use the ORM solution, VEA will write the code for you. If you create the logical model directly you will have to write the code yourself. In certain cases, the enforcement of even simple mandatory constraints can require database code. Regardless of the source of your database code, editing and managing database code is covered in chapter 13. A foreign key constraint is a relational implementation of a conceptual subset constraint. Consider the relationship line between Country and Patient shown in Figure 10-1. The relationship line is a graphical notation for the fact type " Patient was born in Country ." The information in the schema will be inconsistent if a user is allowed to record a non-existent country as the birthplace of patient, or to delete country information for a country that is recorded in the Patient entity. The explanation that follows makes use of the sample populations in Table 10-5 and Table 10-6. The set of valid country codes recorded in the Country entity shown in Table 10-5 is { ˜CA , ˜GB , ˜FR , ˜US }. The set of country codes associated with patients 101 “104 in the Patient entity (Table 10-6) is { ˜CA , ˜GB , ˜US }. Every element in the second set is contained in the first, set, so patients 101 “104 present no constraint violations. However, patient 105 was born in Zambia, and the code for that country (ZM), does not exist in the set recorded in the Country entity. To maintain data integrity, the row for patient 105 cannot be inserted into the Patient entity unless a row for Zambia is first added to the Country entity. Figure 10-16: Dropping a relationship shape onto the drawing surface. Conversely, deleting a row of data from the Country entity may also violate the subset constraint. For example, deleting the row for the United States in the Country entity will have the effect of "orphaning" the rows for patients 102 and 103. A foreign key constraint protects against both insert and deletes that would violate the subset rule. Adding a relationship between two entities will automatically create a foreign key constraint when VEA generates DDL for your model. If you want to follow along with the upcoming example, first add a Country entity to your sample model. To add a relationship in VEA, do the following: Figure 10-17: Attaching the relationship to the Country entity. Release the mouse button, and the relationship will be attached to Country as shown in Figure 10-18. Figure 10-18: Relationship successfully attached to the Country entity. Perform the same steps to attach the other end of the relationship to the Patient entity. Successful attachment of the second end of the relationship will yield a screen similar to the one in Figure 10-19. Figure 10-19: Relationship successfully attached to both entities. The attribute Country_code has been automatically migrated to the Patient entity and marked as a foreign key. 
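When DDL is later generated for the model, the relationship becomes a foreign key constraint roughly like the following (a hand-written SQL Server sketch for illustration only; data types, lengths, and constraint names are assumed, and other columns are omitted):

CREATE TABLE Country (
    Country_code CHAR(2) NOT NULL PRIMARY KEY,
    CountryName  VARCHAR(50) NOT NULL
);

CREATE TABLE Patient (
    PatientNr    INT NOT NULL PRIMARY KEY,
    Country_code CHAR(2) NULL,
    CONSTRAINT FK_Patient_Country FOREIGN KEY (Country_code)
        REFERENCES Country (Country_code)
);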
Note that the migrated attribute is marked with the symbol FK. The second foreign key in a table would be marked FK2, the third, FK3 and so on. An entity may exist in a model without being displayed on a diagram. The same entity may be displayed in many places on the diagram, on the same or different pages. You can delete an entity from the drawing window by selecting it, and then pressing the Delete key. This invokes a message box with the prompt "Remove selected item from the underlying model?" If you answer Yes the entity is removed from the model, so every shape depicting it on the diagram is also removed. If you answer No the selected shape is only removed from the diagram you are viewing. Because the entity still exists in the model, any shapes depicting the entity elsewhere on the diagram remain unchanged. The Tables and Views window contains a list of all entities in a model, regardless of whether they are displayed. To display an entity, drag its icon from the Tables and Views window to the drawing surface. Right clicking on any entity in this window allows you to sort the entities alphabetically . To ensure that an entity you intended to delete is truly gone, sort the list in the Tables and View window, and check for the entity in this list. Invoke the Database Properties window by clicking on the entity containing the attributes you wish to delete, and select the Columns category on the left side of the window. Highlight the attribute you wish to delete, and click the Remove button, as shown in Figure 10-20. Figure 10-20: Ready to delete the attribute FamilyName . Unlike entities, removal of an attribute is immediate and complete, with no confirming dialog box. If you accidentally remove an attribute, use the undo command to restore it.. Chapter 7 discussed VEA's use of the Project construct for mapping ORM models to logical models. Projects also allow the modeler to merge multiple source models into a single logical schema, as part of the build process. The source models contained in a project are not restricted to ORM models. In fact, you can mix ER and ORM source models in the same project and still build a single logical schema. Earlier in this chapter, you created a database model diagram without a source model. When using source models, the modeler does not directly create a database model diagram. Instead, VEA builds the database model diagram from the source model(s). The process of building the database model diagram is known as a project build. The biggest functional difference between an ER source model and a database model diagram revolves around projects. An ER source model can (in fact must) be built as part of a project, while the database model diagram cannot be part of a project. If a project contains more than one source model, those models can use external objects. An external object is a pointer to a natively defined object in another source model. For the external reference to be successfully resolved, both the model referencing the object and the model with the definition of the object must be included in the same project. Using external objects allows the modeler to separate the Universe of Discourse into smaller units that can be modeled individually, and then merged back into a single logical schema. The same stencils, shapes and windows you used to create a database model diagram are used to create an ER source model. To avoid repetition, this section will only give detailed explanations for operations that are unique to source models. 
Operations that have already been covered will be referred to but not be explained in this section. For this example, you will need one ER source model and one ORM source model. Create the first source model by choosing File > New > Database > ER Source Model from the main menu. Add Patient and Country entities to the new model. To save time, you do not have to add lots of attributes to each entity. However, make sure that each entity includes at least the primary key attribute. Also add the relationship between Patient and Country . Save the new model to a file called Patient_ER_Source.vsd, and then remove it from memory by closing the document. Create the second model by choosing File > New > Database > ORM Source Model from the main menu. In this second model, you will create facts about the PapSmear object and relate it to the Patient object that is already defined in Patient_ER_Source.vsd. Add the facts and constraints shown in Table 10-7 to your ORM source Model. The first column in the table shows the fact as you should type it into the Fact Editor. The second column tells you how to answer Constraint Question #1 on the Constraints tab of the Fact Editor (don't answer Constraint Question #2, the default is fine). The third column shows the constraint verbalization. Compare the text in this column with the output of the VEA verbalizer to confirm that you have entered the proper constraints. Use the following reference modes for the objects in your model: Patient(nr), PapSmear (nr), and Date (mdy). Assign the MS SQL server physical data type "datetime" to the Date object. Assigning the other data types is not important for this example. At this point, your ORM source model should look like the model in Figure 10-21. Figure 10-21: Patient ORM source Model. Your ORM model makes use of a Patient object, but the entity Patient has already been defined in the model Patient_ER_Source.vsd. Both the ER and ORM source models are going to be included in the same project, so one of the models must have the natively defined Patient object, and the other model must use a pointer to the natively defined Patient object. This pointer is called an external object. In this example, you will make Patient an external object in the ORM source model. Click on the Patient entity to invoke the Database Properties window, as shown in Figure 10-22 and select the External checkbox. Figure 10-22: Making Patient an external object. Selecting the External checkbox tells VEA that the object is a pointer to the fully defined object in another model. As a visual cue, the oval that represents the object will be shaded with gray diagonal lines. It may be useful to think of the object as being "grayed out" because the actual definition of the object Patient resides in another model. The reference mode is automatically removed from external objects. Since the object is defined in another model, there is no way for this ORM source model to be aware of the reference mode before the project is built. Also note that the Kind list box immediately to the left of the External checkbox has been grayed out. The Kind list box normally stores information denoting whether the object is an entity or a value. However, since this object is defined externally, the Kind (entity or value) is unknown until the project is built. Save your ORM source model to a file named Patient_ORM_source.vsd, and close the document to remove it from memory. You are now ready to create and build a database project. 
To build a project, do the following: Figure 10-23: Adding a document to a project. Figure 10-24: Patient ER source model successfully added to project. Figure 10-26: Project Migrate Prompt. Once the build completes, the Tables and Views pane should show the three entities Patient, Country and PapSmear. Dragging the entities onto the drawing surface will reveal the foreign key relationships, as shown in Figure 10-25. Figure 10-25: Project Successfully Built.
https://flylib.com/books/en/1.100.1/creating_a_basic_logical_database_model.html
CC-MAIN-2022-21
en
refinedweb
Introduction Azure App Service is a product offered by Microsoft Azure that offers the deployment of web applications in a Docker container on an Azure Virtual Machine. In its current configuration, the product comes in two flavors: - Deployment from code - Deployment from a Dockerfile In the case of "Deployment from code", the material inside of the folder to be deployed is zipped and automatically packaged into a Dockerfile. For more information: Quickstart: Create a Python app - Azure App Service | Microsoft Docs How to use the Web Licensing Service with Azure App Service Since machine-based licensing is not available in a dockerized environment, the Docker container has to receive a token that authorizes it to run Gurobi. The Web Licensing Service (WLS) provides such a token that is renewed in fixed intervals. In case of deployment from a Dockerfile, it is quite straightforward to either put the gurobi.lic file in the default location or to choose an alternative location and set the GRB_LICENSE_FILE environment variable (see here). However, if you deploy directly from code, then you do not have access to the underlying configuration options used in the Dockerfile. Therefore, you should set the license parameters via the API. For Python, this may look as follows: import gurobipy as gp env = gp.Env(empty=True) env.setParam('WLSACCESSID',<access-id from gurobi.lic file>) env.setParam('WLSSECRET', <secret from gurobi.lic file>) env.setParam('LICENSEID', <your license id>) env.start()
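Once started, the environment can be passed to the model constructor so that optimization runs under the WLS token. A minimal sketch (App Service exposes application settings as environment variables, so one option is to read the credentials from there instead of hard-coding them; the names below are placeholders):

import os
import gurobipy as gp

env = gp.Env(empty=True)
env.setParam('WLSACCESSID', os.environ['WLSACCESSID'])
env.setParam('WLSSECRET', os.environ['WLSSECRET'])
env.setParam('LICENSEID', int(os.environ['LICENSEID']))
env.start()

model = gp.Model("example", env=env)  # this model now uses the WLS-backed environment
# ... add variables and constraints, then call model.optimize()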
https://support.gurobi.com/hc/en-us/articles/4408802848401-How-can-I-use-Web-Licensing-Service-WLS-with-Azure-App-Service-
CC-MAIN-2022-21
en
refinedweb
Add tracing header

bug:21195272
Change-Id: I520de9fee7fc40d0570d6bef450d756ce42a1462

diff --git a/ndk/platforms/android-M/include/android/trace.h b/ndk/platforms/android-M/include/android/trace.h
new file mode 100644
index 0000000..e42e334
--- /dev/null
+++ b/ndk/platforms/android-M/include/android/trace.h
@@ -0,0 +1,55 @@
+/*
+ *
+ */
+#ifndef ANDROID_NATIVE_TRACE_H
+#define ANDROID_NATIVE_TRACE_H
+
+#include <stdbool.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Returns true if tracing is enabled. Use this signal to avoid expensive computation only necessary
+ * when tracing is enabled.
+ */
+bool ATrace_isEnabled();
+
+/**
+ * Writes a tracing message to indicate that the given section of code has begun. This call must be
+ * followed by a corresponding call to endSection() on the same thread.
+ *
+ * Note: At this time the vertical bar character '|' and newline character '\n' are used internally
+ * by the tracing mechanism. If sectionName contains these characters they will be replaced with a
+ * space character in the trace.
+ */
+void ATrace_beginSection(const char* sectionName);
+
+/**
+ * Writes a tracing message to indicate that a given section of code has ended. This call must be
+ * preceeded by a corresponding call to beginSection(char*) on the same thread. Calling this method
+ * will mark the end of the most recently begun section of code, so care must be taken to ensure
+ * that beginSection / endSection pairs are properly nested and called from the same thread.
+ */
+void ATrace_endSection();
+
+#ifdef __cplusplus
+};
+#endif
+
+#endif // ANDROID_NATIVE_TRACE_H
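For orientation, this is roughly how native code would call the API declared above. The usage sketch is not part of the commit; it assumes you are compiling against API level 23 (android-M) or newer and linking against the NDK's libandroid library, where these symbols are expected to live.

#include <android/trace.h>

static void render_frame(void) {
    ATrace_beginSection("render_frame");   // must be paired with ATrace_endSection() on the same thread

    if (ATrace_isEnabled()) {
        // Only do extra, tracing-specific work (e.g. building detailed labels
        // or emitting nested sections) when a trace is actually being recorded.
    }

    // ... the actual per-frame work goes here ...

    ATrace_endSection();                    // closes the most recently begun section
}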
https://android.googlesource.com/platform/development/+/eafa9c3%5E%21/
CC-MAIN-2022-21
en
refinedweb
A Gentle Introduction to PyTorch Library for Deep Learning

The following tutorial assumes some basic knowledge of the Python programming language and high school mathematics. No prior knowledge of deep learning is required. This article covers the basic knowledge and workings of PyTorch required to get started with deep learning. Follow along with the tutorial to get hands-on experience.

Google Search Trends shows that the popularity of the PyTorch library is relatively higher compared to TensorFlow and Keras. PyTorch is built on Python and the Torch library, and it supports computations of tensors on Graphical Processing Units. It is currently the most favored library in the deep learning and artificial intelligence research community.

Now let’s get hands-on with PyTorch! We are using Jupyter Notebooks to run our code. We suggest following the tutorial on Google Colaboratory. It’s a Jupyter notebook environment that requires no setup to use and runs entirely in the cloud. We are also able to use a GPU for free. You can check out this link for some guidance on using Colab.

Tensors

PyTorch is a library for processing tensors. A tensor is a fundamental unit of data. It can be a number, vector, matrix, or any n-dimensional array. It is similar to NumPy arrays. Before getting started we should import the torch module as follows:

import torch

Creating Tensors

Creating a tensor t1 with a single number as data:

# Tensor with a single number
t1 = torch.tensor(5.)
print(t1)

Output: tensor(5.)

5. is shorthand for 5.0. It is used to indicate to PyTorch that the tensor is a floating-point number. We can verify this using tensor.dtype. If you are using a Jupyter notebook then you can directly input the variable in the cell and run it to see the results.

print(t1.dtype)

Output: torch.float32

Similarly, we can create tensors of vector type:

# Tensor with 1D vector
t2 = torch.tensor([1, 2, 3., 4])
print(t2)

Output: tensor([1., 2., 3., 4.])

From the above output we can see that even if only one of the elements of the vector is a floating-point number, the tensor will convert the data type of all the elements to float.

Now let’s create a 2D tensor:

# Matrix
t3 = torch.tensor([[1., 2, 3], [4, 5, 6], [7, 8, 9]])
print(t3)

Output:
tensor([[1., 2., 3.],
        [4., 5., 6.],
        [7., 8., 9.]])

A 3D tensor:

t4 = torch.tensor([
    [[10., 11, 12], [13, 14, 15]],
    [[16, 17, 18], [19, 20, 21]]
])
print(t4)

Output:
tensor([[[10., 11., 12.],
         [13., 14., 15.]],
        [[16., 17., 18.],
         [19., 20., 21.]]])

If we observe, these tensors are similar to NumPy arrays. We can check the shape of the tensors using tensor.shape. The dimension of the array is the length of the returned shape. So the shapes of the tensors we defined above are:

print(t1.shape)

Output: torch.Size([])

As t1 is just a number, its dimension is 0.

print(t2.shape)

Output: torch.Size([4])

As t2 is a vector, its dimension is 1.

print(t3.shape)

Output: torch.Size([3, 3])

As t3 is a matrix of size 3 x 3, its dimension is 2.

print(t4.shape)

Output: torch.Size([2, 2, 3])

As t4 has stacked two tensors of 2 x 3, its dimension is 3.

Some Tensor Operations and Computing Gradients

We can perform operations on tensors with the usual arithmetic operators. Also, tensors have a special ability to compute gradients or derivatives of a given expression with respect to all the independent variables.

Let’s look at an example. Define some tensors and initialize some values:

x = torch.tensor(3.)
w = torch.tensor(4., requires_grad=True)
z = torch.tensor(5., requires_grad=True)
x, w, z

Output: (tensor(3.), tensor(4., requires_grad=True), tensor(5., requires_grad=True))

In the above code snippet, we created 3 tensors x, w, and z with numbers, and for w and z an additional parameter requires_grad set to True. Now let’s perform an arithmetic operation with these tensors:

y = x*w + z
print(y)

Output: tensor(17., grad_fn=<AddBackward0>)

So, according to basic multiplication and addition, we got the output as expected, i.e. y = 3 * 4 + 5 = 17.

AutoGrad

Now let us discuss a unique ability of PyTorch: it can automatically compute the derivative of any expression (y in this case) with respect to the independent variables that have the parameter requires_grad set to True. This is done by invoking the .backward() method on y.

# Compute derivatives
y.backward()

The derivatives of y with respect to the input tensors are stored in the .grad properties of the respective input tensors.

print("dy/dx =", x.grad)
print("dy/dw =", w.grad)
print("dy/dz =", z.grad)

Output:
dy/dx = None
dy/dw = tensor(3.)
dy/dz = tensor(1.)

We can observe the following:
- the value of the derivative of y wrt x is None, as the parameter requires_grad is set to False
- the value of the derivative of y wrt w is 3, as dy/dw = x = 3
- the value of the derivative of y wrt z is 1, as dy/dz = 1

PyTorch with NumPy

NumPy is a popular open-source library used for scientific and mathematical computing in Python. It supports operations on large multi-dimensional arrays and computations based on linear algebra, Fourier transforms, and matrices. NumPy has a vast ecosystem of supporting libraries including Pandas, Matplotlib, and OpenCV. So PyTorch interoperates with NumPy to leverage the tools and libraries of NumPy and then extends the capabilities further.

First, let’s create a NumPy array:

# First create a numpy array
import numpy as np
x = np.array([1, 2., 3])
print(x)

Output: array([1., 2., 3.])

We can convert NumPy arrays to Torch tensors using torch.from_numpy():

# Create a tensor from a numpy array
y = torch.from_numpy(x)
print(y)

Output: tensor([1., 2., 3.], dtype=torch.float64)

We can check the data types using .dtype:

print(x.dtype)
print(y.dtype)

Output:
float64
torch.float64

Now we can convert the PyTorch tensor back to a NumPy array using the .numpy() method:

z = y.numpy()
print(z)

Output: array([1., 2., 3.])

The interoperability with NumPy matters because most of the datasets you will be working with will most likely be processed in NumPy.

Here you might wonder why we are using PyTorch instead of NumPy, since NumPy also provides all the utilities required for working with multi-dimensional arrays and performing large calculations. It is mainly for two reasons:
- AutoGrad: The ability to compute gradients for tensor operations is essential for training neural networks and performing backpropagation.
- GPU Support: While working with massive datasets and large models, PyTorch tensor operations can be carried out on Graphical Processing Units (GPUs), which can reduce computation time by 40x to 50x compared with ordinary CPUs (a short sketch of this follows below).

The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.
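The GPU support mentioned above is not demonstrated in the article; as a rough sketch (not part of the original tutorial), moving work onto a GPU looks like this:

import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

a = torch.tensor([1., 2., 3.], device=device)   # create the tensor directly on the chosen device
b = torch.ones(3).to(device)                    # or move an existing tensor over
c = a + b                                       # the operation runs on the GPU when device is 'cuda'
print(c.device)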
https://www.analyticsvidhya.com/blog/2021/04/a-gentle-introduction-to-pytorch-library/
CC-MAIN-2022-21
en
refinedweb
Description:
------------
This bug was pretty hard to track down. I uploaded a test script to make it clear. These conditions need to be met to trigger the bug:
1. You use a custom error handler
2. You use namespaces
3. You use the "use" statement
4. You require a class within the error handler
5. You have a class that uses a deprecated = & new Classname() statement
Using an autoloader is no workaround because the system thinks the namespaced class is available (class_exists() returns true).

Test script:
---------------
To reproduce the bug several files are needed. I uploaded a test script. You can download it here: No Google account required for download! Please call index.php and you will see an error:
Fatal error: Class 'namespaced_class' not found in requireerror/deprecated_reference.php on line 42
Please have a look at the code to see how it occurs.

Expected result:
----------------
I expect the class that is found by class_exists() and included with a "use" statement to be accessible.

Actual result:
--------------
I get an error:
Fatal error: Class 'namespaced_class' not found in deprecated_reference.php on line 31

When a file is compiled, all relevant info from the use statements is stored in the compiler globals CG(current_import*). When the compilation of the file is finished, these globals are freed. In the given case the use statement in deprecated_reference.php sets an entry in CG(current_import). Due to the deprecated notice during compile time and the error handler, another file (namespaced_class.php) is included and compiled, so CG(current_import) is reset, and after that file has been compiled, freed. Then compilation of deprecated_reference.php continues with the uninitialized CG(current_import) -- effectively, the use statement is forgotten, and so the same error is raised as if the use statement was missing in the first place. Basically, I see two ways to fix this issue: either avoid nested compilation, or cater to it by making the respective compiler globals stacks.
https://bugs.php.net/bug.php?id=65014&edit=1
CC-MAIN-2022-21
en
refinedweb
Using Lopy1 for just LoRa

Greetings, How applicable is using two LoPy1 SOICs to communicate via LoRa, but not as a packet forwarder? To clarify more, I basically would like to know how well it would work to use a LoPy to act somewhat like a gateway, for receiving data from other LoPys via LoRa. However, once I get that data I want to send it out via UART. No forwarding. I have done this with the ping-pong example, but how well would this work on a larger scale? Like 20 LoPys sending to the same "listening LoPy". Thanks!

@robert-hh So with a remote raw LoRa receiver and a lot of nodes sending to it, is it just inevitable that collisions happen? My current "solution" is to send and wait for it to send back an acknowledge, and if not, just wait a random period of time.

@burgeh lora.stats() returns reasonable values after a packet has been sent or received. Avoiding collisions is in general not possible. Looking for another station sending at the same time is more a polite behavior than a robust method. You can only see stations close to your place. Remote stations which can also cause a collision are invisible.

@robert-hh I've tried -120 and -100; both failed every time. One issue I noticed is that the first time I call lora.stats() the rssi value is 0. I tried sending first and then checking it, and it remains at 0. Also, is this is_channelfree() function the only way to avoid colliding? Because if a unit is just far away, it might not be allowed to send, for example.

@burgeh If you init a LoRa instance and call is_channelfree() with various values from the REPL, what do you get? Since the test window is pretty short, it is always possible that noise is seen as an RF carrier. In my quick test with a LoPy4 I had the following results when calling lora.ischannel_free() with various values:
-120dB: always False
-110dB: always False
-105dB: ~50% False, 50% True
-100dB: about 100% True
With a LoPy 1 the 50% point was at -117dB, and -115dB was always True. So it looks as if you have to find the proper value by testing.

I don't quite understand the lora.ischannel_free() function. The first code below is my main "sending node". I then send it to a receiver which is using the second code. It works fine without the lora.ischannel_free(-120) condition, but when I use it, it never actually sends. I only have one node sending, so I would not think it is busy. The rssi value is like -40.

from network import LoRa
import socket
import time
import pycom
import binascii
import machine
from machine import UART

#NODE, tx_power = 20)
s = socket.socket(socket.AF_LORA, socket.SOCK_RAW)
s.setblocking(False)
pycom.heartbeat(False)
pycom.rgbled(0x007f00) # green

receiveFlag = 0

while True:
    time.sleep(8)
    pycom.rgbled(0x0000ff) # blue
    id = binascii.hexlify(machine.unique_id())
    data = lora.stats()
    print("Stats %s" % data)
    if (lora.ischannel_free(-120)):
        s.send("ID: %s" % id)
        print("Sending ID")
        pycom.rgbled(0xFF00ff) # something...
        while (receiveFlag == 0):
            pycom.rgbled(0xFF0000) # RED
            data1 = s.recv(128)
            print("Look for Ack")
            if (b'%s' % id) in data1:
                receiveFlag = 1
                s.send("ID: %s" % id)
                time.sleep(1)
            time.sleep(1)
        pycom.rgbled(0x00ff00) # green
        receiveFlag = 0

This is my receiver.

from network import LoRa:
    data1 = s.recv(128)
    if b'ID' in data1:
        print("ID Received")
        data = lora.stats()
        print("Stats %s" % data)
        print("Content %s" % data1)
        print("Sending Ack")
        reply = "Pong %s" % data1
        s.send(reply)
        pycom.rgbled(0x0000ff) # blue
        time.sleep(.5)
        pycom.rgbled(0x00ff00) # green
    s.recv(128)

Here it is working...
@braulio I think you found that already in this post: How I can get transmition time with LoRaRAW?? @burgeh The receiver can hardly control when two parties are sending at the same time. In that case, the transmission can fail. The sending nodes can check with lora.ischannel_free() before sending, if someone else is sending, and wait until that is finished. Even when you try to implement something like a round robin protocol, some other devices may send. It may need two way connection eventually, but would a single LoPy setup as a "gateway" be able to handle multiple other units sending, and deal with collisions? By deal with collisions I mean, it is receiving data from LoPy A and LoPy B sends, will it wait until A is done and not jumble the messages? @burgeh You can do that with raw LoRa. if you look at the nanogateway examples, the gateway are configures as Raw Lora Devices. If it is just one direction, nodes to "gateway", that should be simple. If you want to implement a kind of downlink, you have to add more elements of a protocol, such that the nodes in your net can tell the types and purpose of messages they receive. Obviously you have the problem of collisions and lost messages, which is not addressed by the basic transport mechanism.
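Putting the advice in this thread together, a node-side "listen before talk" helper might look roughly like the sketch below. It assumes the lora object and the raw LoRa socket s have already been set up as in the code earlier in the thread, and the RSSI threshold is a placeholder that has to be tuned per board, as the measurements quoted above show.

import time

def send_when_clear(s, lora, payload, threshold=-105, tries=20):
    # Poll the channel a limited number of times before giving up.
    for _ in range(tries):
        if lora.ischannel_free(threshold):
            s.send(payload)
            return True
        time.sleep(0.05)   # back off briefly, then check again
    return False           # the channel never looked free; the caller decides what to do next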
https://forum.pycom.io/topic/3646/using-lopy1-for-just-lora
CC-MAIN-2022-21
en
refinedweb
TrapFocus API

API documentation for the React TrapFocus component. Learn about the available props and the CSS API.

Import

import TrapFocus from '@mui/base/TrapFocus';
// or
import { TrapFocus } from '@mui/base';

You can learn about the difference by reading this guide on minimizing bundle size.

Utility component that locks focus inside the component.

Props

The component cannot hold a ref.
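For orientation, a rough usage sketch follows; the open prop and the pattern of giving TrapFocus a single focusable child that can hold the ref are assumptions about the component's usual behaviour, not details taken from this page.

import * as React from 'react';
import TrapFocus from '@mui/base/TrapFocus';

export default function TrapFocusDemo() {
  const [open, setOpen] = React.useState(false);

  return (
    <div>
      <button type="button" onClick={() => setOpen(true)}>
        Open the trap
      </button>
      {open && (
        <TrapFocus open>
          {/* The single child receives the ref, since TrapFocus itself cannot hold one. */}
          <div tabIndex={-1}>
            <button type="button" onClick={() => setOpen(false)}>
              Close and release focus
            </button>
          </div>
        </TrapFocus>
      )}
    </div>
  );
}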
https://next--material-ui.netlify.app/base/api/trap-focus/
CC-MAIN-2022-21
en
refinedweb
RationalWiki:Saloon bar/Archive180 Contents - 1 Pharyngulated again - 2 Thoughts on an idea I have for a peer reviewed history wiki - 3 Into the belly of the beast - 4 Mitt - 5 New articles - 6 Link to forumspace at the top of the page - 7 And you thought PIPA was bad... - 8 Right wing reading comprehension fail - 9 Cagey NASA - 10 Democracy, who needs it? - 11 Nuke your own town - 12 Can the servers handle things? - 13 Probably not worth WiGO-Worlding - 14 Basketball - 15 Brand v Westboro - 16 A RW project proposal - 17 The So-Called "War on Christmas" - 18 This seems counter-productive - 19 Jets - 20 XD - 21 Travel rant - 22 Affirmative Action debate - 23 11 year old vs Eric Hovind - 24 Winner of the 2012 Cringeworthy Awards: North category - 25 My most sincere apologies - 26 Police Brutality and Murder in the USA - 27 Monsanto market share - 28 Timing! - 29 Props to Cuban - 30 ASK and ye shall be answered - 31 Trying to name a certain economic idea? - 32 Native American Genocide - 33 Tea Party Nation: "ROMNEY CAN STILL WIN THIS" - 34 A tad disappointing - 35 Middle east excitement - 36 Has Glenn Beck finally lost his mind? - 37 Santa hat logo (or the war on Christmas part XXVII) - 38 More about Free Speech in the UK - 39 Fundraiser - 40 My college is a circus of incompetence Pharyngulated again[edit] [1] Dr Lightner has since replaced the plushie with an actual goddam possum, but I note the article is being hammered enough the image doesn't load ... Update: the image was actually squid2 being arsey, came good when I kicked it - David Gerard (talk) 15:33, 18 November 2012 (UTC) - It's good that PZ Myers sees us as a useful resource. He's cited us a few times before. We can't expect him to actively promote RW, but it's nice to be on the radar.--"Shut up, Brx." 21:04, 20 November 2012 (UTC) - Too bad P Z Myers is a faggot and Brxbrx, in a complete lack of machismo, stares at pictures of naked adolescents all day long. — Unsigned, by: Fat Atheist With No Machismo / talk / contribs 2012-11-20T21:56:56 Thoughts on an idea I have for a peer reviewed history wiki[edit] I got the idea from seeing one of the things that Citizendium tried to accomplish, and realized how I could make the goal more practical then what they were trying to do. I also got a lot more information on reading their article here. The flaws I saw in Citizendium originally were that they're encyclopedia attempts to be a comprehensive wiki on all subjects, just like Wikipedia, only more reputable. It's also much slower to progress as a consequence of ensuring quality and creating reputation. Their other problem that I saw is their insistance on requiring real names as usernames. There's a lot of knowledgeable amateur historians who aren't going to feel comfortable using their real names who would get automatically excluded, and there are legitimate reasons why people might need to use sockpuppet accounts. I think the fact that certain historical topics would be controversial for some groups of people would be a legitimate reason to allow a sockpuppet or not to use a real name. Keep in mind here that some people's usernames could be known to people they know IRL. Returning to topic, I also don't think that requiring real names really does anything. It doesn't do much for anti vandalism or POV editing because of the controlled registration at Citizendium. I figure that it's seen as reducing the chances of people having flame wars or other incivilities that happen on Wikipedia, but that doesn't actually work. 
People still can't see each other, so the depersonalization effects still apply. I've know of people in the same community going off on each other or just being @$$holes in email, then being completely agreeable IRL. Also from our article, it seems that Larry Sanger is also a total ass, which shouldn't be hard to avoid. Here's what I plan to do. I originally thought about having something like the Citizendium approach, but just narrowing the topic to history. Of course, that would be a bad idea. What I'm thinking now is to maintain open registration and allow people to use whatever usernames they want, and to allow sockpuppetry for legitimate purposes. We would control editing of articles and the reliability of the site as a source by restricting editing of the main namespace to a special editor group. Everyone would be allowed to edit all other namespaces, and we'd add a draft namespace and tab to the article namespace, similar to the talk namespace. That way, anyone would be able to suggest changes, and to possibly demonstrate them in a draft, and an editor could then make the change. Any user that proved unbiased and reliable would be allowed to be promoted. They could also be demoted if any user could demonstrate that they made changes without evidence that were incorrect. This system should be able to make such a website reliable by removing any doubt about whether the revision someone viewed had been tampered with when they looked at it. It would also make sure that what goes into articles can be backed up by sources. I also plan to add in an essay namespace, and then also have a process of peer review for the essays. Approved essays could be indicated as such after a discussion. I think the Author Protect extension could help protect the integrity of the essay pages; I would have to make sure that it could be applied to a single namespace though. As for the other failings of Citizendium, I'd be sure to stay out of any topics I don't know about, and wouldn't go announcing changes after lengthy discussions suggested otherwise. Inquisitor Sasha Ehrenstein des Sturmkrieg Sector 07:21, 19 November 2012 (UTC) Since I see that RationalWiki is a 501-c-3 organization since it's educational, I'm thinking about looking into getting that certification for such a website, which would be beneficial since Dreamhost says that they give free hosting to 501-c-3 organizations. Google also gives them free access to Google Apps, which is what I use for my sturmkrieg.com email; it's basically Gmail, and all I have is the trial version for up to 10 accounts. Inquisitor Sasha Ehrenstein des Sturmkrieg Sector 07:24, 19 November 2012 (UTC) - Top Tip: Don't vastly overestimate your requirements and end up spending 700+ a month on server bills when they could have got what they needed for less than 300 a month as citizendium did for ages. Naca (talk) 07:51, 19 November 2012 (UTC) - Don't even spend that. Before you start up a wiki you really need some sort of community. OK, it's a bit chicken and egg, but if it's only you at the moment then you'd want to start really small. If you are going to use mediawiki software you'll also need to know a fair bit about the "behind the scenes" operation of the thing. It's substantially more complicated than installing and maintaining desktop app.--Weirdstuff (talk) 08:50, 19 November 2012 (UTC) - MediaWiki is faaaaaaaat - David Gerard (talk) 15:58, 19 November 2012 (UTC) I would like some clarification on something quickly. 
The title of this is a peer reviewed history wiki. You then mention that focusing on history would be a bad idea. So, are you going for history centered (everything must be on history), history focused (most articles are on history, but some useful tangents allowed) or without focus (as long as its an academic subject and written academically)? It seems like a good idea, compared to all of the other wikis floating around if nothing else, and I'd be interested in helping out when and where I can all the same, but I wanted some clarification on that point.--Logic and Empricism (talk) 15:52, 19 November 2012 (UTC) - To the first two responses- That does sound like something Citizendium would do. I don't think it will be a problem since Dreamhost gives free hosting to 501(c)(3) organizations, which it should qualify for since I believe it covers educational organizations. Eventually at some point it would probably have to transfer to dedicated hosting, but hopefully by then it would be popular enough to receive donations. I have experience with MediaWiki, though I haven't caught up on all of the really technical stuff. And I agree that it really is a huge chicken and egg issue. I have a wiki for Warhammer 40,000 and it's hard to find contributors. Then again, it's also for fanfiction and stuff that forums cover so it maybe doesn't have a huge audience. - To Hamilton- I must have made a typo. I meant to say that a history wiki would be better than Citizendium from the point of creating a restricted wiki for quality control since it would be a narrower topic and easier to fill up than a comprehensive encyclopedia about everything. Unlike them, I'm also not so anti Wikipedia as to ban importing any of their content to serve as a starting point. (which would solve the chicken and egg issue that people won't edit if they don't see content to get them involved, and you won't have content if no one is contributing) Then again, it might be bad when getting considered as a reputable source if we have notices saying that our content was taken from Wikipedia. Thanks for your interest. I might start a small, beginner version on my own server under a subdomain of sashaweb.net to serve as a starting point, and then transfer it to free Dreamhost non-profit hosting once I've applied and we formally open. There shouldn't be much of a technical barrier to setting up the beginner version since it's mostly the same except for restricting the mainspace editing and adding the extra draft tab. The secondary role is to look into using Author Protect on a single namespace. - Inquisitor Sasha Ehrenstein des Sturmkrieg Sector 17:22, 19 November 2012 (UTC) - Let me know when you have something up and running and I'll try to contribute what I can. I think if nothing else I can pull some crap out of my ass on history of mathematics or economics. --Logic and Empricism (talk) 17:36, 19 November 2012 (UTC) - Hi everybody! You may be being a bit optimistic about getting donations Inquisitor. This is the Statistics page from my wiki. 20,000 unique visitors last month - one contribution all year if my memory serves me correctly. (From an RW editor by the way - thanks G.). If all my visitors gave me only 50 cents/pence I could retire! Also visitors does not easily translate into editors unfortunately. --Bob"What can be asserted without evidence can also be dismissed without evidence." 
18:26, 19 November 2012 (UTC) - Alright, I'll set up a "prototype" version where people can add content that we can use to get started and that can help us estimate how much interest there is. Once I get everything in order I can then move it to the new host. As for donations, I figure that most of the people who visit would not make donations. If we get extremely popular, not quite as large as Wikipedia but close as an alternative, we'll have enough users that some of them might make donations. We won't need them until we get large enough that we need to move to a dedicated server anyway. Inquisitor Sasha Ehrenstein des Sturmkrieg Sector 20:32, 19 November 2012 (UTC) - What exactly do you mean by "peer reviewed" in this context? Peer reviewed by whom?--Weirdstuff (talk) 20:46, 19 November 2012 (UTC) - I mean that the users will discuss changes to the article (or make them to the article draft) and then approved users will make the changes to the article. For creating an article, the draft will be made first (in the draft namespace, unlike Citizendium that has drafts in the main namespace) and then once reviewed by users and preferably an expert, the article will be added to the mainspace. Inquisitor Sasha Ehrenstein des Sturmkrieg Sector 00:02, 20 November 2012 (UTC) - If that is your plan then you are really going to get stuck on the chicken-egg problem mentioned above. Not only are you going to have to recruit users but you are also going to have to recruit a second level of expert users who will approve the drafts made by the first set. But there again, for all I know you have got lots of history-expert friends and colleagues lined up to work on it. If so, then no problem.--Weirdstuff (talk) 08:08, 20 November 2012 (UTC) Does my userpage accurately describe Citizendium? Inquisitor Sasha Ehrenstein des Sturmkrieg Sector 07:02, 20 November 2012 (UTC) - Have you had a look at WikiSage? It seems fairly similar to the sorts of ideas you're suggesting above. If nothing else, it might help you to clarify exactly what the differences are. Peter Jackson 11:15, 20 November 2012 (UTC) - (aside) Just BTW Citizendium's watching ... always watching. Scream!! (talk) 11:59, 20 November 2012 (UTC) - They would be more enlightened reading the Citizendium article than this thread.--Bob"I thought this was supposed to be "Rational" Wiki?." 13:39, 20 November 2012 (UTC) - I wonder if they've seen my userpage. Now it's only a matter of time before they start claiming that this wiki is run by Jewish Bolsheviks or something! Seemed to happen last time with Sturmkrieg really fast. What makes it really dumb is that all signs point to it being SPESS NAZIS living in SPESS. Go here and tell me if the first thing you think of has anything to do with Jewish Bolshevism: . Inquisitor Sasha Ehrenstein des Sturmkrieg Sector 15:21, 20 November 2012 (UTC) Into the belly of the beast[edit] I'm going to Colorado Springs this week to visit my sister and her family for Thankgiving. I was checking my hotel's web site for nearby attractions, hoping to find a place to take my family for dinner. The number two thing nearby they listed? Focus on the Fambly. MDB (the MD is for Maryland, the B is for Bear) 13:46, 19 November 2012 (UTC) - Whee! Sounds like a fun place! Wait... if you're visiting family, why you in a hotel? --PsyGremlinFale! 13:53, 19 November 2012 (UTC) - Our parents are already there, and they're taking up the guest space. - Oh, and I just checked Google Maps. My hotel is right off the Ronald Reagan Highway. 
Shoot me now, shoot me now... MDB (the MD is for Maryland, the B is for Bear) 13:55, 19 November 2012 (UTC) - The Ronald Reagan Highway is a memorialization, which is pretty common on many lengths of highways in Colorado (I believe US 285 south from Denver is named after a Navy Seal who died in combat). No one I know has ever called it "The Ronald Reagan Highway", but rather "I-25". -- Seth Peck (talk) 18:56, 19 November 2012 (UTC) - Was the list automated? I cannot imagine an office complex being much fun, even for hard-core Christians. --TheLateGatsby (talk) 13:59, 19 November 2012 (UTC) - Nope. MDB (the MD is for Maryland, the B is for Bear) 14:09, 19 November 2012 (UTC) - So you won't be packing the silver lycra and feather boa for cocktail hour then? --PsyGremlin말하십시오 14:13, 19 November 2012 (UTC) - I suppose I'll just stick to my little black dress. MDB (the MD is for Maryland, the B is for Bear) 15:20, 19 November 2012 (UTC) - WfG lives in them thar parts (well Colorado), perhaps she could help. SophieWilder 16:02, 19 November 2012 (UTC) - I doubt that she'd have anything which would fit. ГенгисYou have the right to be offended; and I have the right to offend you. 16:07, 19 November 2012 (UTC) - My joking aside, it's not that bad. I'm flying into Denver Wednesday and should be in Colorado Springs by mid-afternoon. I depart Friday morning. So, I'll be there about 48 hours, and pretty much that entire time will be with my family. And while they're not the lefties I am, they're hardly FotF types, either. MDB (the MD is for Maryland, the B is for Bear) 16:05, 19 November 2012 (UTC) - Flying in? Then you'll miss the wonder of Ronald Reagan Highway, but you could fly from DC's Ronald Reagan airport. why did they name an airport after the guy who fired all the air traffic controllers? SophieWilder 23:18, 19 November 2012 (UTC) - I remember spending a Labor Day trip there a few years ago. Couldn't go up Pike's Peak because the others didn't think we needed a reservation, but the Manitou Cliff Dwellings and Garden of the Gods National Park were fun. Got a couple of great wool blankets with Native American design on them from the gift shop. Only spent a night and a day there, but I would dearly love to go back for a full week and really take it all in. Of course, I went in the tail end of summer, so take from that what you will. --CoyoteSans (talk) 18:50, 19 November 2012 (UTC) - At the restaurant in Capitol Hill where I regularly attend brunch, they have a tourist brochure rack, and FotF's headquarters is one of the things I always see in there. Pretty ridiculous, especially given that it's about an hour away. I'd say we should meet up for a beer, but it seems like your schedule is pretty tight. -- Seth Peck (talk) 16:54, 19 November 2012 (UTC) - The Springs is ok in my book...I dunno why the Religious Right turned it into their fiefdom but I personally did everything I could to drive the buggers out of Virginia so I guess they headed for Colorado. Um, light up a few for me (legally) while you're there, will ya? Secret Squirrel (talk) 01:54, 20 November 2012 (UTC) - You're in VA, Squirrel? What part? MDB (the MD is for Maryland, the B is for Bear) 17:09, 20 November 2012 (UTC) - I'm actually in West (by dog) Virginia, and have also lived in Virginia. Secret Squirrel (talk) 03:09, 21 November 2012 (UTC) - My part of the country. Love it! 
and Secret: Several years ago, the mayor and city council of The Springs intentionally recruited a number of religious organizations to the city via tax incentives and lots of begging. SirChuckBA product of Affirmative Action 08:08, 20 November 2012 (UTC) Mitt[edit] Let's just say I don't want to be in his shoes right now. Are there any other past photos of candidates post-election, returning back to their normal lives? Osaka Sun (talk) 08:42, 20 November 2012 (UTC) - For many candidates "normal life" is still politics. Kerry for example is still a senator AFAIK, so presumably there are plenty of photographs of him in business-as-usual mode after the election. Obviously it does suck to be followed everywhere by the press, but that will fade, he's not an attractive fifteen year old girl (maybe that would be seventeen in the US?) where it makes commercial sense to lurk in the bushes until you get either a bad hair day picture or a nip-slip for the "newspapers". - Mitt could retire entirely at this point, he's easily rich enough to retire and go into philanthropy, but (and maybe someone with a history of key US political figures and their charitable giving will correct me) a Republican presidential candidate doesn't strike me as the right person to hit up for donations to your AIDS charity, African refugee support or orphanage. Maybe he'll put some money into medical research when he gets old and scared, but otherwise I expect to see Mitt devote himself to getting even more rich by telling not-so-rich folks that they can be like him. - It's a winning formula. Consider colossal business failure Jean-Louis Gassée, responsible at Apple for the easily forgotten Newton Messagepad, he then went on to run the company behind BeOS and the BeBox computer which sank without trace after the dotcom boom, then he headed up the division that developed the never-used PalmOS 6, and then he went on to help Nokia dismantle their successful company and become a Microsoft subsidiary in all but name. All the way through that he was handsomely compensated and attracted offers to speak and teach others how to be just as... successful? 82.69.171.94 (talk) 10:12, 20 November 2012 (UTC) New articles[edit] I would like the community's input on the new article, Chronicles (magazine). I'm comfortable rating it silver, but I'd like to know what, if anything, can be done to bring it up to gold. I've also written League of the South, though it's not nearly as good. Radioactive afikomen Please ignore all my awful pre-2014 comments. 10:59, 20 November 2012 (UTC) - If Chronicles is so fond of burning the evidence, every link on the page needs capturing - David Gerard (talk) 16:37, 20 November 2012 (UTC) - And done - David Gerard (talk) 17:32, 20 November 2012 (UTC) - Thank you; I had been wondering if the links ought to be captured. Radioactive afikomen Please ignore all my awful pre-2014 comments. 20:45, 20 November 2012 (UTC) - I think the article on Chronicles is very very good. It is thorough and well-cited, interesting and informative. Thank you for writing it--"Shut up, Brx." 03:45, 21 November 2012 (UTC) Link to forumspace at the top of the page[edit] Do we need to link to forumspace from here? The forums are nearly dead, just used now by pet-subject-pushers. SophieWilder 13:28, 20 November 2012 (UTC) - Agree. Evil fascistoh noez 13:30, 20 November 2012 (UTC) - How often does anyone see the top of the page? I normally go direct to a section from RCs Scream!! 
(talk) 16:08, 20 November 2012 (UTC) - I come here from the sidebar quite often, if I've been reading an article or something. SophieWilder 16:18, 20 November 2012 (UTC) - I use the side bar to check if there has been any activity in the forum. Saves me like 5 seconds a day, easily. --Logic and Empricism (talk) 18:03, 20 November 2012 (UTC) And you thought PIPA was bad...[edit] Ladies and Gentlemen, I give you yet another nice attempt to take away civil rights in hopes of security. Anyone know how to fire up the blogosphere on this one? 147.138.90.129 (talk) 01:41, 21 November 2012 (UTC) Right wing reading comprehension fail[edit] So this story How Obama can be stopped in the electoral college, is making the rounds in the right wing circles claiming that the EC requires a 2/3 majority of states to vote to form a quorum. Actual text of the relevantmtoulouse (talk) 18:04, 20 November 2012 (UTC) - Take home message for me, at least, being that the "founder" of the Tea Party and a self proclaimed defender of the constitution can't even read and understand what is written. It must be a real bitch to be a literalist constructionist and not be able to comprehend the basics of what you read. Tmtoulouse (talk) 18:09, 20 November 2012 (UTC) - Its exactly the same as the tea party's weird ideas about what constitutes a natural born citizen. I think they just get over excited about any crazy legal theory that might have a chance to stop the black guy from being president. --JeevesMkII The gentleman's gentleman at the other site 18:35, 20 November 2012 (UTC) - Editor’s note, Nov. 20, 2012: Since this column was posted it has been discovered that the premise presented about the Electoral College and the Constitution is in error. ГенгисOur ignorance is God; what we know is science. 18:38, 20 November 2012 (UTC) - Reminds me of Hurlbut's blog, where the answers to quash liberalism lie in some forgotten part of the constitution, which for some reason only a washed up blogger has noticed.--"Shut up, Brx." 18:46, 20 November 2012 (UTC) - Hurlbut's blog is where you go when you decide WND is "a bunch of liberal pantywaists." MDB (the MD is for Maryland, the B is for Bear) 18:54, 20 November 2012 (UTC) I've noticed more and more the US Constitution being treated like the holy bible, as in that nutters are quote mining and stringing it together to make it say exactly the opposite of that it does. --Revolverman (talk) 22:51, 20 November 2012 (UTC) - That trend died down a little bit after the newly inducted Republican controlled congress decided to hold a reading of the constitution, which excluded the three-fifths compromise. This was lampshaded in the press and following that the idea that the Founding Fathers were prophets and the constitution scripture became less prevalent.--"Shut up, Brx." 23:56, 20 November 2012 (UTC) - To their credit they left it out because they only read the parts of the constitution that are still binding law, which is appropriate. DickTurpis (talk) 00:52, 21 November 2012 (UTC) - What is truly ammusing to me, is that these **PATRIOTIC!!!!** Americans watched a majority of the people, and a majority of the states vote Obama in. It is somehow "patriotic" to ignore that, and go "round the election". How is this even a thought? 
"Well, we didn't win, so let's find a loop hole - and fuck the election system."Godot She was a venus demilo in her sister's jeans 00:47, 21 November 2012 (UTC) - Doing ANYTHING to hijack a properly democratic election is patriotic when you throw a little bit of No True Scotsman into the mix, because that changes the results of the election completely. And when one doesn't work, keep going down the line until the fucking nig-- I mean the President is impeached and a white guy takes his seat as it should be. It's gotta work eventually. Ochotonaprincepsnot a pokémon 10:06, 21 November 2012 (UTC) - As is often the case, the real juice in that original post is in the comments. Some of them are surely Poes, but there's still a load of crazy to enjoy. rpeh •T•C•E• 10:17, 21 November 2012 (UTC) - And the comment count starts going down... ah, WND, removing comments, even, but leaving the completely inaccurate story up. Ochotonaprincepsnot a pokémon 20:50, 21 November 2012 (UTC) Cagey NASA[edit] This shit is killing me...Big News From Mars? Rover Scientists Mum For Now. Tmtoulouse (talk) 03:14, 21 November 2012 (UTC) - Ja, but Time says differently. Acei9 03:16, 21 November 2012 (UTC) - Sorta yes, there is a lot of back and forth on it now, that is why its stupid to leak like this! Tmtoulouse (talk) 03:37, 21 November 2012 (UTC) - "Huffington Post, which is never quite so happy as when it’s hyperventilating..." I lol'd. --PsyGremlinPraat! 03:38, 21 November 2012 (UTC) Democracy, who needs it?[edit] I... don't even know where to start with this. --Revolverman (talk) 10:15, 21 November 2012 (UTC) - The concept of a Tyranny of the majority has a long and august history, but it's funny to watch what happens when wingnuts get their hands on it. Apparently Obama's 332-206 majority in the EC and 3.5% lead in the popular vote doesn't give him a mandate, but W's 271-266 EC win with a loss in the popular vote in 2000 gave him a solid mandate to govern. Go figure. rpeh •T•C•E• 10:29, 21 November 2012 (UTC) - I'm more entained by how much it talks about how bad Democracy is, but is curiously silent on any other form of Government. --Revolverman (talk) 10:32, 21 November 2012 (UTC) - Wow. WND's exclusive commentary is a smorgasbord of nuttery today - Farah, Geller, this idiot, Molotov Mitchell and Monckton. I rather enjoyed their "God told me to go 100 in a 30 zone" story" --PsyGremlinSpeak! 11:01, 21 November 2012 (UTC) - The late Sir Winston was bang on the money with this one. JzG (talk) 13:15, 21 November 2012 (UTC) - Judging by the comments Citizen's Rule Book is required reading in the Tea Party with all the pedantic bullshit about "Republic, not democracy". That or they are just hard-coded to think "Democratic bad" "Republican good" --Revolverman (talk) 03:07, 22 November 2012 (UTC) Nuke your own town[edit] At long last, you can answer the question "What happens if my hometown gets hit by a Minuteman missile?" Radioactive afikomen Please ignore all my awful pre-2014 comments. 21:47, 20 November 2012 (UTC) - Being a nuclear dartboard is about the only thing my town is good for IMO. --Revolverman (talk) 22:45, 20 November 2012 (UTC) - Haul out the Tsar Bomba - that's some impressive destruction right there. --PsyGremlinFale! 23:04, 20 November 2012 (UTC) - Assuming the radius given are correct, its interesting that the bigger the bomb, the less relevant the radioactive fallout is. 
--Revolverman (talk) 23:07, 20 November 2012 (UTC) - I'm happy to learn that if Tempe gets hit by a nuke, there's a good chance I won't be killed- at least until I develop a terrible cancer of one kind or another as a result of the radiation--"Shut up, Brx." 23:59, 20 November 2012 (UTC) - Okay, who do we nuke first? How about, Las Vegas. What next? Seattle! Yeah! Secret Squirrel (talk) 01:30, 21 November 2012 (UTC) - I tried to stick with places I've actually lived in, but that limits me to some hilariously small towns. (Okay, I lied. I totally caved and bombed New York City.) Radioactive afikomen Please ignore all my awful pre-2014 comments. 02:18, 21 November 2012 (UTC) - I dropped the Bomba on the middle of Jo'burg, then I was able to phone friends in Vereeniging and yell "You're dead! You're dead! You're dead" at them. However, as this was at 3am, it didn't have quite the desired effect. --PsyGremlinSprich! 03:41, 21 November 2012 (UTC) - Interesting. Though it would be better if it took local geography into account. Being shielded by a mountain would make a big difference to those geometrically perfect circles.--Bob"I thought this was supposed to be "Rational" Wiki?." 07:54, 21 November 2012 (UTC) - Omahas honestly sorta boring to nuke, and i ussualy just end up hitting the intersections I hate with Crockets. Whats fun is jumping the megatons up so you have a good swath of the earth being burned to cinders by the fireball. Then again; i stopped using this thing after sandy hit NYC and doing full scale and Nuclear Terrorism seemed distasteful.--Mikal Harass Follow 08:09, 21 November 2012 (UTC) - It's a little more complicated than that. The Tsar Bomba, for instance, was reduced in yield by 50% to prevent excess fall out. Fallout also depends on whether it's detonated on the ground or in the air, and at what height. d hominem 16:41, 21 November 2012 (UTC) Apparently, a nuclear bomb of 1 picoton, (1e^-12) would have enough oomph to irradiate a few pigeons on the roof of the Empire State Building. ĴάΛäšςǍ₰ No comment 22:27, 22 November 2012 (UTC) Can the servers handle things?[edit] Is there another spike in traffic? [2] If traffic becomes unprecedented yet again can the servers handle the load? Proxima Centauri (talk) 07:02, 21 November 2012 (UTC) - Infrastructure has been completely revamped from the ground up, yes we are better able to handle substantially more traffic, that is not to say that a sufficiently high enough spike can't cause us troubles but it would have to be several orders of magnitude higher than previous. Tmtoulouse (talk) 07:07, 21 November 2012 (UTC) - I hope those in the know can understand that diagramme. Incidentally we're definitely ahead of CP [3] and people visit CP mainly to laugh. Proxima Centauri (talk) 07:19, 21 November 2012 (UTC) - There are only two people right now who need to understand it and I assure they do. Tmtoulouse (talk) 07:21, 21 November 2012 (UTC) RationalWiki has become steadily more popular over the last 6 months or so. I suspect the rate of increase is going up too, that our Alexa ranking improved during the last 6 months more than in the preceding 6 months. We're also becoming more influential, users come here for information, by contrast readers go to CP for entertainment and parodists go there to provide entertainment. RationalWiki started as a group of frustrated intellectuals who were banned for trying to insert reasonable material into CP. 
We're intelligent but not exceptional yet somehow together we're producing an exceptional website. It's plausible we could become as popular as pharyngula 1 or pharyngula 2 No, we shouldn't pay out yet for servers to handle Pharyngula scale traffic as that may never happen. We should just bear in mind that it could happen. Something special is happening and I don't know where it will end. Proxima Centauri (talk) 09:27, 21 November 2012 (UTC) - Our recent Pharyngula link had no visible effect on the MediaWiki server (apache1), which surprised and pleased me. We have had what appear to be spikes that knock the server over, though they're a lot rarer - there are things that can be done to help protect against that and I want to do them (Tim Starling, lead dev on MediaWiki, listed a pile of things for us to try), it's getting around to it. So we're not bulletproof, but we're better than we were, and it wasn't free but it wasn't hugely expensive to the RWF. Feel free to submit our stuff to Reddit, I want to see what happens :-D - We managed, through forgetting to save them, to lose the Squid logs for mid-Oct to mid-Nov (the logrotate was set to 10 days), we need to make the anonymous versions of those available for others to chew on - David Gerard (talk) 15:04, 21 November 2012 (UTC) - And of course the server got rebooted 20 min ago, after 10 days' uptime. Gah - David Gerard (talk) 21:18, 21 November 2012 (UTC) - Aaah, that explains the Error503! Scream!! (talk) 21:23, 21 November 2012 (UTC) - Yep. This is what I want to try - we'll see how it goes on my personal machine for a bit - David Gerard (talk) 01:18, 22 November 2012 (UTC) Why is there only one internet in the diagram? I think you should add more. After all they're free. --2.39.39.47 (talk) 20:45, 21 November 2012 (UTC) - Yes, where are the 100 Internets I won on Livejournal for making up a Borg version of the Lord's Prayer? This diagram is incomplete! SophieWilder 22:33, 22 November 2012 (UTC) Probably not worth WiGO-Worlding[edit] I did find this quite amusing, though. Ah, irony. DickTurpis (talk) 02:59, 22 November 2012 (UTC) Basketball[edit] I doubt many here are into it, but if you are, this was insanely impressive. How is this guy only Division III? Sam Tally-ho! 06:43, 22 November 2012 (UTC) - I can't really say about netball, I'm a footie fan, but it's generally true that the bigger scores are in the lower divisions. The higher you go the more evenly matched the teams. Innocent Bystander (talk) 10:37, 22 November 2012 (UTC) Brand v Westboro[edit] Seen this? Apologies if it's previously/elsewere mentioned. Scream!! (talk) 18:16, 21 November 2012 (UTC) - Loled at this, and included a brief section on it in the article on ol' Fred.--Logic and Empricism (talk) 19:45, 21 November 2012 (UTC) - The scary thing with this is just how utterly humourless those two fuckwads are. Not a shred of humanity or joy within them. --PsyGremlinSprich! 08:52, 22 November 2012 (UTC) - Well, that's par for the course for religious wingnuts. What struck me is that this was probably the best presentation I've seen of the Westboro folks (which certainly isn't saying much). I've often wondered if most fundie nutjobs actually believe their whole "we're only trying to save you from Hell; this is for your own good" rhetoric, or if they're just using religion as an excuse to justify their hatred of those who are different (and I don't mean Westboro specifically here). I imagine there's a little bit of both in most cases. 
DickTurpis (talk) 15:13, 22 November 2012 (UTC) - I was surprised how much they weren't complete assholes. These are the guys that picket military funerals because "they support gays". I half expected more nuttery. LiberalOfAnUnknownVariant (talk) 10:08, 23 November 2012 (UTC) A RW project proposal[edit] So the RW foundation is a non-profit and thus barred from endorsing any political candidates (in the US, anyway) but the crazy during and after the US elections got me thinking about a voter education project we could undertake for next time. Things like Wingnut Daily's big banner headline "God's Judgement on America" in particular drew my eye. There were just so many predictions of apocalyptic things that would happen if Obama were re-elected, wouldn't it be great if some people were to catalogue all these claims and contrast them during the next election cycle with what actually happened, the message being in essence "don't get fooled again." What do people think about doing something a little beyond just being a wiki on interwebs and moving in to other media? --JeevesMkII The gentleman's gentleman at the other site 21:12, 21 November 2012 (UTC) - - I like the idea, but there are other websites which already do this. The only advantage that we could have is to do the same thing with non-US elections. --Logic and Empricism (talk) 21:07, 22 November 2012 (UTC) - Sounds interesting to me. But what other websites already do this? Links?--Bob"I thought this was supposed to be "Rational" Wiki?." 07:49, 22 November 2012 (UTC) - The thing about non-US elections is that I know of no other country where actual important people, people who can get a seat on TV news panels and be taken seriously, say that electing the wrong candidate will literally destroy the nation, cause blood to rain from the sky, cats to lie with dogs, etc. I think this is a fairly uniquely US phenomenon. --JeevesMkII The gentleman's gentleman at the other site 15:46, 22 November 2012 (UTC) - Bob, there are literally dozens of sites that do something like this. Really, just google "who is running in -whatever district you're in-". Ballotpedia is a common one, and so are uselections and elections.mytimetovote.--Logic and Empricism (talk) 16:32, 22 November 2012 (UTC) - I somehow doubt that "Who is running in Laukiz" or even "Who is running in Vizcaya" would get me very many on-target hits. :-) --Bob"I thought this was supposed to be "Rational" Wiki?." 19:19, 22 November 2012 (UTC) - I don't think you read the proposal very carefully. --JeevesMkII The gentleman's gentleman at the other site 17:31, 22 November 2012 (UTC) - The OPs suggestion seems like little more then an extension of the mission of those sites.--Logic and Empricism (talk) 21:06, 22 November 2012 (UTC) - What my butler is suggesting is that we basically aggregate all the doomsday predictions that were made leading up to and following the elections, and offset them against reality. Maybe even go as far as setting up a seperate site, or even going into print, before 2016, so the next time some yahoo stands up and says "A vote for the Dems will cause boils on your bum!" we can say "Oh, but you said that about Obama, and nothing happened. In fact, they found a cure for boils on the bum." --PsyGremlinPrata! 19:32, 22 November 2012 (UTC) - Well, I've been to ballotpedia and I can't seem to find it already covering the suggestion made by Jeeves - though I didn't spend a long time looking as I suspect that the poster who mentioned the site hadn't checked it either. 
It's not up front and central anyway. So perhaps we could have a few links to the "dozens" of sites who are doing it?--Bob"I thought this was supposed to be "Rational" Wiki?." 20:39, 22 November 2012 (UTC) - I think I said that no one does that specifically, but what is being suggested is largely an extension of the work done by those websites. If nothing else its more of an extension of their work then RWs.--Logic and Empricism (talk) 21:06, 22 November 2012 (UTC) - What you initially said was: "but there are other websites which already do this". It's at the top of the thread. If you now want to shift your ground a bit then that's fine though. - Anyway if, in fact, nobody is actually doing this then it sounds like a very good idea.--Bob"I thought this was supposed to be "Rational" Wiki?." 07:34, 23 November 2012 (UTC) The So-Called "War on Christmas"[edit] I think the only people actually waging war on Christmas (in the US, anyway) are the retailers. They've shredded the significance of the Holiday better than any Atheist could. I'd recommend to any Catholics they should celebrate the Epiphany, and get a few extra days to shop, with good discounts on the gifts they buy. --TheLateGatsby (talk) 17:31, 22 November 2012 (UTC) - I dunno, when you read into what they actually did in Santa Monica's Christmas displays, it's clear that there are some atheists who really do want to impose a ban on it. Granted, it was the City's fuck up that they fell for some FSM-style trolling by giving out those scene spaces out randomly, and their fault for scrapping it rather than dealing with the mistake, but the jubilation you're seeing from the likes of Vix and organisations like the FFRF is just mean-spirited. This isn't what it should be about. It must just be something about America, as their religious evangelists have long since been disowned as embarrassments to the religious elsewhere on the planet, atheist groups might follow soon. narchist 10:29, 23 November 2012 (UTC) - There's a huge difference between "I want to ban Christmas" and "I don't want my tax money going to the purchase, upkeep, and presentation of an overtly religious display on public land." Santa Monica decided it didn't want to deal with the hassle and they stopped. Nobody is suing to remove Manger scenes from private property. As a matter of fact, this lawsuit was instigated by Christians trying to force the city into keeping the display. SirChuckBCall the FBI 21:57, 23 November 2012 (UTC) This seems counter-productive[edit] Declaring that you in effect rule by decree seems like a step backwards when the people just forced out the last guy to do that. --Mikal Harass Follow 02:26, 23 November 2012 (UTC) - Yeah, I'm thinking he's pretty much done unless he makes some serious concessions now. Protests already going on, and lots more planned. Q0 (talk) 03:06, 23 November 2012 (UTC) - So, he figured "Ya, I'm taking the exact same powers Mubarak but its totally cool because... I got a different haircut!"? --Revolverman (talk) 05:29, 23 November 2012 (UTC) - Seems legit. The guy comes in democratically elected, claiming to respect the democratic process, then proceeds to completely trash the separation of powers, making himself supreme legislator answering to no-one. Looks hopeful, doesn't it?Elpresidentetel (talk) 10:47, 23 November 2012 (UTC) - Meesa proposes, that the senate grant immediately, emergency powers to tha supreem chanshla. postate 11:19, 23 November 2012 (UTC) Jets[edit] I see the Jets got whupped again last night. 
Maybe it is time to play Tebow. Clearly they need divine inspiration. --PsyGremlinSermā! 07:23, 23 November 2012 (UTC) - No. Osaka Sun (talk) 07:34, 23 November 2012 (UTC) - He broke his ribs playing against Seattle so he only 'suited up' to inspire his team mates (well, that worked well). Innocent Bystander (talk) 11:39, 23 November 2012 (UTC) - Dear Diary, Today I once again got in my uniform and pads yet Rex again refused to put me in. Despite Mark's inability to lead the team to anything but embarrassment against those <s>damned</s>mean Patriots, I remain seated. Why in the Lord's great plan could involve the Jets embarrassing themselves so? Sam Tally-ho! 02:08, 24 November 2012 (UTC) XD[edit] Look out everyone! It's Obama's child FEMA brownshirt army! The comments might even be funnier, especially the ones about how they don't look like Americans. --P3A58NT86 19:16, 23 November 2012 (UTC) - So. Much. GODWIN. Ochotonaprincepsnot a pokémon 20:05, 23 November 2012 (UTC) Travel rant[edit] I'm flying to Chile from the UK and just transiting through JFK. To check in online I need approval from the US Customs which means I need to fill out an ESTA electronic approval that now demands a fee of $14. Last time I applied it was free. ГенгисIs the Pope a Catholic? 11:25, 22 November 2012 (UTC) - Sadly, travel guides now have articles like this - I recently flew Shanghai-Chicago-Ottawa with no fees, but I did not check in online. There were no real problems except long lines in Chicago and Canadian Customs deciding to have a thorough rummage through my bags and my portable hard drives. They said they were looking for porn or hate literature; they did not mind 100-odd gigs of music, mostly bought in China. - I wonder what they would have done if I'd encrypted the drives. I'd have refused on principle to decrypt (see and then what? The UK has a law that lets them jail you for not supplying keys in some circumstances, but I do not know of such a law in Canada. Pashley (talk) 11:46, 22 November 2012 (UTC) - Whatever the rights and wrongs of the case, if a customs officer asks for the keys, then refusing to hand them over is asking for a whole world of pain. Those guys have you by the short and curlies and it really isn't worth messing with them. Innocent? Illegal search? Who cares, you're on a no-fly list for life. Innocent Bystander (talk) 12:03, 22 November 2012 (UTC) - They seem to chop and change the US entry requirements too often, I doubt they provide any actual security. Remember when they briefly had those silly fingerprint machines on exit? I'm pretty sure they've scrapped those now, at least I don't remember having to find one. My guess is DHS has far too much money allocated to them and has to find some way to spend it every year on silly procedural changes so they don't get their budget slashed. --JeevesMkII The gentleman's gentleman at the other site 15:59, 22 November 2012 (UTC) - To expand on Innocent Bystander's point - entry is entirely at the discretion of the host entity. They can refuse you entry for any reason, including your declining to enter a key for your disk encryption, or your not laughing at their terrible puns, or even for no reason at all. If you're a citizen you might get some traction from the courts, particularly if you're a resident citizen (and thus in some legal sense "entitled" to enter) or if you're an EU citizen and you were trying to enter an EU country (because the principle of free movement of goods and people is a key element of the union).
82.69.171.94 (talk) 23:20, 22 November 2012 (UTC) - I think that if you are a citizen of the country they cannot deny you entry. Before the existence of the EU I once entered the UK even though I had lost my passport. As the nice immigration chappies explained it they couldn't deny entry to somebody who could prove they were a British citizen. To be fair, this took a bit of doing in the absence of a passport, but they eventually let me in.--Bob"I thought this was supposed to be "Rational" Wiki?." 18:59, 23 November 2012 (UTC) - So how did you do this? Do please tell the tale! - David Gerard (talk) 23:48, 23 November 2012 (UTC) - I started writing an answer to this but I've realised that it's going to be quite a few paragraphs. Do you really want the full story? (It's better when told over a few beers by the way.)--Bob"I thought this was supposed to be "Rational" Wiki?." 16:24, 24 November 2012 (UTC) - For ex-colonial powers like Britain there are a whole shitload of people who are citizens (because they were born inside an empire that no longer exists) but are not legal residents. Those people do not have right of entry to the UK, though they might be able to apply for a visa. Their passports (and thus, the records supporting those passports) will show things like British Overseas Citizen or British Protected Person as nationality rather than British Citizen like Bob's. They would be entitled to assistance from a British consulate representative if they're injured or arrested in a country where they are not a citizen (so typically anywhere other than the ex-colony where they were born) but if they arrive at Heathrow they're just like any other visitor and can be refused entry on a whim. 82.69.171.94 (talk) 14:17, 24 November 2012 (UTC) Affirmative Action debate[edit] Just wanted to see what your opinions are on this, and why. Humour me please.Percival (talk) 22:16, 22 November 2012 (UTC) - Humour constitutes POLITICAL CORRECTNESS GONE MAD - David Gerard (talk) 22:25, 22 November 2012 (UTC) - I'll bite. - It's an unfortunate and mostly ineffective solution to a very real problem. Leaders of the Civil Rights movement knew that some form of 'compensation' (always noted as the wrong word, because you could never actually make up for slavery) was necessary in order to successfully reintegrate blacks into American society. Desegregation was not enough. In fact, even an end to all racism would not be enough. The problem was deeply embedded in the very nature of capitalism, widely known but rarely spoken. Those who start off with little to no wealth face an exponentially more difficult climb to the top, no matter their individual aptitude. Without some sort of significant public welfare, including and especially free quality education, the system is simply unjust, with everyone either advantaged or disadvantaged from birth. But this is an uncomfortable fact, which places the entire notion of success in our society into doubt. - And indeed, affirmative action co-opted the energy behind the Civil Rights movement in order to - among other things - close the debate off to critiques of capitalism. Anything that resembled 'socialism' was off the table from the very start. Sure, part of that was to avoid the mess of deciding how you would distribute wealth to black Americans, but a much larger motivation was to preserve the capitalist system that nearly every power interest in America was prone to want.
Opening up a national debate about capitalism at the same time the Soviet Union was our enemy of choice (and actively attempting to spread their form of communism, mind you) was an unthinkable proposition. - So, what would solve the problem? A clear redistribution of wealth, with at least a baseline living wage and real education. Unfortunately, we can't even have these debates on the national scale, because even something so benign as raising taxes on the top 1% is branded socialism. It's also nearly untenable to try and solve anything as an issue of race now. It's harder every day to make light of the fact that any progress which has been made, as Wyatt Tee Walker said, "is more cosmetic than consequential." Slavery turned to segregation turned to de-facto segregation. Racial slurs turned into Lee Atwater style code words in the public sphere... and, of course, the same racial slurs in private. We also now have a black president whose inaction on racial issues speaks volumes, cementing for millions of white Americans the idea that 'if a black man can become president, there's no excuse for anyone in the ghetto', and sowing seeds of doubt in every minority community at the same time. - The reality is that this country has still yet to face its race problem head on. It failed post-slavery, and it failed post-segregation. For as much good as comes out of tidy little concessions like affirmative action, there is at least an equivalent amount of bad. The 'reverse racism' backlash allows actual racists to feel justified in their beliefs, claiming that minorities fail even with benefits not available to others. How do you fight that sentiment without going through the arduous task of repeating this long diatribe I've gotten into here? How do you fight it without questioning the very nature of America: how free and democratic are we?; how fair is our economic system?; how does discrimination - not just based on skin color, but gender, sexuality, and other aspects as well - affect these issues? It's much easier just to appeal to base instincts than even consider these arguments, and so that's what happens. To make it worse, affirmative action is openly used as a political weapon, aimed at anyone who clamors for real reform. You know that tired old line: "We already have affirmative action, what more do we need to do?" - But, until we're able to have a real debate on economic inequality and social justice, affirmative action will probably be necessary. Sorry for the wall of text, but I felt like collecting some thoughts on the issue anyway. Q0 (talk) 02:57, 23 November 2012 (UTC) - You should read Michelle Alexander's The New Jim Crow. I think you'd like it.--talk 04:11, 23 November 2012 (UTC) - The "de-facto segregation" is human normal. Humans form little tribes and partly form their identity out of differences between their tribe and other tribes, whether the tribe is called "Klan members" or "Wikipedia editors" or "Apple employees" is at the highest level just a label. Tribe members choose to live close together, work together, etc. without really any conscious consideration. 
That doesn't make it either universal or a good thing of course, any more than superstition (also human normal) or momentary self-destructive urges (ditto) are universal or a good thing. There are plenty of people who have never had a suicidal urge, who are disinclined to blame an unusual occurrence on supernatural phenomena and who had never thought to see whether the other people who live on their street also like Jazz and wear a lot of turtle-neck sweaters. But you should probably set public policy on the assumption that left to their own devices many of your citizens will believe in ghosts, will actively choose not to hire someone who seems very different from them and their existing colleagues, and have had the idea of jumping off the top floor of the car park. People who set public policy often talk about providing "nudges", ways you can encourage people to do the Right Thing™ without them feeling like they're having everything dictated to them. Fresh fruit nearest the checkout instead of confectionery is an example of a nudge. You can still buy a chocolate bar, but the temptation is to buy the apple instead because it's right there. 82.69.171.94 (talk) 11:59, 23 November 2012 (UTC) - Discrimination breeds discrimination. - However well intentioned, affirmative action may provide some short term assistance to a previously oppressed minority in order to raise their standard of living, but it will also create reactionary racists/sexists in the majority. - American History X portrays it nicely. The entire movie revolved around a youth's descent into neonazism, which is shown to be a reflection of his father's views. However his father was not racist because he viewed another race as inferior, he was so because of affirmative action policies which had been implemented at his workplace. - If you work hard in your job, and another gets a similar job doing less work because of their race, isn't that going to make you angry? - If you apply for a job, but a less skilled applicant gets it because of their race, isn't that going to make you angry? - And if you are angered by affirmative action policies, would you not begin assuming that anyone of that race only achieved what they did because of affirmative action, and not off their own merits? - There are a million and one ways to address the cycle of disadvantage. The best starting point is education, not just throwing money at schools, but properly implemented programs that motivate kids to study at school and chase their dreams. Second step is in the justice system, trying to get people into work programs etc rather than just slapping them with warnings or throwing them in solitary confinement. Etc etc etc. - Affirmative action only serves to assist people to get into positions for which they are not best suited. It negatively affects other members of their race. It breeds resentment in the rest of the population. If you want to divide a society on racial grounds, then affirmative action is a great place to start. But if you want to truly make a difference, there are much more appropriate ways out there. 58.165.164.86 (talk) 14:17, 23 November 2012 (UTC) - Simplistic bollocks. There is a natural tendency that those who choose entrants, be it students or employers, tend to discriminate towards "people like them". If the applicant is a black woman and the interviewer is a white male then she is, de-facto, discriminated against. You have to break that vicious circle.
Without affirmative action the best motivated and best educated kids will come up against all sorts of glass ceilings. - And, to quote "If you apply for a job, but a less skilled applicant gets it because of their race, isn't that going to make you angry?" Is that not just as you say when the black applicant sees the less skilled white applicant get the job because the white interviewer favours "people like me"? Innocent Bystander (talk) 14:29, 23 November 2012 (UTC) - AD, The New Jim Crow is a great resource, as well as an important and detailed explanation of how race is a significant aspect of both the prison system and the war on drugs in America. Having said that, it disappointed me in its lack of scope (though I do realize it was written for a specific audience). I just don't understand how it's possible to ignore the gigantic holes your very own arguments leave in the fabric of prevailing economic and legal theory. Indeed, at times she is a couple sentences away from making the broader point yet incredulously stops short, presumably for fear of sounding too radical. Q0 (talk) 15:57, 23 November 2012 (UTC) - My sense was that she made a deliberate effort to stay focused, because otherwise it would have spun out into yet another of the many wide-ranging polemics already on the market. One of the main strengths of the book is its limited scope and reasonable length - after all, I think it's easier to conceptualize and act on problems that seem capable of address. An indictment of the whole economic and legal system would have been less useful in catalyzing change than Michelle Alexander's focused attack on one aspect. It takes less force to move a needle's point than a railroad spike.--talk 23:26, 23 November 2012 (UTC) - Speaking as an authentic Black Man, I think AA, while inherently flawed, is a great program and one that is vastly needed. I could write a huge paper on this (in fact, I have) but to keep it short: Black people are still discriminated against in our country in both overt and subtle ways. See this study that found people with black names have a smaller chance of even getting an initial callback for a job interview. Even with AA laws, there is still a huge discrepancy in economic and educational opportunity for all minority races and the last thing needed is to repeal these laws. SirChuckBFurther bulletins as events warrant 22:06, 23 November 2012 (UTC) - I've always found this a difficult matter to address, because there's a clear need and yet a clear moral shortcoming in this solution. There's obviously a big gap in opportunity, I agree. My view is just that it would be better to fund things like the United Negro College Fund, instead. This is also an affirmative action, just without the nasty taint of inherent discrimination. - For example, I think a good policy to implement would be to federally fund the UNCF and equivalents to the tune of $100 million or so, with the provision that this funding will decrease by $2 million every year. This gives the discriminated-against minority an assured leg up for a discrete amount of time, without damaging the meritorial basis of hiring and admissions. Places and organizations that are all-white (or all-male, for that matter) should continue to suffer social consequences, of course.--talk 23:26, 23 November 2012 (UTC) - A significant problem is that society has generally failed most black children long before the UNCF can even come into play.
Don't get me wrong, it's a great program; it just can't account for the fact that public K-12 education in most cases is bad and getting worse. Q0 (talk) 02:08, 24 November 2012 (UTC) Haha I throw my hands up - I know nothing about racism in the US. I guess the public service/corporate world there isn't as progressive as over here. All I know is that if I applied for a job, and a less qualified/experienced applicant got it because of their race/gender, I'd be very pissed. And if I had substandard workers who got their jobs because of their race/gender, I'd be very pissed. And yes, I would start questioning whether any member of that gender/race got their achievements through their own work, or through public policy. And if this practice was widespread, I have no doubt it would breed resentment and discrimination from the majority. And no, I'm not white nor conservative. I strongly believe that we need to work to break the cycle of disadvantage across all groups, but I don't believe that AA has any valid role in doing so. But then again, I can't speak for what's happening in the US. 58.165.164.86 (talk) 00:27, 24 November 2012 (UTC) 11 year old vs Eric Hovind[edit] 11-year-old vs creationist here Good debate!--Bob"I thought this was supposed to be "Rational" Wiki?." 19:20, 23 November 2012 (UTC) - Adjusting for the fact that the kid is 11 and you'd expect Eric Hovind to know what he's talking about, the kid won hands down. theist 20:33, 23 November 2012 (UTC) - If anyone would like to know how we know that 2+2=4 narchist 20:39, 23 November 2012 (UTC) - Not just because Hovind says so then?--Bob"I thought this was supposed to be "Rational" Wiki?." 21:28, 23 November 2012 (UTC) - Excellent. Thank you for sharing this Bob. Reckless Noise Symphony (talk) 05:39, 25 November 2012 (UTC) Winner of the 2012 Cringeworthy Awards: North category[edit] The two most hated Canadians on the planet meet at last. Osaka Sun (talk) 00:09, 24 November 2012 (UTC) - I'm not much of a fashionista, but what the hell is going on there? Sam Tally-ho! 02:09, 24 November 2012 (UTC) - Bieber got awarded a Diamond Jubilee Medal by Prime Minister Assface. Wikipedia says that it was "awarded to 60,000 citizens and permanent residents of Canada who made a significant contribution to their fellow countrymen, their community, or to Canada over the previous sixty years." The PM apparently was allocated 200 of the 60,000 to hand out. I'm not actually a JB-hater, but given his choice of clothing for such a high honour, I'm not sure which of the two I'd like to smack more in that picture. Ochotonaprincepsnot a pokémon 19:14, 24 November 2012 (UTC) - That precise outfit (one-strap denim overalls, white shirt, backwards cap) was the height of gay twink fashion for six months around 1993 - David Gerard (talk) 19:39, 24 November 2012 (UTC) - If we were actually having this conversation in 1993, I could legitimately say "I'm 12 years old and what is this?" Ochotonaprincepsnot a pokémon 20:06, 24 November 2012 (UTC) My most sincere apologies[edit] Hi there. I'm sorry for everything I've done. I plead for your forgiveness. Please forgive me. Or there will be consequences. (A new revolutionary struggle, a new RWW/MCWiki etc.) You have been warned. A new intifada is around the corner. MARCUSCICERO - Ah, I see your local has wifi. How's the novel coming along? SophieWilder 20:19, 24 November 2012 (UTC) - you need new material, apologies and threats of revolt are so last season. 
--Mikal Harass Follow 20:27, 24 November 2012 (UTC) - "Intifada" is in especially bad taste given what's going on in Palestine and Israel. Radioactive afikomen Please ignore all my awful pre-2014 comments. 00:07, 25 November 2012 (UTC) - apologies and threats of revolt are so last season Dude has been doing this for 4 years now. Acei9 00:29, 25 November 2012 (UTC) - Now, Sir, I don't know you, and have never had any dealings with you previously, but don't you feel that an apology, followed immediately by a threat, is rather poor negotiation, don't you think? Percival Aldrige Grainger 01:59, 25 November 2012 (UTC) - *whistles in, totally naive* Oh, hey, Marcus. I don't know who you are, but I do know that going to the saloon bar to apologise for your mistakes is not met kindly. As anyone on here will testify, I have done so repeatedly with ill effect. The Heidelberg Kid (talk) 02:14, 25 November 2012 (UTC) Police Brutality and Murder in the USA[edit] Police Brutality in America How reliable is the Baltimore Chronicle? Proxima Centauri (talk) 08:20, 24 November 2012 (UTC) - It's ok. Stephen Lendman is not. Here he is on Russia Today talking about how CIA asset Osama Bin Laden died in Dec 2001 and held on ice for years.--talk 13:01, 24 November 2012 (UTC) - Oh, but police brutality is indeed rampant.--talk 13:02, 24 November 2012 (UTC) - You do know that article's from 2010, right? SophieWilder 13:33, 24 November 2012 (UTC) - List of killings by law enforcement officers in the United States, November 2012 SophieWilder 13:37, 24 November 2012 (UTC) - Four of the first six of those involved the killed person physically threatening the police officers. Just because someone was killed by the police doesn't mean it's abuse--Logic and Empricism (talk) 17:12, 24 November 2012 (UTC) - The guy who punched me in the face represented a physical threat. If I had responded by pulling out a gun and shooting him dead I'd be in jail right now. It's appropriate to hold the police to a different standard in light of what we ask them to do, for example most jurisdictions give the police a lot more leeway to arrest, hold and question people than an ordinary citizen has - but that doesn't give them a license to kill. 82.69.171.94 (talk) 11:42, 25 November 2012 (UTC) - Of the six I read, two were pointing a gun at a cop, one was trying to take a cop's gun, and the fourth was threatening them with a hammer. --Logic and Empricism (talk) 20:57, 25 November 2012 (UTC) - And the police report said "I had to shoot him, he was... he was... he was pointing a gun at me." Innocent Bystander (talk) 21:02, 25 November 2012 (UTC) - So, we're now just assuming that whenever someone is killed by the police it was, what, because the cop was a violent sociopathic racist? Hey, why not just make wild and unfounded assumptions!--Logic and Empricism (talk) 05:27, 27 November 2012 (UTC) - No, we know that sometimes when someone is killed by the police it's because they were dangerous and couldn't safely be subdued or arrested. Just like we know that some of the people who were locked up in Guantanamo Bay really were terrorists. But just because some of these killings are justified doesn't stop us from being concerned by the rest of them, particularly because instead of being held to a very high standard (which we should desire) police are instead being held to a very low standard.
In particular statements by police officers should be treated with the same scepticism that we'd apply to other witness testimony given by someone with a powerful interest in the story being understood their way rather than providing us with impartial evidence of what happened. Right now very often officer statements are taken as overriding cold hard facts rather than being at best supplementary to them. - A train guard who failed to follow procedure and thus witnessed a horrible accident got jail time in the UK this month (from the coverage you might think the proper procedure would have saved the girl's life, actually she'd still be dead but he wouldn't have seen it happen). But the police officer who shot a man and thereby sparked a series of riots walked away without even a reprimand. Why? Because they recited over and over again that they perceived an immediate threat, that they had no choice but to shoot. All the evidence that he wasn't actually armed at the time, that he was running away from police, and that police had tampered with the scene before investigators arrived, everything was disregarded so long as they kept repeating that line. To retain confidence in armed police we need better quality investigations, and that includes being more sceptical of the human element. 82.69.171.94 (talk) 13:44, 27 November 2012 (UTC) Monsanto[edit] In our Monsanto article, we mention that it has 90% of the GM seed market. However I can't find any source for this. This chart suggests that its market share is more like 40%, but it also includes non-GM seed. Does anyone have any information on this subject? --Tweenk (talk) 20:52, 24 November 2012 (UTC) - This source says "Based on industry statistics, ETC Group estimates that Monsanto's biotech seeds and traits (including those licensed to other companies) accounted for 87% of the total world area devoted to genetically engineered seeds in 2007." That seems high, given there are several other major players in the market such as DuPont/Pioneer and Syngenta. Doctor Dark (talk) 23:33, 24 November 2012 (UTC) - The key part being "(including those licensed to other companies)". So, according to that source they don't have "90% of the GM seed market", they're instead directly or indirectly involved in ~90% of the GM seeds which get planted. They've apparently developed most of the better seed traits, and then licensed that technology to other companies. If that's true, it may be misleading in a strict sense (which is to be expected), but as far as overall control of the market they'd still have a commanding position. Q0 (talk) 00:36, 25 November 2012 (UTC) - 87% of "involvement" sounds about right, because all the major seed companies extensively cross-license - see third chart here. But that doesn't say a lot about actual market share. --Tweenk (talk) 03:07, 25 November 2012 (UTC) - Would comparing GMO's total revenue across the industry with Monsanto's GMO revenue be meaningful? Assuming this data is available, of course. Radioactive afikomen Please ignore all my awful pre-2014 comments. 06:13, 25 November 2012 (UTC) - That would be, if there was a reliable way of finding the revenue attached to GMO. That strikes me as something that would take months of hard accountant work.--Revolverman (talk) 06:16, 25 November 2012 (UTC) - Not necessarily. I'm skimming Monsanto's 10-K report right now to see if they break down their revenue streams. Radioactive afikomen Please ignore all my awful pre-2014 comments.
06:24, 25 November 2012 (UTC) - I meant finding out the global revenue of GMO crops in general. --Revolverman (talk) 06:26, 25 November 2012 (UTC) - I figured there'd be a trade organization more than happy to boast about its industry size. Radioactive afikomen Please ignore all my awful pre-2014 comments. 06:33, 25 November 2012 (UTC) - Damn, Monsanto only describes its GMO operations as a share of gross profit, not revenue. "Gross profit as a percent of net sales for the Seeds and Genomics segment increased two percentage points to 62 percent..." (p23 of Monsanto's 10-K). Net Sales $11.8 billion, gross profit $6.08 billion. Radioactive afikomen Please ignore all my awful pre-2014 comments. 06:39, 25 November 2012 (UTC) - The most logical thing would be to ask the author who put the 90% figure in the article where they got it from. But, unless I'm misreading the history, it looks like it was added by Tweenk in March 2011. As Tweenk is now the one questioning the veracity of the figure perhaps it should be removed?--Bob"I thought this was supposed to be "Rational" Wiki?." 19:49, 25 November 2012 (UTC) - I don't really remember where I saw the 90% number, but after reading the linked 10-K form I have a better guess now. - Here the size of the GM seed market in 2011 is estimated as $13.3 billion. Monsanto's 10-K says that its Seed and Genomics segment sales in 2011 were $8.6 billion, though this also includes income from licensing agreements for GM traits. The upper limit of Monsanto's market share is therefore 65%. --Tweenk (talk) 03:05, 26 November 2012 (UTC) Timing![edit] The Wikipedia fundraiser started today (for the Anglosphere, anyway). Nevertheless, I have taken the time to tweet appropriately, and JzG to RT appropriately. Get out there and WHORE YOUR SOCIAL REPUTATION TO OUR GREATER GLORY - David Gerard (talk) 22:51, 26 November 2012 (UTC) - No Bitcoin option? Paper checks, postal money orders, Western Union? Get with the program! Secret Squirrel (talk) 00:15, 27 November 2012 (UTC) - The lack of payment with slabs of beer is also a bit of a downer. I planned to donate two slabs of XXXX gold and a bunch of bitcoins. I wonder if they will be accepted for the rationalwiki fundraiser. Naca (talk) 04:59, 27 November 2012 (UTC) - Slabs of beer? I know and love stubbies, tallboys, masses, yards, six-packs, cases, kegs... but slabs? Doctor Dark (talk) 05:21, 27 November 2012 (UTC) - I suspect a slab of beer is a box of 12 (or 24), in a shrink-wrapped cardboard half-tray. However, shipping will probably be more expensive than the beer itself. Do you take virtual slabs? CS Miller (talk) 15:23, 27 November 2012 (UTC) - For the record we do accept slabs of beer, it can be directly converted into quality programming. Tmtoulouse (talk) 05:23, 27 November 2012 (UTC) - I've been at the root prompt after a few glasses of ~10% home-brewed mead rather a lot of late - David Gerard (talk) 09:27, 27 November 2012 (UTC) - The important question is, do you know how you got there and what happened during the previous hour? Ochotonaprincepsnot a pokémon 12:07, 27 November 2012 (UTC) - I MADE RATIONALWIKI WORK WITH WONDER AND GRANDEUR DAMMIT. Now, have I drunk enough to do the conversion to fcgid? - David Gerard (talk) 20:02, 27 November 2012 (UTC) Props to Cuban[edit] I never knew or cared much about Mark Cuban, but I think he deserves some RW respect for his stance on NBA endorsed woo. If only everyone else in the NBA put reality before profits.
DickTurpis (talk) 02:55, 27 November 2012 (UTC) - Hang on, doesn't he own an NBA team? Won't he be in massive breach of contract? Surely the NBA has a contract with owners not to bring sponsors' products into disrepute. I must admit I am very conflicted about this one. I love sports and I understand they need commercial sponsorship to survive, but on the other hand I can't stand woo. --DamoHi 06:14, 27 November 2012 (UTC) - He owns the Mavericks, and some television interests (HDNet channel, for example). He's probably one of the more reasonable hugely-rich rich guys, on par with Branson but not quite a Buffett (and certainly not a Sinegal). As far as the conflict of interest/breach of contract, the NBA is a league, an entirely separate organization, from the Dallas Mavericks ball club. Just because one endorses one product doesn't mean the other has to...otherwise, they'd never be allowed to have different sponsors (beer, restaurants, whatever). That one can use the other is an agreement in licensing (logos, etc.), but otherwise they're separate (same with stadium ownership, rebroadcast rights, t-shirts, etc.) -- Seth Peck (talk) 18:05, 27 November 2012 (UTC) - Yeah I understand they don't have to actively support the NBA sponsors, but it would surprise me if there wasn't a "bring the game into disrepute clause". If I was the makers of the powershit bands or whatever they are called I would be pretty pissed off at the NBA if they didn't do anything. DamoHi 18:46, 27 November 2012 (UTC) ASK and ye shall be answered[edit] I just looked at ASK, out of mild curiosity. The main page has been accessed just under 130,000 times. I have a wiki of my own, which I use to talk about such endlessly fascinating topics as my model railway. My front page has been accessed over 600,000 times and my page on building the Parkside diagram 100 hopper kit ( has been accessed more times than any of the 20 random ASK pages I loaded. I am delighted that my site is as successful, by this wholly scientific and objective measure, as the widely respected and well known ASK. JzG (talk) 12:03, 27 November 2012 (UTC) - Yesssss. But ASK started in March 2009, it seems that yours started somewhat earlier. So you are not really comparing like with like. What you'd really need to do is calculate something like "reads per year" or something like that. - Wikiindex tries to use something called WikiFactor but even that doesn't really take time into account.--Bob"I thought this was supposed to be "Rational" Wiki?." 13:41, 27 November 2012 (UTC) - Didn't we have a wikifactor article? I seem to recall LArron writing it. SophieWilder 13:49, 27 November 2012 (UTC) - Ah. SophieWilder 13:51, 27 November 2012 (UTC) - You dare compare yourself to PJR, mortal? Behold! [5] 70k wall of text. You don't even have articles that long. Scurry along now. 14:31, 27 November 2012 (UTC) - I'm sorry but your wiki is not a real wiki as ken has not <s>defaced</s>added his wonderful insights to it. Naca (talk) 15:50, 27 November 2012 (UTC) - Is there any way I can tell how many people access Liberapedia and Atheism Wiki? Proxima Centauri (talk) 16:26, 27 November 2012 (UTC) - You could ask Wikia. SophieWilder 16:58, 27 November 2012 (UTC) - Yeah, but they'd probably only point you to the Liperapedia stats page which isn't very helpful. ГенгисIs the Pope a Catholic? 20:11, 27 November 2012 (UTC) - Liperapedia has 4,592,112 registered users? Seems ever so slightly hard to believe.
Even if you count spammers that's a hell of a number.--Bob"I thought this was supposed to be "Rational" Wiki?." 20:57, 27 November 2012 (UTC) - User accounts on Wikia work across all Wikia wikis, so that figure is counting the total number of users across all its wikis. Radioactive afikomen Please ignore all my awful pre-2014 comments. 21:03, 27 November 2012 (UTC) - Say that ten times fast. Radioactive afikomen Please ignore all my awful pre-2014 comments. 21:04, 27 November 2012 (UTC) - A statistic of quite remarkably wonderful uselessness in that case.--Bob"I thought this was supposed to be "Rational" Wiki?." 23:39, 27 November 2012 (UTC) - Well, yes, that was rather the point behind the comment about the method being scientific and objective. But actually my site was on Lotus Domino for most of its life, it moved to this MediaWiki install, without stats history, in 2010 I think ( so actually the comparison is more or less meaningful. JzG (talk) 00:33, 28 November 2012 (UTC) Ah, but the Recent Changes on aSK amuses me. I'm a bit amazed he leaves the site up, honestly. --Kels (talk) 18:48, 27 November 2012 (UTC) Trying to name a certain economic idea?[edit] My apologies if this is the wrong place for this, but talk pages and forums are pretty dead, so I'll try my luck here. Please correct me if there's a specific place for this. One of my online friends has talked on a few occasions about what he'd ideally want the world to be, and one of the things that's puzzled me is his idea of a world without standardised currencies. This isn't the first time I've heard of such an idea, I've heard of it a few times. However, I can never really wrap my head around how the concept could possibly work. We didn't add each other mainly to talk about barter systems and the likelihood we'll stop killing each other when aliens land (plus, I think he's kind of baked at these times), so we just move on. It sounds, if anything, like an anarchist notion, but when it comes to lawmaking he seems very much in favour of Big Gubbernmint, and sometimes rather authoritarian, with a "Don't be a dick before you sign something", which quite frankly scares me if he considers that an effective safeguard in the long run. In trying to educate myself on it elsewhere, I asked a friend (economics student) if there's a specific name. All he said was that it seems to be a very niche movement and best just to say "proponent of barter". So I have no idea what to really search for here. The closest I can find is the Gift economy article, but his idea has very solid trade - just with no set, standardised currency to rely on. Is there a term for this? Many thanks in advance. Polite Timesplitter talk to me sugar, but best keep it on thedown-low 21:19, 26 November 2012 (UTC) - The impolite answer is that it sounds like a bunch of bullshit. The more polite answer is that it sounds like an extremely utopian economic idea which would probably not function very well for any amount of time, because whether he wants it to or not, eventually market forces will create a single universal currency with a standardized value (for lack of a better way of stating that). Essentially, we'd go back to the Gold standard, for its failings, except with the theoretical ability to change from gold to silver, or whatever.
Granted, by the time the gold standard becomes an issue, changing currency within the market would probably be insanely difficult, if not impossible.--Logic and Empricism (talk) 21:36, 26 November 2012 (UTC) - This one about lots of competing local currencies in simultaneous circulation is not unheard of. It's not clear how well it would work in practice; I suspect one currency would rapidly become the most popular - David Gerard (talk) 22:44, 26 November 2012 (UTC) - When the Republic of Ireland switched to the Euro, for a short while there were three currencies valid in parts of Northern Ireland - Euros, Pounds and Punts. Nothing seemed to break although there was a fair amount of mental arithmetic going on. Perhaps Marcus knows more about this? SophieWilder 23:15, 26 November 2012 (UTC) - This idea sounds like free banking. --Tweenk (talk) 01:03, 27 November 2012 (UTC) - It basically is, and a thing about free banking is that it tends to move towards one or two currencies anyways.--Logic and Empricism (talk) 05:25, 27 November 2012 (UTC) - Yes, even if multiple currencies are allowed, people end up using as few as possible because they tire of constantly calculating exchange rates every time they go shopping. SophieWilder 10:42, 27 November 2012 (UTC) - I'm realising that living in a society stable enough to have a standard currency is actually quite the privileged condition, historically - David Gerard (talk) 20:00, 27 November 2012 (UTC) - My understanding is that Cuba has had two official currencies for some time and also works with multiple foreign ones.--Bob"I thought this was supposed to be "Rational" Wiki?." 23:47, 27 November 2012 (UTC) - You'd end up moving towards a standardised currency anyway (the US dollar is de facto for international use) for much the same way people use English as a standardised language for communication. I'm confused by what it means by "no standardised currencies", because that implies something slightly different to "just lots and lots of currencies" - but I'm curious to exactly how it'd be different. Perhaps it's just a half-baked idea that sounds fun until you actually apply thought to it. I know someone whose anti-capitalist manifesto involved us all becoming more capitalistic to drive the system into the ground. Thankfully, that didn't last too long once he turned up at University and actually looked at how economics works. We all have those crazy ideas at some point. moral 17:22, 28 November 2012 (UTC) Native American Genocide[edit] So, I'm taking an American Diversity class. We are on a Native American unit right now, and we just covered events such as the Wounded Knee and Sand Creek massacres. We are about to engage in a class debate over something that many 'Mericans seem edgy about-the Native American Genocide. We are to debate over whether or not the events as a whole concerning the American confrontations with the Native Americans constitute Genocide. Now, I can see how it could go either way, though my side in this is that it indeed does represent a Genocide. My friend and I seem to agree that it certainly represented ethnic cleansing. That much is clear. What is also clear to us is that the trail of tears can be seen as a Genocidal act, although the question is, did the Army have the foresight to predict the death and suffering? That would constitute the difference between an act of genocide and Genocide, in my opinion. I guess the struggle here is to make a good argument to the rest of the class. 
What do you all think?--P3A58NT86 22:01, 26 November 2012 (UTC) - Start by reading the Genocide Convention. Some will argue that any discussion of "does event X constitute genocide" is only meaningful if framed in terms of the only practical definition of the term, a point-of-view I tend to agree with. Read up on the question of intent, because that's where the rubber meets the road in terms of proving culpability (see the Krstic decision for this). Chalk and Jonassohn have a useful discussion about defining genocide in the intro to their book. Theory of Practice Still tryin' to figure it all out. 23:01, 26 November 2012 (UTC) - Historically speaking, it was known that the Trail of Tears and the Navajo Long Walk would be devastating to community - including political and spiritual (often one and the same) structures within the community. It was also considered quite a dangerous trip, and the soldiers were advised to take extra precautions in terms of supplies for themselves and their troops. So, I'm guessing they figured people would die. Many people would die. I'm skeptical of the claims made by DeLoria and Churchill that soldiers and / or the govt knew the blankets given to tribes were infected with smallpox, but if proven, it would not surprise me. You also have to consider mission schools, "convert or kill" was an underscored theory. But now I'm curious - how is ethnic cleansing not a genocide, and what is an "act" of genocide? I've heard that they are all different, but never understood how or why.--Godot She was a venus demilo in her sister's jeans 23:52, 26 November 2012 (UTC) - It may be hair-splitting but I would say "ethnic cleansing" was the elimination of a particular group from a specific area - ranging from forcible expulsion to mass murder - while genocide would be the attempted total elimination of an ethnic group everywhere. ГенгисOur ignorance is God; what we know is science. 01:45, 27 November 2012 (UTC) - IIRC there were a couple times 'smallpox blankets' were given to Native Americans purposefully, but those were isolated incidents and only affected things on a local scale. Overall, as long as Europeans were intent on staying in North America at the time, the spread of crowd diseases like smallpox to most of the native population was, sadly, inevitable. Guns, Germs, and Steel touches on this quite a bit. - As far as genocide goes, I've always felt that arguing over semantics on these type of issues is harmful rather than helpful. Is it still genocide if they don't actually care if the population dies out or not, they just want the land to themselves? Or is that "just" ethnic cleansing? It's hard to maintain perspective when you're saying "Oh, they just killed them to take their stuff." or "They didn't have any intent to commit genocide... at least until the popular idea of natives as savages appeared and whipped everyone into a frenzy." Q0 (talk) 02:25, 27 November 2012 (UTC) - " I've always felt that arguing over semantics on these type of issues is harmful rather than helpful." I hope you never intend to be a scholar, or practice international criminal law. Theory of Practice Still tryin' to figure it all out. 02:27, 27 November 2012 (UTC) - Can't say I pretend international law actually means anything in today's world. Maybe in the distant future... Q0 (talk) 03:12, 27 November 2012 (UTC) - Are you aware of the numerous verdicts brought down by the ICTR and the ICTY? The Charles Taylor case?
The ongoing work of the ICC (no trials as of yet, but several investigations are underway and indictments have been handed down)? Compare this to the situation 20 or even 10 years ago and realize how sorely mistaken you are. Theory of Practice Still tryin' to figure it all out. 03:25, 27 November 2012 (UTC) Frankly, I will have a hard time taking it seriously until the same standards are applied to world powers. (Or, you know, at the very least Henry fucking Kissinger...) And until then, how can it do anything other than serve those powers' interests? The ICTY is actually a great example of that. Q0 (talk) 04:42, 27 November 2012 (UTC) - So since a complete revolution in world affairs, undoing centuries of great power politics hasn't taken place in a reasonably short amount of time, we should just disregard the real progress we have made. There's nothing like a person with a strong opinion on a topic he is close-minded about. Theory of Practice Still tryin' to figure it all out. 04:46, 27 November 2012 (UTC) - Justice, in the form of punishment, is not an end in and of itself. Do Serbs and Croats hate each other any less today? Are we to believe the upcoming state vs state genocide case will have a positive - rather than negative - effect on their relations? Just look at the strong reactions from both countries every time a verdict has been handed down from the ICTY. Compare these results to those of something like a Truth and Reconciliation Commission, wherein we don't just assume two peoples will keep on hating each other until the end of time. - And for that matter, can we even say that war crimes trials have been preventative? Can we say that with a straight face while we observe the situation with M23 in Congo right now, despite the fact that a Congolese war criminal was just convicted a few months ago? It might help us more to step back and ask ourselves why most of Africa is in the sorry state that it's in today. Maybe we can also consider if colonialism has truly ended or not. - But still, is it really progress to deal out selective justice on an order of this magnitude? The people on trial are the ones who the US approves to be on trial (or actively has a bounty out for, as in the case of Charles Taylor). And generally are the ones who Russia approves, who China approves, who France approves, and, yes, who the UK approves. And even in the odd case where a body of international law has ruled against the powerful, like in Nicaragua v. United States, they can choose to just ignore the verdict. Usually, however, their control over events is more than blatant, as it was during the ICTY. NATO exerted immunity, even though its bombing exacerbated a situation that led to many of the most substantial war crimes being committed, and its targeting practices were grossly negligent at best. They simply stated that they controlled the court - they allowed it to exist - and so could do whatever they wanted. Q0 (talk) 09:28, 27 November 2012 (UTC) Back on topic... What was done to the Native Americans was a crime against humanity, but genocide? Doesn't a genocide require the group committing it be attempting to remove the race from the face of the earth? I believe it was just more forcing them out of the way. If that is still thought to be genocide, then it's genocide. --Revolverman (talk) 05:36, 27 November 2012 (UTC) - Logically as it's a "cide" you would expect it to refer to the complete removal of that gene pool. Perhaps surprisingly it's not though.
You only need to be responsible for the death of an unspecified but large group. See wikipedia on this. --Bob"I thought this was supposed to be "Rational" Wiki?." 13:25, 27 November 2012 (UTC) - The Genocide Convention is clear on this question. Genocide is defined as "...any of the following acts committed with intent to destroy, in whole or in part, a national, ethnical, racial or religious group, as such:" Theory of Practice Still tryin' to figure it all out. 13:52, 27 November 2012 (UTC) - There is no doubt the desire was to destroy in whole or in part, the "savage Indian". Read newspapers, personal diaries, letters of the day, and that sentiment is public and widespread. It is not necessarily a government sentiment - but the sense of the people is that these are animals and deserve death.Godot She was a venus demilo in her sister's jeans 14:11, 27 November 2012 (UTC) Tea Party Nation: "ROMNEY CAN STILL WIN THIS"[edit] [6] -- based on a horribly wrong interpretation of the XIIth Amendment. WND briefly amplified this BS as well before issuing a hastily written "Oops!" --TechCheesegrieve 23:30, 27 November 2012 (UTC) - Amusingly, the comments section of WND's column on the subject continues to wallow in denialism. They somehow think they can still make it work. Apokalyps2547 (talk) 02:45, 28 November 2012 (UTC) - 2700 comments? Good fuck. That's ~250 a day since it was posted. I'm willing to bet that 90% of everything worth reading occurred in the first 10% of the posts, and everything after that is recirculating, along with Poes and mocking. I gave up trying to even subject myself to it after the 2000-comment mark. Ochotonaprincepsnot a pokémon 09:11, 28 November 2012 (UTC) A tad disappointing[edit] Currently I'm sailing out of the Straits of Magellan and was hoping for something a bit more exciting. Генгисevolving 00:55, 28 November 2012 (UTC) - You're browsing your favorite website from a boat off of fucking Tierra del Fucking Fuego, and somehow, it isn't "exciting" enough for you. #firstworldproblems. Theory of Practice Still tryin' to figure it all out. 01:05, 28 November 2012 (UTC) - Yeah, WTF? Acei9 01:08, 28 November 2012 (UTC) - Seriously, you were expecting a fucking dragon or something? Theory of Practice Still tryin' to figure it all out. 01:15, 28 November 2012 (UTC) - No I was expecting more dramatic scenery. I've seen the dragons. ГенгисRationalWiki GOLD member 01:17, 28 November 2012 (UTC) - Head inland to the Argentina/Bolivia border. Beautiful wine country. Acei9 01:23, 28 November 2012 (UTC) - I find it slightly disturbing that one can get internet access from Tierra del Fuego. What does a man have to do to get away from it all? Doctor Dark (talk) 01:28, 28 November 2012 (UTC) - Try Yemen, RW is blocked there. ГенгисIs the Pope a Catholic? 01:32, 28 November 2012 (UTC) - In all the old sea stories I've read this is supposed to be storm central. Whatcha sailing on? Evil fascistoh noez 01:38, 28 November 2012 (UTC) - The Straits of Magellan are far less stormy than going around the Horn which is why it is the preferred route. Acei9 01:47, 28 November 2012 (UTC) - I know that, yet due to the squalls I would prefer not to do it in a sailboat. Evil fascistoh noez 01:52, 28 November 2012 (UTC) - You'll never discover the Japans with that attitude. Acei9 01:53, 28 November 2012 (UTC) - We had 35kt winds yesterday and so didn't sail, but we're expecting some big stuff at times in the south Atlantic.
The vessel is a reasonable size commercial ship - not a yacht - but I'm told it rolls a lot. ГенгисIs the Pope a Catholic? 01:57, 28 November 2012 (UTC) - Did 2 months in a 42-footer, twice. Though that was in the Atlantic/Gulf of Mexico and was mostly calm. Nautical family.Evil fascistoh noez 02:01, 28 November 2012 (UTC) - Watch the Irving Johnson narrated short movie Around Cape Horn, showing the Peking's trip in the 1920s. It will make you glad you're in the calm straits. Or it might make you wish you were part of that voyage, though it is pretty pants-shitting stuff. DickTurpis (talk) 06:08, 28 November 2012 (UTC) - I do hope you're reading the relevant section of The Voyage of the Beagle while you're down there. SophieWilder 11:59, 28 November 2012 (UTC) - I'm watching Master and Commander. ГенгисRationalWiki GOLD member 12:33, 28 November 2012 (UTC) Middle east excitement[edit] The last 48 hours have been just a blast (literally, in one place). Israel-Palestine sign peace treaty. Israel breaks it 48 hours later. Egypt has installed a new Pharaoh, and Saudi Arabia has said that in their quest to protect women, they will begin tracking all women, and notifying husbands or fathers if the women go outside of designated paths, or hit airports and bus stations. "For the women's protection" of course. --Godot She was a venus demilo in her sister's jeans 19:53, 23 November 2012 (UTC) - It's hard to even keep track of everything going on, it's pretty crazy.--talk 21:03, 23 November 2012 (UTC) - From the country which feels that letting women drive would increase prostitution, pornography, homosexuality and divorce.--Bob"I thought this was supposed to be "Rational" Wiki?." 21:27, 23 November 2012 (UTC) - Honestly, I was afraid to really bring up Israel-Palestine, not knowing if it would be violating an unwritten rule around here. The twitter war itself was a sight to behold, and I was glued to it despite the fact I usually can't stand the awful 140 character medium. It's amazing how connected you feel when you have updates by the second: words, images, sounds. The brutal emotion of war in real time. Q0 (talk) 21:59, 23 November 2012 (UTC) - Dunno. I'm with Hitchens on this one. Both sides are beholden to the religious crazies of their side, which are minorities by population, but they control policy somehow. If you want to say which side is worse, whatever, but I think there's plenty of blame to go around. LiberalOfAnUnknownVariant (talk) 23:04, 23 November 2012 (UTC) Of course when they track pupils in the USA with RFID chips that is a different matter. I know it's not just the US, the UK has far too many CCTV cameras for my liking. Standing up for individual liberties, everywhere, is one of the most important things we should be championing because our rights to privacy and anonymity are slipping away. I hope it won't affect me because I'm much closer to the finishing post than most of you, but the younger ones... ramble, waffle, bleah bleah, mutter, lorem ipsum. ГенгисRationalWiki GOLD member 23:20, 23 November 2012 (UTC) - It's gotta suck to try to have an affair these days. "your gps on your phone shows you were at work till 4, then drove to a hotel and stayed there for 2 hours, before heading home."Godot She was a venus demilo in her sister's jeans 06:08, 24 November 2012 (UTC) - I don't have any problem with the end of privacy, so long as it's the end for all of us.
When I think about CCTV watching me board a train, I also think about a government minister photographed sitting in First Class without a First Class ticket and knowing that even if he now pays for an upgrade that's going to be in all the news media. Sunlight is good for all of us, we just need to be sure nobody is privileged to remain in the shadows. - I have been amused by the reaction of various friends and acquaintances to the news that EU car insurers, no longer permitted to discriminate on gender, will be moving towards more use of "black boxes". People feel that it's somehow unjust that whereas previously they were abusing the system (in some cases even committing fraud by lying about who is the "main driver" of a vehicle) they will no longer be able to do so. Our culture will be changed forever by openness, but although the change may be painful it's not necessarily bad. 82.69.171.94 (talk) 12:42, 24 November 2012 (UTC) - The sunlight does not shine on the wealthy. Money buys privacy. Its disappearance for the rest of us is not a good or equal thing.--talk 12:58, 24 November 2012 (UTC) - Suggestion: imagine that you are a child or adolescent who is gay/transgender/atheist/not sharing your parents' political views, and that your parents are not exemplars of tolerance (a situation that is not uncommon in the US)... Does the "end of privacy" still seem compelling? There was at least one RW user who mentioned hiding his copy of The God Delusion from his parents. And this is in the US, the world's paragon of freedom and justice.--ZooGuard (talk) 13:23, 24 November 2012 (UTC) - The God Delusion wasn't around at the time, but I certainly had to hide The Demon-Haunted World from my parents... among just about every other aspect of my being. But that would be the last aspect of privacy to go, anyway. I'm more worried about AD's observation about money, along with the overall philosophy behind destroying privacy. I'm getting tired of the idea that everyone is out to get us, that there are tons of criminals and people who want to abuse the system and enemies of the state and all that. If we as a society feel we have to take extraordinary security measures and give up huge swaths of civil liberties, we've already failed somewhere. Q0 (talk) 16:35, 24 November 2012 (UTC) - There is no "philosophy behind destroying privacy". Modern technology doesn't avail privacy, so if we're to retain privacy that will be by the rule of law. You will see things, but acknowledging that you saw them will be illegal, you will know things, but acting on that knowledge will be illegal. Think DADT or current US copyright law. If you laughed at England's "super-injunctions" which forbade people from repeating true but embarrassing facts about celebrities that had been revealed in court then you're going to love the privacy you're so keen on. Unlike sunlight, which is free, a privacy afforded by the courts will be expensive, it will exist in practice solely for the rich and privileged, but on paper it will be for everyone so it's "fair". - Using money to buy privacy without the laws you crave is an unwinnable race. The Barclay brothers' private island would once have bought them sovereign status, vulnerable only to a concerted effort from another sovereign power, they could have done whatever they liked. Today they're being dragged through a legal dispute for driving a motor car. On an island they entirely own, but which a nearby island claims some residual control over.
Once upon a time the best idea you'd have of what they'd done with the island would be a sketch made from a boat miles at sea. Today we can see detailed satellite photographs of the island, manifests of what was imported, details of who went to and fro and there's not a damn thing they can do to stop us - without a privacy law. - I argue that trying to retain privacy where technology erodes it will push us back to the hypocrisy of the Victorian era, the rich will pretend to be morally upright by violently suppressing all report of their real behaviour, this time in the name of "privacy". The poor will be told they are morally reprehensible, because they cannot afford the deceit of the rich, and in the attempt to hide their ordinary vices they will do a worse violence to society than any amount of mere vice could achieve. - In the US a convenient bit of legal squinting has resulted in privacy being associated with abortion. But a practical end to privacy doesn't have to threaten access to abortion, it just means somebody needs to get a court to do what it should have done in the first place without bringing "privacy" into it. If the US Supreme Court were to decide gay marriage is legal in all 50 states based on some inspired re-interpretation of the Second Amendment I hope that wouldn't put an end to calls for gun control. 82.69.171.94 (talk) 12:14, 25 November 2012 (UTC) - I think you missed my point entirely. The argument isn't that we need more privacy laws, it's that we need to fight the current pattern which is going in the opposite direction. The mentality of the national security / police state, helped along by improvements in technology, is a serious threat to the common good. Surely we can agree that the ability the FBI and CIA have (by law...) to spy on your every conversation - from email to text message to telephone - is a serious problem, no? What about when it extends to search engine searches, direct messages on facebook and twitter, and browsing history? What about when your entire hard drive is searched by customs every time you enter or leave the country? Q0 (talk) 16:21, 25 November 2012 (UTC) - Exactly. Every small encroachment on personal liberty is justified at the time because of some perceived threat, but these are never reversed and powers that were originally enacted in response to external threats to national security become absorbed into the everyday policing of ordinary citizens. We've seen people arrested for joking public tweets how long before the authorities extend that to jokes in private communications? Dan Carlin covered it in some detail in one of his recent Common Sense podcasts. ГенгисOur ignorance is God; what we know is science. 22:20, 25 November 2012 (UTC) - Liberty and privacy are distinct, and are coming increasingly into conflict. We can have all our existing liberties and more without any privacy, it just means our hypocrisies are transparent and we have to come to terms with that. You want the FBI stopped from spying on people's conversations, I want the FBI's conversations recorded and brought into evidence. One of us is hoping people will stop doing evil if they're told not to and then allowed to skulk in the shadows, I propose we instead oblige people to do their business in the sunlight where most will be too ashamed to try evil and the remainder can be easily identified and caught. 
- If you're specifically worried about FBI / CIA spying on you then stop using antiquated unencrypted systems, for example you can get commodity software that uses the Socialist Millionaire's protocol to instantly secure and mutually authenticate a conversation with a party you know over an insecure link using weak shared secrets. That is you can easily obtain a program that lets you and your friend Bob communicate without fear of eavesdropping after verifying each other's identities with real-world questions like "What did we call our Biology teacher back in high school?" and you can use ordinary IM software to do it, the protocol secures an existing system rather like a magical "secure telephone line" in movies except thanks to sophisticated mathematics this actually works. 82.69.171.94 (talk) 23:25, 27 November 2012 (UTC) - You're assuming the lights are still going to be on. December 21, baby. Can't wait can't wait. If the Rapture doesn't wipe out all the surveillance cameras, the sunspot flare-ups will. Secret Squirrel (talk) 01:29, 28 November 2012 (UTC) - BON, your argument isn't coherent. You propose a total end to privacy, then in the next paragraph advocate encryption systems? Whut? ωεαşεζøίɗMethinks it is a Weasel 20:34, 28 November 2012 (UTC) - Actually I'm mostly telling you how the technology shakes out, the proposals are only about what we should do about it now that this particular Pandora's Box is open. This has happened before, and it will happen again. And of course people will fight it, the US is still proposing new laws to try to stop all the world's people from making copies of things using the billions of perfect duplicators that are found in homes and businesses across the globe - hopefully you don't need a crystal ball to guess how that ends. Maybe we're being betrayed by language, if you consider "encryption" a type of privacy then we're definitely experiencing a definitional problem. 82.69.171.94 (talk) 00:30, 29 November 2012 (UTC) - Your proposals make no sense: FBI conversations are freely available to everyone but ordinary citizens use codes and unnecessary verification questions when holding conversations? This is upside-down. ₩€₳$€£ΘĪÐMethinks it is a Weasel 07:46, 29 November 2012 (UTC) - How is it upside down? This isn't a caste system, being an FBI agent doesn't make someone a better or more worthy person, but such agents are entrusted with powers that aren't available to ordinary citizens. As a result agents (and the agency as a whole) must be subject to more public scrutiny not less or you have a recipe for wrong-doing. Does it also strike you as wrong that an aeroplane pilot is expected to be sober while mere passengers are permitted to be drunk or high? As to using "codes and unnecessary verification questions" you don't have to do so but the option is open to you if that's what you want. I'm not interested in forcing you to wear trousers, but if you insist on running around naked and then complain that it's cold I might point out that wearing trousers would help a lot more than yelling at the weather. 82.69.171.94 (talk) 00:27, 30 November 2012 (UTC) - Can somebody link to some info about the SA women-tracking thing? The news I've found says they're sending messages to husbands when their wives leave the country (which is ridiculous enough, of course)...but I can't find mention of any other new uses. 
99.50.98.145 (talk) 03:37, 25 November 2012 (UTC) Has Glenn Beck finally lost his mind?[edit] I know this guy is basically irreverent now, but just sit back and enjoy the lulz. (talk) 15:12, 28 November 2012 (UTC) - What do you mean "finally"? --TheLateGatsby (talk) 15:18, 28 November 2012 (UTC) - The video is new, but the footage has gotta be at least 2 years old. Theory of Practice Still tryin' to figure it all out. 15:39, 28 November 2012 (UTC) - I'm gonna go with Fuck yes SirChuckBWill Sysop for food 17:18, 29 November 2012 (UTC) Santa hat logo (or the war on Christmas part XXVII)[edit] It's that time of year again. Evil fascistoh noez 23:01, 27 November 2012 (UTC) - Toooo sooon. Radioactive afikomen Please ignore all my awful pre-2014 comments. 23:12, 27 November 2012 (UTC) - Not yet it bloody isn't. SophieWilder 23:12, 27 November 2012 (UTC) - wait a week or two. Acei9 23:15, 27 November 2012 (UTC) - While I am a great fan of the hat I do feel that late November is jumping the gun a bit.--Bob"I thought this was supposed to be "Rational" Wiki?." 23:40, 27 November 2012 (UTC) I hate Christmas/the Holidays and everything to do with them. Can someone write a bit of magic code that keeps me from seeing it. Theory of Practice Still tryin' to figure it all out. 00:12, 28 November 2012 (UTC) - I think we could do without the Santa hat. Maybe just some snowflakes? Blue Talk 01:13, 28 November 2012 (UTC) - That's blasphemy. That's MADNESS. I'd have the santa hat all year round if I had my druthers. I'm definitely holding out for until 12th night again this year. --JeevesMkII The gentleman's gentleman at the other site 23:06, 28 November 2012 (UTC) - I'm all for the Santa hat, to coincide with the 12 days of Xmas. Just because we're a bunch of baby-eating rational atheists doesn't mean we should all yell "Bah humbug!" at the top of our lungs. --PsyGremlinSnakk! 13:24, 29 November 2012 (UTC) - Indeed. And just because these johnny-come-lately Christians have tried to pervert the true meaning of Saturnalia it doesn't mean that we have to let them get away with it. The War on Saturnalia has gone too far!--Bob"I thought this was supposed to be "Rational" Wiki?." 13:44, 29 November 2012 (UTC) - I'm with ToP on this one. To be Rational[TM] about it, we should either have it year-round, or not at all. Year-round would be too much like the snapping pennants over a used car sales lot. Why not have gifs of wacky inflatable arm waving guys instead of wiki brackets flanking the brain? That leaves not at all. QED. (Don't make me get out the slowly grinding gears, now...) Sprocket J Cogswell (talk) 16:57, 29 November 2012 (UTC) - I think that, for the first time ever, I'm opposed to the Santa hat logo. I'm not going to cry if we do have it, but I agree with an above suggestion at snowflakes or something. Reckless Noise Symphony (talk) 12:37, 30 November 2012 (UTC) More about Free Speech in the UK[edit] There has been a worldwide scandal involving illegal activities by parts of the press that Rupert Murdock owns. You in the USA probably know this as you had a few problems too. Free speech and freedom of the press in no way includes freedom to hack into people’s phones and the like as Murdoch employees did with murder victims and dead soldiers. There’s a widespread feeling in the UK that we need tighter controls against this type of thing. Today in the UK a report will be published about what might be done and MP's warn against press regulation law. 
Will new legislation be limited to controlling what many feel should be illegal? Alternatively will irresponsible people in the government use this as an excuse to clamp down on freedom that the press should have in a democracy? I hope I won’t end up having to look at American websites to find out what’s happening in my own country. Anyway we’ll find out more later today. Proxima Centauri (talk) 12:11, 28 November 2012 (UTC) - Seeing as our glorious free press has used its system of "self regulation" to besmirch individuals and scapegoat minorities, they'll have had regulation coming. Also, if you're not already reading news sites from a variety of countries, whyever not? SophieWilder 12:18, 28 November 2012 (UTC) - I read news from many countries and frequently use sites outside the UK as sources for what I write in RationalWiki. Still I find UK news websites give better coverage for UK news. Of course if there is less press freedom here the government may be shooting themselves in the foot. USA sources will expand their UK coverage to tap the market of UK people wanting to know what's really going on here and the UK government has no control over what's written in the USA. Proxima Centauri (talk) 12:34, 28 November 2012 (UTC) - Nice scaremongering Prox. Yes, USA sources will tap the market of UK people wanting to know what's really going on here. Also, the income tax rate will go up to 98%, the last English tree will be felled to make housing space for the Muslim majority with their huge families, and NHS waiting times will rise to one million years. WëäŝëïöïďMethinks it is a Weasel 13:30, 28 November 2012 (UTC) - The phone hacking thing, and all the other dodgy shit they've been up to, pisses me off because of who they were targeting, not what they did. If they used dodgy and/or outright illegal methods to get a recording of, for example, Tony Blair leaving a message on George Bush's phone saying "yo we made up that shit about iraq just like you said, what's next in Operation: Sandy Fun Times?", no one would've given a flying fuck about how they got the information, they'd just be thankful the information had come out. But no, journalists are apparently immense cunts and use these potentially useful methods against meaningless targets just so they can publish stories like "parents of murdered child are sad their child was murdered". But now the backlash against that is going to have a chilling effect on anyone who was willing to cross the line for a good cause (exposing actual corruption, for example) and golly gee whizz does that make me angry. The shit idiots brought it on themselves and I give zero fucks about them, but they'll bring the same to everyone else too. X Stickman (talk) 18:42, 28 November 2012 (UTC) - Frightening sci fi Proxima Centauri (talk) 18:45, 28 November 2012 (UTC) - - "I hope I won’t end up having to look at American websites to find out what’s happening in my own country." - to be honest, i've always found that going to international sources for news about a country is one of the best things you can do. I read BBC, Agence France for news about the US, but then read US, and Agence F for news on BBC. something about being on the outside, and not having a stake in the game seems to add to the analysis. Godot She was a venus demilo in her sister's jeans 18:49, 28 November 2012 (UTC) It's now due today. 
Proxima Centauri (talk) 08:54, 29 November 2012 (UTC) - Well, Leveson's report did what it needed to do, including asking that the independence and free speech of the press be recognised in law (First Amendment anyone?). Such a shame that the PM has shown that he'd rather back the press than the victims by not implementing the report. If anybody's interested, there's a petition here. Hell, feel free to sign it if you don't like the crimson one in genereal. (Note to non-politics followers, Cameron goes crimson everytime he loses his temper, every Wednesday, at some point during PMQs, the Crimson Tide that washes up Cameron's face gets pointed out).-- Jabba de Chops 11:31, 30 November 2012 (UTC) - The BBC has an article about that petition, Leveson report: Victims urge full implementation This looks set to grow. Proxima Centauri (talk) 20:28, 30 November 2012 (UTC) Fundraiser[edit] Holy shit that's going well. Do we have any idea what's up with this? Is this a few big donations, a lot of small donations, what? - David Gerard (talk) 22:25, 28 November 2012 (UTC) - Well, either a whole bunch of small donations came through in a six-hour period, or someone made a $500 donation. That's a phenomenal 1/5th of our goal right there Radioactive afikomen Please ignore all my awful pre-2014 comments. 22:43, 28 November 2012 (UTC) - It's probably several smaller donations as TMT isn't continuously updating. Although I don't want names it would be interesting to see a list of amounts. ГенгисIs the Pope a Catholic? 00:08, 29 November 2012 (UTC) - We did have a $500 donation, a regular donor actually that has been considerable help. The rest of the donations have ranged from a $1-$100. Its really fun to see a bunch of $5 donations come in though, it all adds up and demonstrates there are a lot of people out there cheering for us. Tmtoulouse (talk) 02:26, 29 November 2012 (UTC) - That's pretty heartening. ГенгисYou have the right to be offended; and I have the right to offend you. 02:51, 29 November 2012 (UTC) - Had I hit the Powerball we'd have Rationalwiki for life, and perhaps a RW scholarship fund for science majors. Guess that'll have to wait. Aboriginal Noise What the ... 14:31, 29 November 2012 (UTC) Appealing to simple folks[edit] - Appealing to ordinary people rather than to an intellectual elite appears to be working. I feel we should aim to publish material of interest to a wide range of readers above average intelligence from 5th and 6th formers preparing for university through to undergraduates and graduates. Proxima Centauri (talk) 12:49, 30 November 2012 (UTC) - How is that different from what we already do? WeaseloidMethinks it is a Weasel 13:08, 30 November 2012 (UTC) It is what we're doing now and we should stick with what works, if it ain't broke, don't fix it. Proxima Centauri (talk) 13:21, 30 November 2012 (UTC) - Thank you, Captain Obvious. -- Nx / talk 13:28, 30 November 2012 (UTC) - I could point out some serious problems with that philosophy... d hominem 15:04, 30 November 2012 (UTC) - I've had a few hours to consider what I wrote above and I'm less sure I was right, there's one problem I've noticed. The most likely victims of scams we expose are below average intelligence and/or are below average education. We aren't reaching them. For the moment let's stick with what works, later let's consider publishing a section of the wiki in simple English. Can Simple English Wikipedia show us how to do this. 
Proxima Centauri (talk) 18:36, 30 November 2012 (UTC) - Can it show you how to write in simple English? Theory of Practice Still tryin' to figure it all out. 18:39, 30 November 2012 (UTC) - No. It's near impossible to make a nuanced argument in their primary school English. However, making sure the lede stands on its own as a soundbyte summary is good practice and not always followed. JzG (talk) 19:33, 30 November 2012 (UTC) - We can keep nuanced arguments for the main body of RationalWiki, we need warnings about scams in simple English and that needn't be nuanced. A separate website is worth considering because a section for less gifted readers here would be humiliating. I'm not suggesting doing anything straight away, it's better if we consider what to do for several months first. Proxima Centauri (talk) 19:56, 30 November 2012 (UTC) - How about a section for less-gifted writers, Proxima? Would that be humiliating? Theory of Practice Still tryin' to figure it all out. 20:02, 30 November 2012 (UTC) - What about trolls? Proxima Centauri (talk) 20:11, 30 November 2012 (UTC) My college is a circus of incompetence[edit] <angry rant>So, I've spent the last year in a protracted fist fight with the local 4 year college because I'm trying to transfer from a community college. This started the end of last year when I was rejected because I tried to apply late. Fine, my own stupid fault for assuming I'd have more then 3 days from the end of the community college's semester to apply for the college's semester starting in two months. Whatever. So, I apply later (since I cannot apply right then for the next semester!), I get accepted, and I get handy card in the mail telling me what I need to do to actually start classes. I get it, after the enrollment period for the college's next semester, so I cannot take classes. OK, whatever, I can take a few more classes at the community college, no big deal. Then as that semester is wrapping up I get another card in the mail telling me I still need to do things to take classes, and that includes sending in my final transcript from the community college and my high school. I have two days from the grades from the community college getting posted to when I have to enroll in classes. It will take "5-10 business days for transcripts to get to and be processed by the institution". OK, again, this is my own stupid fault for assuming I'd be able to send in my final transcripts after the semester has started so I can start giving them my money. But, on the bright side this is the point where I finally got an account with the school's "my college" bullshit so I can finally see a list of what all I need to do to take classes. That might have been useful like 6 goddamn months ago, but whatever. So, I do some looking and notice that if I take one more semester (ending earlier this month) I'll have two months to do whatever bullshit they still need me to do, but haven't gotten around to telling me about. So, my final grades were posted last Wednesday, but I didn't have the money to pay my school bill until Monday. So, I paid, the check cleared Tuesday, so I sent in my final transcripts, called my high school and had my high school transcripts sent in, still no idea why they needed it, but whatever. So, yesterday I called to make sure I don't need to do anything else that they aren't going to tell me about for a few months otherwise. Turns out, I do. 
Because I originally applied for the spring semester but I couldn't go because they were too stupid to let me know what I needed to do when I could actually get in, I have to re-apply. So, while doing my absolute best to contain my rage, I tried to sign into the "my college" BS, and I cannot remember my password, or apparently any of the security questions, which cycles through a random combination of 2 of the 12, most of which I have two possible answers to ("What is the name of your pet" "I have two. Which would you like me to tell you about?"). So, my account is locked. I think it'll be unlocked today, worst case scenario. Nope. I have to call them, tell my college ID and my personal email. Jesus, that is a brilliant security system. And I've been there a few times for other things (the mandatory campus tour I didn't need to do and meeting with an academic advisor who told me I was set, and then I went home). Each time I spent atleast ten minutes trying to find a fucking park spot, and I was there when there was literally -2 parking spaces. A negative number of parking spaces. I didn't know that was possible. Last time I got a $200 parking ticket for not having a parking permit, when the whole reason I was there, was to find out where the fucking hell to get a goddamn $350 parking permit while on the mandatory campus tour. Which, for the record, did not actually include where to get a fucking parking permit. And Christ, don't get me started on the academic problems I have with this fucking school</angry rant>--Logic and Empricism (talk) 17:14, 29 November 2012 (UTC) - Getting set up with a college can be one of the most frustrating and convoluted processes out there. They often really don't put a lot of effort into streamlining the process and guiding you through it. Good luck getting through it though! Sam Tally-ho! 18:07, 29 November 2012 (UTC) - Wow. Never ceases to amaze me how much of the cost of doing business is instead transferred to the consumer. Q0 (talk) 01:16, 30 November 2012 (UTC) - I don't think this is really a cost, I think it's just laziness or stupidity.--Logic and Empricism (talk) 01:39, 30 November 2012 (UTC) - Is it possible to get there using public transport? --Tweenk (talk) 01:57, 30 November 2012 (UTC) - There's a public park right by, so there's no reason to use the college's parking other then convience/speed. It's really just another point of irritation--Logic and Empricism (talk) 02:55, 30 November 2012 (UTC)
https://rationalwiki.org/wiki/RationalWiki:Saloon_bar/Archive180
CC-MAIN-2022-21
en
refinedweb
Titanic survivors, a guide for your first Data Science project
Introduction
In this article, we are going to go through the popular Titanic dataset and try to predict whether a person survived the shipwreck. You can get this dataset from Kaggle, linked here. This article will be focused on how to think about these projects, rather than the implementation. A lot of beginners are confused as to how to start, when to end, and everything in between; I hope this article acts as a beginner's handbook for you. I suggest you practice the project in Kaggle itself.
The Goal: Predict whether a passenger survived or not. 0 for not surviving, 1 for surviving.
Describing the data
In this article, we will do some basic data analysis, then some feature engineering, and in the end use some of the popular models for prediction. Let's get started.
Data Analysis
Step 1: Importing basic libraries

import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline

Step 2: Reading the data

training = pd.read_csv('/kaggle/input/titanic/train.csv')
test = pd.read_csv('/kaggle/input/titanic/test.csv')
training['train_test'] = 1
test['train_test'] = 0
test['Survived'] = np.NaN
all_data = pd.concat([training,test])
all_data.columns

Step 3: Data Exploration
In this section we will try to draw insights from the data and get familiar with it, so we can create more efficient models.

training.info()
training.describe()
# separate the data into numeric and categorical
df_num = training[['Age','SibSp','Parch','Fare']]
df_cat = training[['Survived','Pclass','Sex','Ticket','Cabin','Embarked']]

Now let's make plots of the numeric data:

for i in df_num.columns:
    plt.hist(df_num[i])
    plt.title(i)
    plt.show()

So as you can see, most of the distributions are quite skewed, except Age, which is fairly close to normal. We might consider normalizing them later on.
Next, we plot a correlation heatmap between the numeric columns:

sns.heatmap(df_num.corr())

Here we can see that Parch and SibSp have a higher correlation, which generally makes sense since parents are more likely to travel with their multiple kids, and spouses tend to travel together.
Next, let us compare survival rates across the numeric variables. This might reveal some interesting insights:

pd.pivot_table(training, index = 'Survived', values = ['Age','SibSp','Parch','Fare'])

The inferences we can draw from this table are:
- The average age of survivors is 28, so young people tended to survive more.
- People who paid higher fares were more likely to survive, more than double. These might be the people traveling in first class. Thus the rich survived, which is kind of a sad story in this scenario.
- In the third column: if you had parents aboard, you had a higher chance of surviving. So the parents might've saved the kids before themselves, thus explaining the rates.
- And if you were a child and had siblings, you had less of a chance of surviving.
Now we do a similar thing with our categorical variables:

for i in df_cat.columns:
    sns.barplot(df_cat[i].value_counts().index,df_cat[i].value_counts()).set_title(i)
    plt.show()

The Ticket and Cabin graphs look very messy, we might have to feature engineer them! Other than that, the rest of the graphs tell us:
- Survived: Most of the people died in the shipwreck; only around 300 people survived.
- Pclass: The majority of the people traveling had tickets to the 3rd class.
- Sex: There were more males than females aboard the ship, roughly double the amount.
- Embarked: Most of the passengers boarded the ship from Southampton.
Now we will do something similar to the pivot table above, but with our categorical variables, and compare them against our dependent variable, which is whether people survived:

print(pd.pivot_table(training, index = 'Survived', columns = 'Pclass', values = 'Ticket', aggfunc = 'count'))
print()
print(pd.pivot_table(training, index = 'Survived', columns = 'Sex', values = 'Ticket', aggfunc = 'count'))
print()
print(pd.pivot_table(training, index = 'Survived', columns = 'Embarked', values = 'Ticket', aggfunc = 'count'))

- Pclass: Here we can see that a lot more people survived from the First class than from the Second or the Third class, even though the total number of passengers in the First class was much smaller than in the Third class. Thus our previous assumption that the rich survived is confirmed here, which might be relevant to model building.
- Sex: Most of the women survived, and the majority of the males died in the shipwreck. So it looks like the saying "Women and children first" actually applied in this scenario.
- Embarked: This doesn't seem very relevant, though someone who boarded at Cherbourg may have had a slightly higher chance of surviving.
Step 4: Feature Engineering
We saw that our ticket and cabin data don't really make sense to us, and this might hinder the performance of our model, so we have to simplify some of this data with feature engineering.
If we look at the actual cabin data, we see that there's basically a letter and then a number. The letters might signify what type of cabin it is, where on the ship it is, which floor, which class it is for, etc. And the numbers might signify the cabin number. Let us first split them into individual cabins and see whether someone owned more than a single cabin.

df_cat.Cabin
training['cabin_multiple'] = training.Cabin.apply(lambda x: 0 if pd.isna(x) else len(x.split(' ')))
training['cabin_multiple'].value_counts()

It looks like the vast majority did not have individual cabins, and only a few people owned more than one cabin. Now let's see whether the survival rates depend on this:

pd.pivot_table(training, index = 'Survived', columns = 'cabin_multiple', values = 'Ticket', aggfunc = 'count')

Next, let us look at the actual letter of the cabin they were in. You could expect that cabins with the same letter are roughly in the same locations, or on the same floors, and logically if a cabin was near the lifeboats, its occupants had a better chance of survival. Let us look into that:

# n stands for null
# in this case we will treat null values like their own category
training['cabin_adv'] = training.Cabin.apply(lambda x: str(x)[0])
# comparing survival rates by cabin
print(training.cabin_adv.value_counts())
pd.pivot_table(training, index = 'Survived', columns = 'cabin_adv', values = 'Name', aggfunc = 'count')

I did some feature engineering on the ticket column and it did not yield many significant insights that we didn't already know, so I'll be skipping that part to keep the article concise. We will just divide the tickets into numeric and non-numeric for efficient usage:

training['numeric_ticket'] = training.Ticket.apply(lambda x: 1 if x.isnumeric() else 0)
training['ticket_letters'] = training.Ticket.apply(lambda x: ''.join(x.split(' ')[:-1]).replace('.','').replace('/','').lower() if len(x.split(' ')[:-1]) > 0 else 0)

Another interesting thing we can look at is the title of individual passengers, and whether it played any role in them getting a seat in the lifeboats.
training.Name.head(50)
training['name_title'] = training.Name.apply(lambda x: x.split(',')[1].split('.')[0].strip())
training['name_title'].value_counts()

As you can see, the ship was boarded by people of many different classes and titles; this might be useful for us in our model.
Step 5: Data preprocessing for model
In this segment, we make our data model-ready. The objectives we have to fulfill are listed below (a minimal sketch of these steps, with assumed column choices, is included at the end of this article):
- Drop the null values from the Embarked column.
- Include only relevant data.
- Categorically transform all of the data, using something called a transformer.
- Impute data with the central tendencies for age and fare.
- Normalize the fare column to have a more normal distribution.
- Scale the numeric columns with a standard scaler.
Step 6: Model Deployment
Here we will simply deploy the various models with default parameters and see which one yields the best result. The models can be tuned further for better performance, but that is beyond the scope of this article. The models we will run are:
- Logistic regression
- K Nearest Neighbour
- Support Vector Classifier
First, we import the necessary models:

from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

1) Logistic Regression

lr = LogisticRegression(max_iter = 2000)
cv = cross_val_score(lr, X_train_scaled, y_train, cv = 5)
print(cv)
print(cv.mean())

2) K Nearest Neighbour

knn = KNeighborsClassifier()
cv = cross_val_score(knn, X_train_scaled, y_train, cv = 5)
print(cv)
print(cv.mean())

3) Support Vector Classifier

svc = SVC(probability = True)
cv = cross_val_score(svc, X_train_scaled, y_train, cv = 5)
print(cv)
print(cv.mean())

The cross-validated accuracies of the models are:
- Logistic regression: 82.2%
- K Nearest Neighbour: 81.4%
- SVC: 83.3%
As you can see, we get decent accuracy with all our models, but the best one is SVC. And voila, just like that you've completed your first data science project! Though there is so much more one can do to get better results, this is more than enough to help you get started and see how to think like a data scientist. I hope this walkthrough helped you; I had a great time doing the project myself and hope you enjoy it too. Cheers!!
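Appendix: a sketch of Step 5. The article lists the preprocessing objectives above without showing the corresponding code, so here is one minimal way they could be implemented, ending with the X_train_scaled and y_train objects that Step 6 uses. This is not the original notebook's code: the median imputation, the log transform of Fare, the use of pd.get_dummies, and the norm_fare name are assumptions chosen for illustration. It only relies on the training, test and all_data frames from Step 2 and the features engineered in Step 4.

import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler

# recompute the Step 4 features on the combined frame so the test set gets them too
all_data['cabin_multiple'] = all_data.Cabin.apply(lambda x: 0 if pd.isna(x) else len(x.split(' ')))
all_data['cabin_adv'] = all_data.Cabin.apply(lambda x: str(x)[0])
all_data['numeric_ticket'] = all_data.Ticket.apply(lambda x: 1 if x.isnumeric() else 0)
all_data['name_title'] = all_data.Name.apply(lambda x: x.split(',')[1].split('.')[0].strip())

# impute age and fare with central tendencies, drop the rows with a missing Embarked value
all_data.Age = all_data.Age.fillna(training.Age.median())
all_data.Fare = all_data.Fare.fillna(training.Fare.median())
all_data.dropna(subset=['Embarked'], inplace=True)

# normalize the skewed fare column
all_data['norm_fare'] = np.log(all_data.Fare + 1)

# keep only the relevant columns and one-hot encode the categorical ones
all_data.Pclass = all_data.Pclass.astype(str)
all_dummies = pd.get_dummies(all_data[['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'norm_fare',
                                        'Embarked', 'cabin_adv', 'cabin_multiple',
                                        'numeric_ticket', 'name_title', 'train_test']])

# split back into train/test sets
X_train = all_dummies[all_dummies.train_test == 1].drop(['train_test'], axis=1)
X_test = all_dummies[all_dummies.train_test == 0].drop(['train_test'], axis=1)
y_train = training[training.Embarked.notna()].Survived

# standardize the numeric columns (the article calls this scaling with a standard scaler)
scale = StandardScaler()
X_train_scaled = X_train.copy()
X_test_scaled = X_test.copy()
X_train_scaled[['Age', 'SibSp', 'Parch', 'norm_fare']] = scale.fit_transform(X_train_scaled[['Age', 'SibSp', 'Parch', 'norm_fare']])
X_test_scaled[['Age', 'SibSp', 'Parch', 'norm_fare']] = scale.transform(X_test_scaled[['Age', 'SibSp', 'Parch', 'norm_fare']])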
https://www.analyticsvidhya.com/blog/2021/05/titanic-survivors-a-guide-for-your-first-data-science-project/
CC-MAIN-2022-21
en
refinedweb
This project uses the TLE493D magnetic sensor and the MAX32620FTHR for a portable home security system. You can monitor when windows, doors, or pretty much any open/close system's state change. You will need to test both the magnetic sensor and the MAX32620FTHR microcontroller. If you have the TLE493D 2GO kit, you can use the Arduino IDE to test your board: When you view the serial monitor, you should expect values like the ones shown below depending on how you're printing them: 2.73 ; 2.73 ; 0.52 2.47 ; 2.73 ; 0.52 2.60 ; 2.86 ; 0.52 2.47 ; 2.47 ; 0.52 You should play around with the sensor separately to determine the ranges for when the magnet is near the sensor (closed state for the door/window) vs when the magnet is farther way. Depending on where you apply this, this will have different results. For the MAX32620FTHR, you can find the Mbed page here. Try the blinky example as that will get you started and the board tested. Make sure you select the right board on the top right corner: Sometimes you need to unplug and replug the microcontroller or hit the reset button to get something like this: You can either try i2c by breaking off the magnetic sensor from the TLE493D board at the break line or use the UART pins on the XMC. The pinout for the TLE493D sensor is available here. Here's the potential schematic for i2c (connect the MAX to a battery): For the magnet sensor, you can read the values off Bx, By, or Bz. I suggest to use one of these registers and rely on that to figure out when your window/door is opened or closed. Here's the register diagram take from the manual: Here's some example code that will blink the onboard led on the microcontroller: #include "mbed.h" I2C i2c(I2C1_SDA, I2C1_SCL); DigitalOut myled(LED1, 1); int main() { char buff[2]; float OPEN_STATE_THRESHOLD = 0.33; // Value that you find as open state int magnetRegister = 0x01; // register you want to read from for (;;) { i2c.read(magnetRegister << 1, buff, 2); float reading = float((buff[0]<<8)|buff[1]); if (reading < OPEN_STATE_THRESHOLD) { myled = 1; wait(0.2); myled = 0; wait(0.2); } Thread::wait(1000); } }
https://www.hackster.io/exp0nge/magnetic-home-security-monitor-ebc39e
CC-MAIN-2022-40
en
refinedweb
This is the ninth and the last part of my Spring Data JPA tutorial. Now it is time to take a look at what we have learned and how we should use it to build better software.
Table of Contents
The contents of my Spring Data JPA tutorial are given below:
- Part One: Configuration
- Part Two: CRUD
- Part Three: Custom Queries with Query Methods
- Part Four: JPA Criteria Queries
- Part Five: Querydsl
- Part Six: Sorting
- Part Seven: Pagination
- Part Eight: Adding Functionality to a Repository
- Part Nine: Conclusions
The next step is to take a look at the advantages provided by Spring Data JPA and learn how we can use it in an effective manner.
Promises Kept
The stated goal of the Spring Data JPA project is to significantly improve the implementation of data access layers by reducing the effort to the amount that is actually needed. This is a lot to promise. The question is, has Spring Data JPA achieved its goal? As you have learned from my tutorial, Spring Data JPA has the following advantages over the "old school" method of building JPA repositories:
- It provides CRUD capabilities for any domain object without the need of any boilerplate code.
- It minimizes the amount of source code needed to write custom queries.
- It offers simple abstractions for performing common tasks like sorting and pagination.
The thing is that implementing these functions has forced developers to write a lot of boilerplate code in the past. Spring Data JPA changes all this. It minimizes the amount of code needed for implementing repositories.
Making It Work for You
I hate the term best practices because it has a negative effect on continuous improvement. However, I still feel that it is my responsibility to give you some guidance concerning the usage of Spring Data JPA. Here are my five cents about this matter:
Creating Queries
Your goal should be to use Spring Data JPA to reduce the amount of code you have to write. With this goal in mind, I will give you some guidelines for creating queries with Spring Data JPA:
- If the query can be built by using the query generation from method name strategy, I think you should use it. However, if the method name would become long and messy, I would consider using the @Query annotation in order to make the source code more readable.
- Your second option for creating queries should be the @Query annotation and JPQL. This approach ensures that you will not have to write more code than is necessary.
- Use the JPA Criteria API or Querydsl only when you have no other options. Remember to extract the query generation logic into separate classes that create Specification or Predicate objects (depending on your technology selection).
JPA Criteria API Versus Querydsl
This is a question which should be asked by each developer. The usage of the JPA Criteria API is usually justified by claiming that you can use it to build type-safe queries. Even though this is true, you can achieve the same goal by using Querydsl. The first round ends in a draw, and we need to dig a bit deeper for the answer. I will compare these two options in the following categories: readability and testability.
Readability
Programs must be written for people to read, and only incidentally for machines to execute - Abelson and Sussman on Programming.
With this guideline in mind, let's take a look at the implementations that I created for my previous blog entries. The requirements of the search function are as follows:
- The search must be case insensitive. First, lets take a look of the implementation which is using the JPA Criteria API. The source code of my static meta model is given in following: @StaticMetamodel(Person.class) public class Person_ { public static volatile SingularAttribute<Person, String> lastName; } The source code of my specification builder class is given in following: public class PersonSpecifications { /** * Creates a specification used to find persons whose last name begins with * the given search term. This search is case insensitive. * @param searchTerm * @return */(); } }; } } Second, the source code of the implementations which uses Querydsl is given in following: public class PersonPredicates { public static Predicate lastNameIsLike(final String searchTerm) { QPerson person = QPerson.person; return person.lastName.startsWithIgnoreCase(searchTerm); } } This use case is pretty simple but it can still be used for demonstrating the differences of the JPA Criteria API and the Querydsl. The source code written by using Querydsl is clearly more readable than the one using the JPA Criteria API. Also, when the queries become more complex, the difference will be much bigger. I would say that this round goes to Querydsl. Testability Software testability is the degree to which a software artifact (i.e. a software system, software module, requirements or design document) supports testing in a given context. In other words, the testability of your code defines the amount and quality of tests you can write at the same cost. If the testability of your code is high, you can write more tests with better quality than in a situation where the testability of your code is low. Lets keep this measurement in mind when we will compare the unit tests written for implementations which were presented earlier. First, lets check out the unit test for the implementation which uses the JPA Criteria API:); } } Second, the unit test for the implementation using Querydsl is given in following:); } } After seeing the unit tests for both implementations, it should be obvious that writing unit tests for Querydsl is much easier than writing unit tests for the JPA Criteria API. Also, the unit test written to test the Querydsl predicate builder is much easier to understand. This is valuable because unit tests should also be used to document the behavior of the system. At this point it should be clear that the winner of this round is Querydsl PS. I am aware that unit tests do no ensure that the results returned by the created query are correct. However, I believe that they are still valuable because running unit tests is typically dramatically faster than running integration tests. It is still good to understand that in the context of integration testing, the testability of both implementations is equal. Conclusions The question is: Should I use the JPA Criteria API or Querydsl? It depends. If you are starting from scratch and you have a total control over your technology selections, you should at least consider using Querydsl. It makes your code easier to write and read. It also means that writing unit tests for your code is simpler and faster. On the other hand, if you are modifying an existing system to use Spring Data JPA, and the existing code is using the JPA Criteria API, you might want to continue using it for the sake of consistency. The fact is that there is no right answer for this question. The answer depends always from external requirements. 
The only thing you can do, is to ensure that you are aware of the different options, which are available to you. Only then you can choose the right tool for the task in hand. There is Still More to Learn The truth is that I have only scratched the surface of implementing JPA based repositories. I hope that the recommendations given in this blog entry will help you to take the first step, but I have to admit that there is a lot more to learn. I hope that the following resources will help you in your journey: Reference Documentation JPA Criteria API 2.0 - Dynamic, Typesafe Query in JPA 2.0 - JPA Criteria API by Samples Part I and Part II - Using the Criteria API to Create Queries - The Java EE 6 Tutorial This has been awesome stuff. Thanks for sharing. Especially the github projects are really valuable. Antti, Thanks for the feedback. It is great to hear that you have find this tutorial useful. Nice blog post again. One other benefit of Querydsl compared to the JPA 2 Criteria API in the Spring Data context is that it is also available for some other backends, at the moment MongoDB and JDBC. Timo, thanks for your comment. The support for MongoDB and JDBC is indeed a strong benefit of Querydsl. However, I feel that there might be some work to be done in order improve the general awareness about Querydsl. does querydsl support postgres Yes. However, nowadays I am using jOOQ because it has a better API (in my opinion). After some days googling I couldn't find a better Spring JPA tutorial than this one, Spring data team should hire you to write some documentation and working examples for them, I read their Spring JPA doc and it is vague compared to this one, very well done. Guido, Thanks for your comment. I am happy to hear that you found this tutorial useful. Some parts of this tutorial contain a bit outdated information since the tutorial is based on 1.0.2 version of Spring Data JPA. However, I have written a book called Spring Data Standard Guide that is an extended edition of this tutorial. This book covers the usage of Spring Data JPA 1.2.0 and Spring Data Redis 1.0.1. Very nice, I'll get the book ASAP, I was about to ask questions but a book should answer most, I have though one question of a matter of opinion, JPA reporsitories are nice, but what if you need to build your query base on optional parameters? Let's say, you have a table of events, and you want a list of events between a time frame (findByTimeBetween signature)? Answering the question myself, it seems to me that the best builder pattern like usable for this is the Criteria API (Passing a null to a Between signature method raises an exception), which I like it more soft than hard typed (metamodel), any thoughts on this? Or, does the book cover stuff like that? Hi Guido, You have two options (you already figured the first one out): The book covers both of these situations. Also, the book has 11 different implementations of a simple contact manager application that use Spring Data JPA. 7 of those applications demonstrate the different query creation techniques (1 technique per application) and 4 demonstrate other concepts. These applications makes it easy to start experimenting right away (it can be frustrating to start building application from scratch if you are not sure how things work). Also, if you have more questions, I will be happy to answer them. 
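For readers following the exchange above about queries with optional parameters, here is a small sketch of the first option: a JPA Criteria API Specification that simply skips parameters that are null. It is not code from the original discussion or from the book; the Event entity, its startTime field, and the method name are hypothetical, chosen only to mirror the findByTimeBetween example mentioned in the question.

import java.util.ArrayList;
import java.util.Date;
import java.util.List;

import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Predicate;
import javax.persistence.criteria.Root;

import org.springframework.data.jpa.domain.Specification;

public class EventSpecifications {

    // Sketch only: Event and startTime are hypothetical names used for illustration.
    public static Specification<Event> timeBetween(final Date from, final Date to) {
        return new Specification<Event>() {
            @Override
            public Predicate toPredicate(Root<Event> root, CriteriaQuery<?> query, CriteriaBuilder cb) {
                List<Predicate> predicates = new ArrayList<Predicate>();

                if (from != null) {
                    predicates.add(cb.greaterThanOrEqualTo(root.<Date>get("startTime"), from));
                }
                if (to != null) {
                    predicates.add(cb.lessThanOrEqualTo(root.<Date>get("startTime"), to));
                }

                // with no parameters given, an empty conjunction means "no restrictions"
                return cb.and(predicates.toArray(new Predicate[predicates.size()]));
            }
        };
    }
}

A repository interface that also extends JpaSpecificationExecutor<Event> can pass such a specification to findAll(Specification, Pageable); Spring Data then runs both the content query and the count query needed to build the returned Page, which also addresses the custom Pageable concern raised later in this thread.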
Hi Petri, I just overview the whole book, it is very nice, I do feel it is missing one or two chapters with some corner cases scenario, not that I wanted one specific scenario in it, but it would help to shed some light/ideas. There is still the scenario that I described to you before about dynamic queries with JPA 2 Criteria API, our project is kind of overloaded already and for the use cases we have I don't think we are going to into any specific complex scenario, for such cases I would just use standard JPQL queries. To complete my puzzle I need just one idea, I don't want to go thru the hassle of creating my own custom Pageable by using .count(...), so I was thinking if there is a way to link Criteria API with Spring Pageable interface or PageRequest class. I know, in the worst scenario I will just create a Generic method where I pass a Predicate, a Sort and a Page request to manually execute two queries (one to count and another which will actually do the job) base on the passed Predicate. Here is a working example, but it is missing the link between Pageble and Criteria, we use this with JPA interceptor (new code in progress) which has every object backed by Riak (Think of it Redis like with consistency and availability in our case, fast KV): @Repository public class UpdateChunkServiceImpl implements UpdateChunkService { @PersistenceContext private EntityManager entityManager; @Override public List<UpdateChunk> findByChunkTypeAndTimeBetween(final UpdateChunkType chunkType, final DateTime fromDate, final DateTime toDate, int pageIndex) { final CriteriaBuilder criteriaBuilder=entityManager.getCriteriaBuilder(); CriteriaQuery<UpdateChunk> criteriaQuery=criteriaBuilder.createQuery(UpdateChunk.class); final Root<UpdateChunk> root=criteriaQuery.from(UpdateChunk.class); final Path<DateTime> timePath=root.get("time"); final Path<Integer> chunkTypePath=root.get("chunkType"); Predicate predicate=null; if(chunkType != null){ predicate=addAndPredicate(criteriaBuilder, predicate, criteriaBuilder.and(criteriaBuilder.equal(chunkTypePath, chunkType.getTypeId()))); } if(fromDate != null){ predicate=addAndPredicate(criteriaBuilder, predicate, criteriaBuilder.and(criteriaBuilder.greaterThanOrEqualTo(timePath, fromDate))); } if(toDate != null){ predicate=addAndPredicate(criteriaBuilder, predicate, criteriaBuilder.and(criteriaBuilder.lessThanOrEqualTo(timePath, toDate))); } if(predicate != null){ criteriaQuery=criteriaQuery.where(predicate); } criteriaQuery=criteriaQuery.orderBy(criteriaBuilder.desc(timePath)); return entityManager.createQuery(criteriaQuery).getResultList(); } private Predicate addAndPredicate(final CriteriaBuilder criteriaBuilder, final Predicate oldPredicate, final Predicate newPredicate) { return oldPredicate != null ? criteriaBuilder.and(oldPredicate, newPredicate) : newPredicate; } } Thanks for taking a look at my book! I agree that it lacks "advanced" concepts but I was under a strict page count limit given by the publisher which made it practically impossible to add these concepts to the book. We were originally planning to add a chapter about Spring Data Hadoop as well but the page count limit made it impossible. I ended up publishing that chapter in my blog. About your problem: Are you trying to figure out a way to implement this piece code with Spring Data JPA or just use parts of it in your implementation? If you want to use Spring Data JPA, you have to follow the steps described in the seventh part of my Spring Data JPA tutorial. 
If you want to use only parts of it and build your own pagination logic, you could find some answers from the source code of the SimpleJpaRepository class. Check the private readPage(TypedQuery<T> query, Pageable pageable, Specification<T> spec) method that is used to read the objects belonging to the requested page from the database. My plans are to use strictly Spring Data JPA wired with Hibernate 3.6.10.Final because version 4 has no support yet for Joda time and few other things, using Spring data repositories for the simple scenarios and Criteria API for more complex scenarios, then by using Hibernate interceptor manage the backed Riak data, think of it as a JPA hybrid JPA where you can do SQL for filtering and KV for fetching, of course, storing data will still do its part in SQL for very few fields, like ID, time, some categories, but the raw data will only be stored in NoSQL, we want to follow a simple standard like the combination of Spring Data + JPA 2, simplicity of proxying injection and at the same time the complexity of our other layer (NoSQL) So are objects have Jackson annotations AND JPA annotations which with the aid of Hibernate interceptors will do both with the same @Persistent @Json instance. So basically you want to: Is there some reason why you prefer using Hibernate interceptors instead of simply getting the raw data from Riak after you have received the ids? I have not personally used Hibernate interceptors so I am kind of shooting blind here. However, I took a quick look of the Javadoc and you might be able to use them if you: @Transientannotation. UpdateChunkclass as a "normal" entity, and build the query executed against the relational database by following the approach described in my Spring Data JPA tutorial that talks about pagination. This is getting interesting. I definitely want to know if this works. UpdateChunk as you guessed (you basically read my mind), has several annotations per method, for example, if a field will be ONLY stored at Riak, then it is @Transient, if it is a pseudo property then it is annotated with @Transient and @JsonIgnore, so basically I have a hybrid ORM, except that my relational part is very minimal, the idea behind SQL is only to provide a set of indexes for search and filtering purpuses, you could manuall fetch with multiget using the IDs, but I have used Hibernate interceptors before and they are ... lets say they are faster than doing the job manually OR using AOP. Lets say, List findBy... will do the whole job, hybridly speaking, my only draw back, which it is a matter of taste, is that I don't like Query DSL, at advanced applications you usually do either of the following things: 1) Standard JPA repo for most queries (which supports pagination) 2) Use custom queries either using entity manager and building your own query. 3) Typesafe using criteria, I like more the Type safe idea because the mapping is resolved, specially if your project has custom mapping like joda time, which by just adding a jar works: @Type(type="org.joda.time.contrib.hibernate.PersistentDateTime") I have another layer for Riak, which will be called by Hibernate interceptor, with mutation and all that Riak implies, with its own caching (Using Google's Guava 13.0.1 framework) so doing the multiget will kind of have its own 2nd level cache so it will be fast. 
But as you know, Redis, Memcache, CouchDB, and most KV NoSQL DBs tend to have poor indexing/search like API, so we have to have a Hybrid model, we even have Solr 4 which we use for other type of Docs searching. Thanks for explaining the "theory" behind the decision to use the "hybrid" model. It was interesting and I definitely agree that index and search APIs of NoSQL databases tend to be poor (At least when you compare them with relational databases). Hi Guido, You'd have some direction on how I could inject a JPA repository into an Hibernate interceptor ? I would like to log in a table Hibernate operations using an AuditLogRepository but it is not seen by Hibernate. Cheers, Hi Stephane, I noticed that you asked you the same question here: Spring managed Hibernate interceptor in JPA. This gave me an idea. Maybe you can do something like this: @Configurationclass or add it as a method parameter to the @Beanmethod that creates the Hibernate interceptor object. @Beanmethod by creating the Hibernate interceptor. Remember to pass the JPA repository to it. Naturally I haven't tried this myself, but it could work. Our idea is to use POJO which will be convert back and forth to JSON, stored in Riak NoSQL fully and only few of their properties stay in SQL for query/indexing purposes, so our focus in JPA is just for filtering, paging and stuff, using JPA/Hibernate to update objects and interceptors to populate to Riak, most Objects will just in Riak with few in SQL. Yes, you are right, the source code pointer you sent answers my questions, thanks a million. Now my puzzle is completed, to be honest, it seems like Spring Data is bigger than what I thought, you need another 200 pages from your publisher and an Advanced Spring Data book. Hey, good to know that you found the answer you were looking for! Now I am wondering if my lucky shot described above works. I am wondering if I should test it myself ;) As for the Hibernate interceptor, it is easier to use than what you think, explained on this reference: Thanks for the learning experience. It is always nice to learn new things! I have successfully implemented what I called phase 1, I couldn't do it in Hibernate, I had to switch to EclipseLink 2.4.1 which to be honest for JPA 2+ I think it will have a better future, when you annotate a property with @Transient, no interceptor in the world see the value and hence it is lost for Riak (@PostInsert and @PostSave) So EclipseLink as this @CloneCopyPolicy which allows you to specify a method which will create a of your Entity including its transient properties, so this method which I called cloneThis basically creates a BeanWrapper instance and copy each not null property to a new instance. To resume, the Entities backed with NoSQL require two additional annotations: @CloneCopyPolicy and @EntityListeners, then magic happens, the entity listener also have methods for @PostLoad and @PostRemove. Where can I post or send you my @Configuration class for EclipseLink?, It is almost the same as the one of your tutorial, it requires for a better performance the ReflectiveLoadTimeWeaver which has to be configured at bootstrap at the application servers (Had to google so much) BTW, I finished today most of our new API, finally, I went for EntityListener called JPA2RiakListener, since I don't mind annotating the POJO with the listener class, and it will remove the overhead of calling the listener when an Entity is backed by Riak. 
It was really hard to create a JPAQueryUtil for count and pagination because of some stupid error with CriteriaQuery and generatedAlias, I meant, making a Generic type like Pageable method where it accept a CriteriaQuery and Order, I had to mix some class and generate the alias by myself, I'll post the code when I give a better form to the code (clean up and stuff) Hi, after I read your book and and make de tutorials that you brote, we take de desicion to make a project for about 150 enttities, Already finish the construcction of the backend... but in the construccion of the front-end I feel that the team spent too much time. it seems to elaboraiting to much boilerplate. so de question is in sense to suggest me a set of frameworks to acelerate de work?. what do you think about Spring Roo with Spring Data? First, I would like to thank you for reading my book and tutorials. I hope that they were worth your time. Could you describe what were the biggest reasons of writing boilerplate code? This would help me to give a better answer to your question. However, one very common task that requires boilerplate code is transforming DTO objects into model objects and vice versa. There are several libraries which help you to reduce the amount of required code: I have no experience from Spring Roo (I have not used it) so I have no idea if it is useful. The problem of code generation (in general) is that I want to be in control of my code base because this way I know that the code is good enough. Code generators can help you to create the skeleton of the application but naturally they cannot write the actual logic for you (except maybe in simple cases). This being said, I think that it might be worth to give Spring Roo a shot. You should never judge something which you haven't used yourself. There are a couple of "related" Spring projects which you might find useful as well: I hope that my answer was useful to you. If you can shed more light on the reasons of writing boilerplate code, I am more than happy to continue this discussion! I am searching for general benchmarks for performance of SPRING DATA. Is there some blog or articles on performance of Spring Data JPA vs just OpenJPA? I am not aware of such blog post or benchmark :( This is a shame because it would be interesting to know the overhead caused By Spring Data. If you happen to find such a benchmark, it would be nice if you could let me know about it. Hello Petri, Thank you very much for your articles .... If we use spring data JPA , what would be the best UI frame work you would recommend to integrate with this ? We are looking for some thing that can help faster development and also less experienced developers should be able to learn fast. JSF/wicket/struts/spring mvc ........ like this a list is being proposed ... Regards MRV Hi, Have you considered using Grails? Although I have no experience from it, it seems to get a lot love from SpringSource right now. Another interesting project is Spring Boot which simplifies the development of Spring powered applications. It is kind of hard to say what is the best UI framework because it depends from many factors. For example, if most of your team members already know JSF and have no experience from Spring MVC, it probably makes no sense to use Spring MVC in your project (unless you want to learn something new). One suggestion though: I wouldn't consider using Struts because it doesn't offer anything which isn't found from Spring MVC. 
Petri, Basically we are looking for some thing like SPRING REST, SPRING DATA JPA stack ... but looks like JSF is not a good candidate for the above stack . I may be wrong but. Grails looks like( yii php frame work) but couldn't find good tutorials that give enough confidence .. Vikas, Maybe you could build your applications by using the "full" Spring stack. In my previous job I did use another web framework (Wicket) with the Spring stack. It seemed to be the right thing to do at the time but now I think that perhaps the "full" Spring stack would have been a better choice. Petri, Do you have any experience on Play frame work ? any idea how good it is when comparing to grails ? Regards Vikas Hi Vikas, I have created a "Hello World" application with Play and Scale so I have no real experience from it. However, I remembered that zeroturnaround.com has published a few rather good articles about Java web application frameworks: Maybe these will answer to your question. Petri, Planning to try Vaadin... thank you very much for your directions. Will update you if any road blocks.... just for sharing experience.....:-) Regards Vikas Hi Vikas, I am very interested to hear your opinion about Vaadin. I have an opinion too but I am not going to reveal it yet. :) Petri, As per one of our brain trusts opinion we decided to try vaadin for only admin modules. As it is state full and consumes more memory due to all built-in widgets/components ...We fear whether it may scale to eCommerce kind off apps where traffic is more ...we are now more to the original plan ... Some thing like Spring data jpa rest mvc + Jquery ... Or Play + Jquery Regards Vikas Hello Petrik, I am very new to the world of Spring and all programming stuff. We use Spring data in our project. I have a simple query as follows "select * from USERS". I also use Pageable to enable pagination. This query may have optional predicates based on the given parameters being null or not. For example if "code" parameter is given and not null, then the query becomes "select * from USERS where code = :code"; As far as I know I cannot implement this using @Query annotation. I can implement a custom repository and use EntityManager to create a dynamic query. However, I am not sure how I can integrate "Pageable" with that to get back paginated results. How can I achieve this? If you need to build dynamic queries, you can use either JPA Criteria API or Querydsl. Both of these techniques support pagination so it shouldn't be a problem. Hi Petrik, Thanks for the reply, This is how i created my query. Can you please tell me how i can use pagination with the same ? I need to return Page instead of collection. public List findCustomers( final String firstName, final String surname) { StringBuilder queryBuilder = new StringBuilder( "select c from Customer where "); List paramList = new ArrayList();.createQuery( queryBuilder.toString()); List resultList = (List)query.getResultList(); // iterate, cast, populate and return a list } Return a page instead of collection / list, can you please help me with that? You cannot use pagination provided by Spring Data JPA if you use entity manager. Let's assume that you can to create the query by using Spring Data JPA and JPA Criteria API. You can do this by following these steps JpaSpecificationExecutor<T>interface (See my Spring Data JPA + Criteria API tutorial for more details about this.) Specification<Customer>objects (See my Spring Data JPA + Criteria API tutorial for more details about this). 
Invoke the findAll(Specification specification, Pageable pageable) method of the JpaSpecificationExecutor<T> interface (see my Spring Data JPA + pagination tutorial for more details about this). I hope that this answered your question.
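Put together, those steps could look roughly like this. It is a rough, untested sketch: the repository interface, the property names, and the newer PageRequest.of factory are assumptions, not code from the original question.

import java.util.ArrayList;
import java.util.List;
import javax.persistence.criteria.Predicate;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Sort;
import org.springframework.data.jpa.domain.Specification;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.JpaSpecificationExecutor;

interface CustomerRepository
        extends JpaRepository<Customer, Long>, JpaSpecificationExecutor<Customer> {
}

final class CustomerSpecifications {

    // Builds the dynamic predicate: criteria that are null are simply skipped.
    static Specification<Customer> nameMatches(final String firstName, final String surname) {
        return (root, query, cb) -> {
            List<Predicate> predicates = new ArrayList<>();
            if (firstName != null) {
                predicates.add(cb.equal(root.get("firstName"), firstName));
            }
            if (surname != null) {
                predicates.add(cb.equal(root.get("surname"), surname));
            }
            return cb.and(predicates.toArray(new Predicate[0]));
        };
    }
}

// Usage (e.g. inside a service method): one page of matching customers is returned.
Page<Customer> page = customerRepository.findAll(
        CustomerSpecifications.nameMatches("John", "Doe"),
        PageRequest.of(0, 20, Sort.by("surname").ascending()));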
https://www.petrikainulainen.net/programming/spring-framework/spring-data-jpa-tutorial-part-nine-conclusions/
CC-MAIN-2022-40
en
refinedweb
Hi, Event receivers are an effective way to add triggers to a SharePoint solution. To create a simple event receiver, follow these steps:
- Open the SharePoint site and create a new list called SPEventList; leave the list with the default Title column.
- Open Visual Studio –> New Project –> select Event Receiver in the SharePoint 2010 project template folder.
- Give the project a name and select "Deploy as Farm Solution" for the security level option.
- When prompted in the wizard, select the List Item Events option as the type of event receiver we want to associate with the event. Select the Announcements list as the event source and "An item is being added" as the specific event, and then click Finish.

In the event receiver class file add the following code:

using System;
using System.Security.Permissions;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Security;
using Microsoft.SharePoint.Utilities;
using Microsoft.SharePoint.Workflow;

namespace EventReceiverProject1.EventReceiver1
{
    /// <summary>
    /// List Item Events
    /// </summary>
    public class EventReceiver1 : SPItemEventReceiver
    {
        /// <summary>
        /// An item is being added.
        /// </summary>
        public override void ItemAdding(SPItemEventProperties properties)
        {
            base.ItemAdding(properties);
            string eventName = "Event List:";
            LogAnAnnouncement(properties, eventName);
        }

        private void LogAnAnnouncement(SPItemEventProperties properties, string eventName)
        {
            string listTitle = properties.List.Title;
            string siteURL = ""; // set this to the URL of your site collection
            DateTime currentDate = DateTime.Now;
            using (SPSite mySPSiteCollection = new SPSite(siteURL))
            {
                using (SPWeb mySPSite = mySPSiteCollection.RootWeb)
                {
                    SPList mySPList = mySPSite.Lists["SPEventList"];
                    SPListItem newListItem = mySPList.Items.Add();
                    newListItem["Title"] = eventName + listTitle + " @ " + currentDate.ToLongTimeString();
                    newListItem.Update();
                }
            }
        }
    }
}

Build and deploy the Event Receiver project to the SharePoint site. Now go to the SharePoint site –> Announcements list –> Add new item. After saving the new item, go to SPEventList (which we created earlier) and you will see a new list item.

An event receiver is a custom DLL that is deployed to the global assembly cache (GAC) on the SharePoint server. Using the project template, Visual Studio creates a feature that then references the custom assembly in the GAC when the action that triggers the event occurs. Here we added an event receiver that is triggered whenever someone adds an item to the Announcements list.
http://iamsiva.com/blog/2015/05/event-receiver-in-sharepoint/
CC-MAIN-2022-40
en
refinedweb
Each entry in an XmlDictionary is a pair of Key and Value. This key-value pair can be used to represent
- an element name,
- an attribute name, or
- an XML namespace declaration.

The XmlDictionary class uses XmlDictionaryString objects to create the key-value pairs. In the code below, we create an XmlDictionary object and a List of XmlDictionaryString. dct is the XmlDictionary object; each time a string value is added to it, it returns an XmlDictionaryString object.

XmlDictionary dct = new XmlDictionary();
List<XmlDictionaryString> lstData = new List<XmlDictionaryString>();
lstData.Add(dct.Add("Bat"));
lstData.Add(dct.Add("Ball"));
lstData.Add(dct.Add("Wicket"));
foreach (var r in lstData)
{
    Console.WriteLine("Key = {0} and Value = {1}", r.Key, r.Value);
}
Console.ReadKey(true);

If you run the above code, each entry's Key and Value is printed to the console. We are simply adding the returned XmlDictionaryString objects to a list of XmlDictionaryString. I hope now you have an understanding of the XmlDictionary class. In the next post we will discuss some other important classes required to understand Message in WCF. Thanks for reading
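An entry can also be looked up again by its value with TryLookup. The small sketch below reuses the dct dictionary from above:

using System.Xml;

// Look up an existing entry by its string value.
XmlDictionaryString found;
if (dct.TryLookup("Ball", out found))
{
    Console.WriteLine("Found entry: Key = {0}, Value = {1}", found.Key, found.Value);
}
else
{
    Console.WriteLine("No entry for that value.");
}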
https://debugmode.net/2011/06/18/internal-classes-to-understand-wcf-message-xmldictionary-class/
CC-MAIN-2022-40
en
refinedweb
Agile/Scrum Training Classes in Edmond, Oklahoma Learn Agile/Scrum in Edmond, Oklahoma and surrounding areas via our hands-on, expert led courses. All of our classes either are offered on an onsite, online or public instructor led basis. Here is a list of our current Agile/Scrum related training offerings in Edmond, - Introduction to Spring 5, Spring Boot, and Spring REST (2021) 17 October, 2022 - 21 October, 2022 - Advanced C++ Programming 12 December, 2022 - 16 December, 2022 - Intermediate - Advanced Java 11 3 October, 2022 - 7 October, 2022 - DP-100: DESIGNING AND IMPLEMENTING A DATA SCIENCE SOLUTION ON AZURE 17 October, 2022 - 19 October, 2022 - file exists is a two step process in Python. Simply import the module shown below and invoke the isfile function: import os.path os.path.isfile(f…
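Checking whether a file exists, as the note above mentions, is a two-step process — import os.path, then call isfile() with the path to check. A minimal example (the file name is just an example):

import os.path

if os.path.isfile("example.txt"):
    print("example.txt exists and is a regular file")
else:
    print("example.txt was not found")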
https://hartmannsoftware.com/Training/Scrum/Edmond-Oklahoma
CC-MAIN-2022-40
en
refinedweb
Colorful, flexible, lightweight logging for Swift 3, Swift 4 & Swift 5. Great for development & release with support for Console, File & cloud platforms. Log during release to the conveniently built-in SwiftyBeaver Platform, the dedicated Mac App & Elasticsearch! Docs | Website | Twitter | Privacy | License During Development: Colored Logging to Xcode Console Learn more about colored logging to Xcode 8 Console with Swift 3, 4 & 5. For Swift 2.3 use this Gist. No need to hack Xcode 8 anymore to get color. You can even customize the log level word (ATTENTION instead of ERROR maybe?), the general amount of displayed data and if you want to use the 💜s or replace them with something else 😉 During Development: Colored Logging to File Learn more about logging to file which is great for Terminal.app fans or to store logs on disk. On Release: Encrypted Logging to SwiftyBeaver Platform Learn more about logging to the SwiftyBeaver Platform during release! Browse, Search & Filter via Mac App Conveniently access your logs during development & release with our free Mac App. On Release: Enterprise-ready Logging to Your Private and Public Cloud Learn more about legally compliant, end-to-end encrypted logging your own cloud with SwiftyBeaver Enterprise. Install via Docker or manual, fully-featured free trial included! Google Cloud & More You can fully customize your log format, turn it into JSON, or create your own destinations. For example our Google Cloud Destination is just another customized logging format which adds the powerful functionality of automatic server-side Swift logging when hosted on Google Cloud Platform. Installation - For Swift 4 & 5 install the latest SwiftyBeaver version - For Swift 3 install SwiftyBeaver 1.8.4 - For Swift 2 install SwiftyBeaver 0.7.0 Carthage You can use Carthage to install SwiftyBeaver by adding that to your Cartfile: Swift 4 & 5: github "SwiftyBeaver/SwiftyBeaver" Swift 3: github "SwiftyBeaver/SwiftyBeaver" ~> 1.8.4 Swift 2: github "SwiftyBeaver/SwiftyBeaver" ~> 0.7 Swift Package Manager For Swift Package Manager add the following package to your Package.swift file. Just Swift 4 & 5 are supported: .package(url: "", .upToNextMajor(from: "1.9.0")), CocoaPods To use CocoaPods just add this to your Podfile: Swift 4 & 5: pod 'SwiftyBeaver' Swift 3: target 'MyProject' do use_frameworks! # Pods for MyProject pod 'SwiftyBeaver', '~> 1.8.4' end Usage Add that near the top of your AppDelegate.swift to be able to use SwiftyBeaver in your whole project. import SwiftyBeaver let log = SwiftyBeaver.self At the the beginning of your AppDelegate:didFinishLaunchingWithOptions() add the SwiftyBeaver log destinations (console, file, etc.), optionally adjust the log format and then you can already do the following log level calls globally: // add log destinations. at least one is needed! let console = ConsoleDestination() // log to Xcode Console let file = FileDestination() // log to default swiftybeaver.log file let cloud = SBPlatformDestination(appID: "foo", appSecret: "bar", encryptionKey: "123") // to cloud // use custom format and set console output to short time, log level & message console.format = "$DHH:mm:ss$d $L $M" // or use this for JSON output: console.format = "$J" // add the destinations to SwiftyBeaver log.addDestination(console) log.addDestination(file) log.addDestination(cloud) // // log anything! 
log.verbose(123) log.info(-123.45678) log.warning(Date()) log.error(["I", "like", "logs!"]) log.error(["name": "Mr Beaver", "address": "7 Beaver Lodge"]) // optionally add context to a log message console.format = "$L: $M $X" log.debug("age", context: 123) // "DEBUG: age 123" log.info("my data", context: [1, "a", 2]) // "INFO: my data [1, \"a\", 2]" Server-side Swift We ❤️ server-side Swift 4 & 5 and SwiftyBeaver supports it out-of-the-box! Try for yourself and run SwiftyBeaver inside a Ubuntu Docker container. Just install Docker and then go to your the project folder on macOS or Ubuntu and type: # create docker image, build SwiftyBeaver and run unit tests docker run --rm -it -v $PWD:/app swiftybeaver /bin/bash -c "cd /app ; swift build ; swift test" # optionally log into container to run Swift CLI and do more stuff docker run --rm -it --privileged=true -v $PWD:/app swiftybeaver Best: for the popular server-side Swift web framework Vapor you can use our Vapor logging provider which makes server logging awesome again 🙌 Documentation Getting Started: Logging Destinations: - Colored Logging to Xcode Console - Colored Logging to File - Encrypted Logging & Analytics to SwiftyBeaver Platform - Encrypted Logging & Analytics to Elasticsearch & Kibana Advanced Topics: Stay Informed: Privacy SwiftyBeaver is not collecting any data without you as a developer knowing about it. That's why it is open-source and developed in a simple way to be easy to inspect and check what it is actually doing under the hood. The only sending to servers is done if you use the SBPlatformDestination. That destination is meant for production logging and on default it sends your logs plus additional device information end-to-end encrypted to our cloud service. Our cloud service can not decrypt the data. Instead, you install our Mac App and that Mac App downloads the encrypted logs from the cloud and decrypts and shows them to you. Additionally, the Mac App stores all data that it downloads in a local SQLite database file on your computer so that you actually "physically" own your data. The business model of the SwiftyBeaver cloud service is to provide the most secure logging solution in the market. On purpose we do not provide a web UI for you because it would require us to store your encryption key on our servers. Only you can see the logging and device data which is sent from your users' devices. Our servers just see encrypted data and do not know your decryption key. SwiftyBeaver is fully GDPR compliant due to its focus on encryption and transparency in what data is collected and also meets Apple’s latest requirements on the privacy of 3rd party frameworks. Our Enterprise offering is an even more secure solution where you are not using anymore our cloud service and Mac App but you send your end-to-end encrypted logs directly to your own servers and you store them in your Elasticsearch cluster. The Enterprise offering is used by health tech and governmental institutions which require the highest level of privacy and security. End-to-End Encryption SwiftyBeaver is using symmetric AES256CBC encryption in the SBPlatformDestination destination. No other officially supported destination uses encryption. The encryption used in the SBPlatformDestination destination is end-to-end. The open-source SwiftyBeaver logging framework symmetrically encrypts all logging data on your client's device inside your app (iPhone, iPad, ...) before it is sent to the SwiftyBeaver Crypto Cloud. 
The decryption is done on your Mac which has the SwiftyBeaver Mac App installed. All logging data stays encrypted in the SwiftyBeaver Crypto Cloud due to the lack of the password. You are using the encryption at your own risk. SwiftyBeaver’s authors and contributors do not take over any guarantee about the absence of potential security or cryptopgraphy issues, weaknesses, etc.; please also read the LICENSE file for details. Also if you are interested in cryptography in general, please have a look at the file AES256CBC.swift to learn more about the cryptographical implementation. License SwiftyBeaver Framework is released under the MIT License.
https://opensourcelibs.com/lib/swiftybeaver
CC-MAIN-2022-40
en
refinedweb
Clojure implemented on top of Python Project description # clojure-py An implementation of Clojure in pure Python. []() ## Why Python? It is our belief that static virtual machines make very poor runtimes for dynamic languages. They constrain the languages to their view of what the “world should look like” and limit the options available to language implementors. We are attempting to prove this by writing an implementation of Clojure that runs on the Python VM.. ## Basic concepts Python builtins are available under the py/ namespace. Actual python bytecodes can be injected via py.bytecodes/OP Viewing the code at is probably the best way to get a feeling of what is possible, and how clojure-py implements certain functions. One note: clojure-py implements the new “property vs calling method” design used in ClojureScript: (.__name__ (module)) ; same as module.__name__() in python (.-__name__ (module)) ; same as module.__name__ in python ## How can I help? At this point, find a need and fill it! Play around with clojure-py, start porting your favorite clojure lib, and see what is missing. Also feel free to join our [mailing list](). We (the clojure-py devs) normally just send out a message either we plan on working on a certain aspect of clojure-py (either through a issue report or through the mailing list). Currently there are quite a few functions in clojure.core that need porting. Drop by the mailing list and let us know what your interests are, and we’ll be glad to offer suggestions and help however we can. From time to time, we’ll post status updates, ideas and plans to this blog ## Installation Install 0.1.0 release: easy_install clojure-py clojurepy To run from GitHub checkout: python ./clojure.py ## Unit tests # (must ‘easy_install nose’ or ‘pip install nose’ first) nosetests ## Running clojurepy ## License Not endorsed by Rich Hickey, but this project contains code based on his work Clojure-Py. Project details Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/clojure_py/
CC-MAIN-2022-40
en
refinedweb
Flutter mobile apps can now easily use Google Mobile Ads The brand-new concept of an open beta for Google Mobile Ads SDK for Flutter, the advanced plugin which provides inline banner and native ads, with existing overlay formats. Publishing size doesn’t matter here, this plugin can be altered to your scenarios and this plugin supports Ad Manager and Admob. Some companies have already taken initiative towards the plugin and have launched their apps with these new formats. The largest Latin American platform Sua Musica having more than 15k verified artists and 10M MUA has successfully launched their new flutter app with Google Mobile Ads SDK for flutter plugin. Their growth was astonishing, 350% increase in impressions with a 43% increase in CTR, And a 13% increase in eCPM. This plugin is now ready to use for today Andrew Brogdon and Zoey Fan delivered a session on “Monetizing app with Flutter” telling the techniques for monetization of apps built with Flutter, and loading ads in your Flutter app. Steps to display AdMob ads and earning revenue: - Basic requirements - Flutter 1.22.0 or higher - Android - Android Studio 3.2 or higher - Target Android API level 19 or higher - Set compile SDK Version to 28 or higher - IOS - The updated version of XCode with command-line tools - Create an AdMob account and register an Android or iOS app - For Android update AndroidManifest.xml. The app ID will be in the AdMob UI Ex: > - For iOS update Info.Plist. Ex: <key>GADApplicationIdentifier</key> <string>ca-app-pub-################~##########</string> - Load the Mobile Ads SDK Initialize the Mobile Ads SDK by calling MobileAds. Instance. Initialize () that loads SDK and returns a future which over when initializing is complete. Perform this right before the running app. Ex: import ‘package:google_mobile_ads/google_mobile_ads.dart’; import ‘package. } } - Selecting an Ad format Now after importing SDK implement your ad with a number of ad formats that fit your app’s UX. - Banner Ads: Rectangular ads that show up at the top or lower part of the gadget screen. Banner promotions stay on screen while clients are associating with the application, and can revive naturally after a specific timeframe. Assuming that you’re new to mobile ads, they’re an extraordinary spot to begin. - Interstitial Ads: Full-screen advertisements that cover the interface of an application until shut by the client. They’re best utilized at regular stops in the progression of an application’s execution, for example, in the middle of levels of a game or soon after wrapping up a responsibility. - Native Ads: Customizable ads that match the look and feel of your application. You conclude how and where they’re put, so the format is more predictable with your application’s plan - Rewarded Ads: Ads that reward clients for observing brief recordings and associating with playable promotions and reviews. Useful for adapting allowed to-play clients. Bio: John Smith is a Flutter app development expert and has been managing its difficulties in the last 10 years. She is a blogger by passion, and thus, consistently pens down her encounters with Frantic Infotech services. Her words have upheld abundant organizations, who have as of late changed to digital platforms.
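Putting the SDK initialization and a first banner ad together, a minimal sketch with the google_mobile_ads plugin could look like this. The ad unit ID is a placeholder and the root widget is purely illustrative; check the plugin documentation for the exact API of the version you use.

import 'package:flutter/material.dart';
import 'package:google_mobile_ads/google_mobile_ads.dart';

void main() {
  WidgetsFlutterBinding.ensureInitialized();
  MobileAds.instance.initialize(); // load the Mobile Ads SDK before running the app
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({Key? key}) : super(key: key);

  @override
  Widget build(BuildContext context) => const MaterialApp(home: Scaffold());
}

// Create and load a banner ad; it can later be shown with an AdWidget.
BannerAd createBanner() {
  return BannerAd(
    adUnitId: '<your banner ad unit id>', // placeholder
    size: AdSize.banner,
    request: AdRequest(),
    listener: BannerAdListener(),
  )..load();
}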
https://technonguide.com/flutter-mobile-apps-can-now-easily-use-google-mobile-ads/
CC-MAIN-2022-40
en
refinedweb
Theses tutorials are based on actual code found in the tutorials/ directory of the Polygene™ SDK sources. You should start your favorite editor and find the code related to this tutorial, run it and play with it. This introduction will deepen your understanding of Polygene™, as we touches on a couple of the common features of Polygene™. It is expected that you have gone through and understood the "Polygene™ in 10 minutes" introduction.. We will go back to the OrderEntity example; @Concerns( { PurchaseLimitConcern.class, InventoryConcern.class } ) public interface OrderEntity extends Order, Confirmable, HasSequenceNumber, HasCustomer, HasLineItems, HasIdentity { } Let’s say that this is an existing Composite, perhaps found in a library or used in a previous object, but we want to add that it tracks all the changes to the order and the confirmation of such order. First we need to create (or also find in a library) the mechanics of the audit trail. It could be something like this; public interface HasAuditTrail<M> { AuditTrail<M> auditTrail(); } public interface AuditTrail<M> extends Property<List<Action<M>>> {} public interface Action<T> extends ValueComposite // [2][3] { enum Type { added, removed, completed }; @Optional Property<T> item(); // [1] Property<Type> action(); // [1] } public interface Trailable<M> { void itemAdded( M item ); void itemRemoved( M item ); void completed(); } public class TrailableMixin<M> implements Trailable<M> { private @This HasAuditTrail<M> hasTrail; @Override public void itemAdded( M item ) { addAction( item, Action.Type.added ); } @Override public void itemRemoved( M item ) { addAction( item, Action.Type.removed ); } @Override public void completed() { addAction( null, Action.Type.completed ); } private Action<M> addAction( M item, Action.Type type ) { ValueBuilder<Action> builder = valueBuilderFactory.newValueBuilder( Action.class); // [4] Action<M> prototype = builder.prototypeFor( Action.class ); prototype.item().set( item ); prototype.action().set( type ); Action instance = builder.newInstance(); hasTrail.auditTrail().get().add( instance ); return instance; } } Quite a lot of Polygene™ features are leveraged above; [1] Property is a first class citizen in Polygene™, instead of getters/setters naming convention to declare properties. [2] ValueComposite for Action means that it is among other things Immutable. [3] The Action extends a Property. We call that Property subtyping and highly recommended. [4] The CompositeBuilder creates Immutable Action instances. We also need a Concern to hang into the methods of the Order interface. public abstract class OrderAuditTrailConcern extends ConcernOf<Order> implements Order { @This Trailable<LineItem> trail; @Override public void addLineItem( LineItem item ) { next.addLineItem( item ); trail.itemAdded( item ); } @Override public void removeLineItem( LineItem item ) { next.removeLineItem( item ); trail.itemRemoved( item ); } @Override public void completed() { next.completed(); trail.completed(); } } In this case, we have chosen to make an Order specific Concern for the more generic AuditTrail subsystem, and would belong in the client (Order) code and not with the library (AuditTrail). Pay attention to the @This annotation for a type that is not present in the Composite type interface. This is called a private Mixin, meaning the Mixin is only reachable from Fragments within the same Composite instance. But the AuditTrail subsystem could provide a Generic Concern, that operates on a naming pattern (for instance). 
In this case, we would move the coding of the concern from the application developer to the library developer, again increasing the re-use value. It could look like this; public class AuditTrailConcern extends ConcernOf<InvocationHandler> implements InvocationHandler { @This Trailable trail; @Override public Object invoke( Object proxy, Method m, Object[] args ) throws Throwable { Object retValue = next.invoke(proxy, m, args); String methodName = m.getName(); if( methodName.startsWith( "add" ) ) { trail.itemAdded( args[0] ); } else if( methodName.startsWith( "remove" ) ) { trail.itemRemoved( args[0] ); } else if( methodName.startsWith( "complete" ) || methodName.startsWith( "commit" ) ) { trail.completed(); } return retValue; } } The above construct is called a Generic Concern, since it implements java.lang.reflect.InvocationHandler instead of the interface of the domain model. The ConcernOf baseclass will also need to be of InvocationHandler type, and the Polygene™ Runtime will handle the chaining between domain model style and this generic style of interceptor call chain. Finally, we need to declare the Concern in the OrderEntity; @Concerns({ AuditTrailConcern.class, PurchaseLimitConcern.class, InventoryConcern.class }) @Mixins( TrailableMixin.class ) public interface OrderEntity extends Order, Confirmable, HasSequenceNumber, HasCustomer, HasLineItems, HasIdentity { } We also place it first, so that the AuditTrailConcern will be the first Concern in the interceptor chain (a.k.a InvocationStack), so that in case any of the other Concerns throws an Exception, the AuditTrail is not updated (In fact, the AuditTrail should perhaps be a SideEffect rather than a Concern. It is largely depending on how we define SideEffect, since the side effect in this case is within the composite instance it is a border case.). So let’s move on to something more complicated. As we have mentioned, EntityComposite is automatically persisted to an underlying store (provided the Runtime is setup with one at bootstrap initialization), but how do I locate an Order? Glad you asked. It is done via the Query API. It is important to understand that Indexing and Query are separated from the persistence concern of storage and retrieval. This enables many performance optimization opportunities as well as a more flexible Indexing strategy. The other thing to understand is that the Query API is using the domain model, in Java, and not some String based query language. We have made this choice to ensure refactoring safety. In rare cases, the Query API is not capable enough, in which case Polygene™ still provides the ability to look up and execute native queries. Let’s say that we want to find a particular Order from its SequenceNumber. import static org.apache.polygene.api.query.QueryExpressions.eq; import static org.apache.polygene.api.query.QueryExpressions.gt; import static org.apache.polygene.api.query.QueryExpressions.templateFor; import org.apache.polygene.api.query.QueryBuilder; [...snip...] @Structure private UnitOfWorkFactory uowFactory; //Injected [...snip...] 
UnitOfWork uow = uowFactory.currentUnitOfWork(); QueryBuilder<Order> builder = queryBuilderFactory.newQueryBuilder( Order.class ); String orderNumber = "12345"; HasSequenceNumber template = templateFor( HasSequenceNumber.class ); builder.where( eq( template.number(), orderNumber ) ); Query<Order> query = uow.newQuery( builder); Iterator<Order> result = query.iterator(); if( result.hasNext() ) { Order order = result.next(); } else { // Deal with it wasn't found. } The important bits are; Another example, QueryBuilder<Order> builder = queryBuilderFactory.newQueryBuilder( Order.class ); LocalDate last90days = LocalDate.now().minusDays( 90 ); Order template = templateFor( Order.class ); builder.where( gt( template.createdDate(), last90days ) ); Query<Order> query = uow.newQuery(builder); for( Order order : query ) { report.addOrderToReport( order ); } In the above case, we find the Orders that has been created in the last 90 days, and add them to a report to be generated. This example assumes that the Order type has a Property<Date> createdDate() method. Now, Orders has a relation to the CustomerComposite which is also an Entity. Let’s create a query for all customers that has made an Order in the last 30 days; QueryBuilder<HasCustomer> builder = queryBuilderFactory.newQueryBuilder( HasCustomer.class ); LocalDate lastMonth = LocalDate.now().minusMonths( 1 ); Order template1 = templateFor( Order.class ); builder.where( gt( template1.createdDate(), lastMonth ) ); Query<HasCustomer> query = uow.newQuery(builder); for( HasCustomer hasCustomer : query ) { report.addCustomerToReport( hasCustomer.name().get() ); } This covers the most basic Query capabilities and how to use it. For Querying to work, an Indexing subsystem must be assembled during bootstrap. At the time of this writing, only an RDF indexing subsystem exist, and is added most easily by assembly.addAssembler( new RdfNativeSesameStoreAssembler() ). It can be a bit confusing to see Polygene™ use Java itself as a Query language, but since we have practically killed the classes and only operate with interfaces, it is possible to do a lot of seemingly magic stuff. Just keep in mind that it is pure Java, albeit heavy use of dynamic proxies to capture the intent of the query. We have now explored a couple more intricate features of Polygene™, hopefully without being overwhelmed with details on how to create applications from scratch, how to structure applications, and how the entire Polygene™ Extension system works. We have looked at how to add a Concern that uses a private Mixin, we have touched a bit on Generic Concerns, and finally a short introduction to the Query API.
https://polygene.apache.org/java/3.0.0/thirty-minutes-intro.html
CC-MAIN-2022-40
en
refinedweb
Each Answer to this Q is separated by one/two green lines. I am creating a stacked line/area plot using plt.fill_between() method of the pyplot, and after trying so many things I am still not able to figure why it is not displaying any legend or labels (even when I provide them in the code). Here is the code: import matplotlib.pyplot as plt import numpy a1_label="record a1" a2_label="record a2" a1 = numpy.linspace(0,100,40) a2 = numpy.linspace(30,100,40) x = numpy.arange(0, len(a1), 1) plt.fill_between(x, 0, a1, facecolor="green") plt.fill_between(x, a1, a2, facecolor="red") plt.title('some title') plt.grid('on') plt.legend([a1_label, a2_label]) plt.show() Here is the image generated (note that the legend shows empty box instead of labels): The fill_between() command creates a PolyCollection that is not supported by the legend() command. Therefore you will have to use another matplotlib artist (compatible with legend()) as a proxy, without adding it to the axes (so the proxy artist will not be drawn in the main axes) and feed it to the legend function. (see the matplotlib legend guide for more details) In your case, the code below should fix your problem: from matplotlib.patches import Rectangle p1 = Rectangle((0, 0), 1, 1, fc="green") p2 = Rectangle((0, 0), 1, 1, fc="red") legend([p1, p2], [a1_label, a2_label]) gcalmettes’s answer was a helpful start, but I wanted my legend to pick up the colors that the stackplot had automatically assigned. Here’s how I did it: polys = pyplot.stackplot(x, y) legendProxies = [] for poly in polys: legendProxies.append(pyplot.Rectangle((0, 0), 1, 1, fc=poly.get_facecolor()[0])) Another, arguably easier, technique is to plot an empty data set, and use it’s legend entry: plt.plot([], [], color="green", linewidth=10) plt.plot([], [], color="red", linewidth=10) This works well if you have other data labels for the legend, too: Only to give an update about this matter as I as looking for it. At 2016, PolyCollection already provide support to the label attribute as you can see:
https://techstalking.com/programming/python/legend-not-showing-up-in-matplotlib-stacked-area-plot/
CC-MAIN-2022-40
en
refinedweb
WEB-16502 (Bug) Grunt: support navigable links in the output from the grunt-tslint package WEB-16649 (Bug) Gulp: new Gulp configuration is created each time you choose 'Edit '<task name>' settings' in Gulp toolwindow WEB-16632 (Bug) target media not detected when using 'media' attribute in HTML WEB-8240 (Bug) coffescript inspection 'unused local symbols' occurs when parameter used for extending a class IDEA-140659 (Task) Parse CF11 elvis operator IDEA-96134 (Bug) for-in loop parsing errors if any variables with period used IDEA-110574 (Bug) CFML Plugin doesn't recognize tags or functions from ColdFusion 10 IDEA-132315 (Bug) IDEA marks double hashes as Error IDEA-129730 (Bug) Coldfusion Struct supports : and = now IDEA-141113 (Bug) break / default are not formatted properly in CFScript's switch statement IDEA-96699 (Bug) rethrow keyword marked "not a statement" IDEA-118568 (Bug) Unicode characters in StepDef are used should not match \w IDEA-128412 (Bug) Gherkin parser does not work with several features in one file WEB-16674 (Feature) Support ANSI colors in Dart tests output and in Dart command line apps console WEB-16445 (Feature) Quick fix option for inserting a part entry on a part-of declaration without a part entry (and vice versa) WEB-16021 (Feature) Local variable declaration is not highlighted in top-level functions? WEB-16639 (Usability Problem) Unclear how to generate sample content for a new Dart user WEB-16392 (Bug) Const constructed objects not parsed correctly. WEB-16017 (Bug) Please improve parameters semantic highlighting WEB-16064 (Bug) False syntax error in conditional operator WEB-12645 (Bug) Autocomplete for dart doesn't work in for cycle WEB-16019 (Bug) Property extraction highlighting is inconsistent WEB-14575 (Bug) Code Formatting adds line break inside empty Dart object literal WEB-16200 (Bug) hover-over description of method parameters missing types if parameter has this. in name WEB-13967 (Bug) Incorrect fields/variables declaration/access highlighting WEB-16020 (Bug) Inconsistent highlighting for operators WEB-16187 (Bug) Dart: show parameter info for callable objects WEB-15322 (Bug) Intellisense after await WEB-16402 (Bug) Good dart code reported with syntax error. WEB-16723 (Bug) Operation didn't finish in 1000 ms / Dart Analysis Server / analysis_setAnalysisRoots(...) DBE-1282 (Bug) Wrong values when copying big numbers to DB IDEA-141324 (Bug) Line numbers in breakpoints dialog box broken by line wrappin-12196 (Bug) File Watcher: Output Filter: output not parsed when filename contains spaces/brackets. Unusable with Dropbox WEB-16470 (Bug) File watcher: spaces in paths prevent to run program IDEA-135707 (Feature) Mention context in Find in Path results IDEA-139549 (Bug) IntelliJ highlights errors wrongly and breaks editor functionality IDEA-139240 (Bug) Good code red: ActionScript internal members not accessible from within object literal IDEA-138900 (Bug) IDEA highlighting valid code as code with error IDEA-140849 (Feature) Google App Engine. Disable "no_cookies" option. 
IDEA-138942 (Bug) Gradle: #JAVA_INTERNAL is suggested as Gradle JVM in Import Project Wizard if no other JDKs is configured IDEA-140243 (Bug) Could not import any gradle project using out-of-process mode IDEA-140304 (Bug) Groovy: 'Extract Parameter' introduces additional spaces inside GString expressions WEB-14928 (Bug) End of JavaScript comments detected WEB-15277 (Bug) Repeating breadcrumbs WEB-16599 (Bug) Live template in HTML context results in exception WEB-16748 (Bug) Simultaneous tag editing feature fails at PHP string WEB-16247 (Bug) Some tag names duplicated in html5 completion WEB-16802 (Bug) Misplaced lang attribute in new HTML file (head instead of html) WEB-2329 (Exception) Exception is thrown for Zen coding in injected HTML IDEA-130329 (Bug) "Cannot resolve file" in Hibernate XML <mapping resource="X"/> due to Maven target folder IDEA-139883 (Bug) Changes in file associations are not saved IDEA-139409 (Bug) Persistent message "File type recognized: File extension *.vm was reassigned to VTL" IDEA-141130 (Exception) InvalidVirtualFileAccessException at com.intellij.openapi.vfs.newvfs.persistent.PersistentFSImpl.getFileId IDEA-136562 (Exception) SerializerNotFoundException IDEA-133543 (Bug) Version 14 JPA 2.1 IDEA-140539 (Bug) Incorrect error highlighting when passing generic method references as parameters IDEA-140150 (Bug) Multicatch with generics hangs IDEA IDEA-140376 (Bug) False positive "Abstract method overrides abstract method" using virtual extension methods. IDEA-140384 (Bug) Javadoc @link breaks on refactoring IDEA-140336 (Bug) ThrowableResultOfMethodCallIgnored inspection reports problems even if used with new Java 8 APIs that obviously throw the Throwable IDEA-137407 (Bug) Exception while redeploying application IDEA-140656 (Bug) Deploying to IBM Liberty: Artifact my-webapp:war exploded: Server is not connected. Deploy is not available. WEB-16410 (Usability Problem) Download libraries: when searching for typescript stub, navigate to library with name that starts with entered substring first WEB-16260 (Bug) Failing type inference with an object conforming to a typedef WEB-16476 (Bug) False positive "Unresolved function or method" with valid javascript forward references WEB-15884 (Bug) ES6 'export * from' declarations WEB-14921 (Bug) Better Structure View for ExtJS Files WEB-16198 (Bug) Javascript : unresolved function or method from super class where there is also a "static" method WEB-16594 (Bug) Good code marked red. Javascript getter shown as error if version set to ECMAScript 6 or JSX Harmony. 
WEB-16620 (Bug) ES6 Modules: defaultVal, * as all — caused errors WEB-16188 (Bug) IntelliJ 14.1 AngularJs 141.2 plugin highlighting functions as unresolved WEB-16111 (Bug) Code completion is not working for nested objects defined outside WEB-16646 (Bug) Reformat code renames variables start which "в" char (maybe other) WEB-16753 (Bug) Component name shows multiple time in the code completion suggestions WEB-16840 (Feature) Add support for eslintConfig field in package.json WEB-16403 (Feature) Code Quality Tools - JSHint Search for configs(s) Correction WEB-16106 (Usability Problem) JSCS integration, plugin not found WEB-16652 (Bug) Code Quality Tools: inspections settings should be taken from profile chosen when running Code/Inspect Code WEB-16544 (Bug) JSCS: add verbose option and correct "validateQuoteMarks" rule type WEB-16816 (Bug) ESLint does not work on *.es6 files WEB-16860 (Bug) ESLint Plugin needs to support the --reset option WEB-16621 (Bug) Extra inspection about reserved word 'default' in 'export defaul' WEB-16550 (Bug) jscs should use esnext flag for es6 languages, not just jsx harmony WEB-15848 (Bug) Support rename of the imported\exported variable WEB-16701 (Bug) Parser throws error on valid LESS variable names beginning with numbers IDEA-140208 (Bug) java.lang.NoSuchMethodError: org.eclipse.aether.RepositorySystem.newResolutionRepositories(Lorg/eclipse/aether/RepositorySystemSession;Ljava/util/List;)Ljava/util/List; IDEA-140673 (Bug) IDEA can not download existing sources because of ArtifactResolutionException WEB-8392 (Feature) node.js: make urls in console clickable WEB-16715 (Bug) Node: detect npm location for nodist on Windows IDEA-140496 (Bug) bnd not picking up changes from dependent module IDEA-141111 (Bug) CustomUncommenter#findMaximumCommentedRange is broken WEB-15966 (Usability Problem) V8 Profiling: CPU/Heap: F1 does nothing WEB-15748 (Bug) Profiling: Call names are not readable when selected IDEA-140360 (Performance Problem) ModulesConfigurator.getModuleEditor replace linear scan with table lookup. 
WEB-16633 (Bug) Red code in scss file: 'from' and 'to' are not recognized IDEA-141165 (Bug) Error when parsing custom beans with scope Test: “Cannot find custom handler for namespace" WEB-13406 (Bug) Stylus nested media query duplicate rule flag invalid WEB-13162 (Bug) Stylus: Structure view: correctly recognize several @media queries IDEA-131992 (Feature) FogBugz improvements IDEA-139903 (Usability Problem) TaskManagement: Trello: Change 'number' placeholder in commit message WEB-15730 (Bug) TypeScript: "Create field" intention should be available with enabled Compiler WEB-15917 (Bug) Typescript: wrong declare class and declare var constructs formatting WEB-16576 (Bug) Jasmine test generate menu is disabled for fdescribe IDEA-139761 (Bug) Incorrect rendering of scrollbar track IDEA-137838 (Usability Problem) Ctrl + C - should copy a file name into a clipboard in Commit Changes window, not absolute path IDEA-139488 (Bug) Ctrl-C in Local Changes copies to clipboard one file name only IDEA-139870 (Bug) Issue with context menu for annotation panel for files without an associated type IDEA-130530 (Bug) Changes Tool Window - Local Changes - ctrl+c = ctrl+shift+c IDEA-140436 (Usability Problem) Git | Merge Changes with conflicts does not automatically display Resolve Conflicts IDEA-140297 (Usability Problem) Drag-n-drop in interactive rebase editor works incorrectly IDEA-91996 (Usability Problem) Git: rebasing actions are disabled for repository in rebasing state depending on selection in Project View IDEA-140501 (Performance Problem) Using all of the CPU and eventually WebStorm is unusable IDEA-141204 (Bug) Git log displays labels incorrectly for Git 2.4.3 IDEA-129370 (Feature) Support XML Schema (XSD) 1.1 WEB-16843 (Usability Problem) Bower: notify users when searching for packages fails because of time out WEB-16386 (Performance Problem) WebStorm 10.0.2 hangs and does not respond WEB-16467 (Bug) HEAD request not handled correctly in built in server IDEA-141008 (Bug) QuickDocumentation (Ctrl-Q) shows "JavaScript is disabled on your browser" sometimes WEB-16724 (Bug) Extract variable in reactjs .jsx file fails IDEA-140031 (Bug) Call Hierarchy reports wrong results IDEA-141078 (Bug) Javadoc quickdoc popup: "null" text instead of "@Nullable"? IDEA-135540 (Bug) The right Alt (AltGr) doesn't work as usual with Neo / Neo2 keyboard layouts in PyCharm IDEA-138443 (Bug) Selecting Window->(Minified Window) does not show the minified window IDEA-137908 (Bug) External documentation (from javadoc.jar) is not shown WEB-13950 (Bug) Cannot detect Android SDK in path IDEA-141222 (Bug) Cannot switch between projects IDEA-140295 (Bug) Inspection Does Not Honour @Nonnull override of @Nullable method IDEA-140520 (Exception) NPE at com.intellij.ide.plugins.PluginManagerConfigurable.getPreferredFocusedComponent
https://confluence.jetbrains.com/display/IDEADEV/IntelliJ+IDEA+14.1.4+Release+Notes
CC-MAIN-2021-04
en
refinedweb
A python module for accessing Netflix REST webservice, both V1 and V2 supports oauth and oob. Project description Introduction pyflix2 is a BSD licensed python module for accessing netflix API (both v1 and v2) Netflix provides REST interfaces to access it’s catalog and various user data. This module exposes easy to use object oriented interfaces that is inteded to make it even easier for python programmers to use. Install Installing requests is simple with pip: $ pip install pyflix2 or, with easy_install: $ easy_install pyflix2 Example from pyflix2 import * netflix = NetflixAPIV2( 'appname', 'key', 'shared_secret') movies = netflix.title_autocomplete('Terminator', filter='instant') for title in movies['autocomplete']['title']: print title user = netflix.get_user('use_id', 'access_token', 'access_token_secret') reco = user.get_reccomendations() for movie in reco['recommendations']: print movie['title']['regular'] - Note - Here appname, key and shared_secret needs to be obtained from:. - The user_id, access_token, access_token_secret needs to be obtained programmatically using get_request_token and get_access_token Commandline $ python -mpyflix2 -s 'the matrix' -x Or see help: $ python -mpyflix2 -h Features - Supports both V1 and V2 of netflix REST API - Supports both out-of-bound (oauth 1.0a) and vanila three legged oauth auhentication - Provides easy to use and well documented functional interface for all the API exposed by netflix - Throws Exception for all kinds of error situation making it easier to integrate with other program - V1 and V2 APIs are exposed using different classes, so version specific features can be used easily - Internally uses Requests for making HTTP calls - Want any new feature? please file a feature request Documentation: Note: I would like to thank Kirsten Jones for the library As pyflix2 was initially inspired by pyflix. Requirements - Requires requests module minimum v1.1.0 - Install latest version of requests-oauthlib: pip install -U git+git://github.com/requests/requests-oauthlib.git History 0.2.1 (2014-04-29) - Adding a new method to download the entire catalog into a file. 0.2.0 (2013-01-26) - Issue #6: Add support for downloading full catalog in lib as well as in command line - Issue #8: Incorporate netflix api change to api-public.netflix.com - Issue #9: Update codebase to work with requests v1.1.0 Backward incompatible changes - get_user api signature has changed (require one more parameter user_id) - Addition of user_id in ~/.pyflix.cfg - get_access_token returns additional user_id 0.1.3 (2012-07-09) - Fixed access token retrival code in __main__.py - Fixed typo in sample config file 0.1.2 (2012-07-06) - Issue #5: Fixed circular dependency in setup.py 0.1.1 (2012-07-04) - Initial version Project details Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/pyflix2/
CC-MAIN-2021-04
en
refinedweb
UNKNOWN Project description WebRunner is a pythonic module used for web scrapping and testing. Here are simple instructions about how to use webrunner. First of all, import WebBrowser to your namespace and instanciate it. >>> from webrunner import WebBrowser >>> wb = WebBrowser() Now, you can use the method urlopen for open some url on the web >>> wb.urlopen('') Now, we can ‘see’ the google’s page >>> g_page = wb.current_page Let’s do a search >>> form = g_page.forms[‘f’] >>> form.set_value(‘some search’, ‘q’) >>> wb.submit_form(form) >>> results_page = wb.current_page Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/webrunner/
CC-MAIN-2021-04
en
refinedweb
Spring Boot: The Right Boot For You Still configuring Spring manually? You've got plenty of options to set up your libraries, annotate where necessary, then jump right into your work with Spring Boot. Join the DZone community and get the full member experience.Join For Free Need a little spring in your step? Tired of all those heavy web servers and deploying WAR files? Well, you’re in luck. Spring Boot takes an opinionated view of building production-ready Spring applications. Spring Boot favors convention over configuration and is designed to get you up and running as quickly as possible. In this blog, I will walk you through the step-by-step process for getting Spring Boot going on your machine. Just Put Them on and Lace Them Up… Spring Boot makes it easy to create stand-alone, production-grade Spring-based applications that you can “just run.” You can get started with minimum fuss due to it taking an opinionated view of the Spring platform and third-party libraries. Most Spring Boot applications need very little Spring configuration. These Boots Are Made for Walking… Maybe Running! So the greatest thing about Spring Boot is the ability to be up and running in very little time. You don’t have to install a web server like JBoss, Websphere, or even Tomcat for that matter. All you need to do is pull in the proper libraries, annotate, and fire away. If you are going to do a lot of Spring Boot projects, I would highly suggest using the Spring Tool Suite that is available. It has some great features for making Boot projects really easy to manage. You can, of course, choose between Maven or Gradle to manage dependencies and builds. My examples will be in Maven as it is what I am familiar with. It’s all about your configuration preference. Many Different Styles to Choose From One of the things that make Spring Boot great is that it works really well with other Spring offerings. Wow, go figure? You can use Spring MVC, Jetty, or Thymeleaf just by adding them to your dependencies and Spring Boot automatically adds them in. Every Day Boots Spring Boot wants to make things easy for you. You can do a whole host of things with it. Here is a list of some of the highlights. - Spring Boot lets you package up an application in a standalone JAR file, with a full Tomcat server embedded - Spring Boot lets you package up an application as a WAR still. - Configuration is based on what is in the classpath (MySQL DB in the path, it’ll set it up for you) - It has defaults set (so you don’t have to configure them) - Easily overridden by adding to the classpath (add H2 dependency and it’ll switch) - Let’s new devs learn the ropes in a hurry and make changes later as they learn more. Baby Boots But remember, the aim of this blog is just to get you familiar with how to get Spring Boot going on your machine. It is going to be fairly straightforward and vanilla. The goal is to get you started. We’re not trying to code a new Uber app or something here. Baby steps folks! We just want to get your feet warm. We all know those tutorials that throw tons of stuff at us and just gloss over things. Not here. So to get started the easiest way is to pull down the tutorial code from Spring itself. It has a great getting-started point. It is a good for you to see what is happening without throwing the whole Spring library at you. Clone Boots… Watch Your Aim! First off, let’s clone the Spring example found here. 
git clone Construction Boots We won’t go into the steps of setting it up in an IDE as everyone will have their own preference. Let’s break things down a bit. What are these annotations about? vcon the classpath. This flags the application as a web application and activates key behaviors such as setting up a DispatcherServlet. @ComponentScantells Spring to look for other components, configurations, and services in the the hello package, allowing it to find the controllers. Wow, I have always liked quality built-ins when looking for a new home! But what’s really happening behind these shiny new items? The main() method calls out Spring Boot’s SpringApplication.run() method to launch. Did we mention (or did you notice) that you didn’t have to mess around with XML? What a bonus! No more web.xml file nonsense. No more wondering if I put the right tag in the file and wondering what the problem is with the paragraph of unreadable error message telling you just about nothing any longer. This is 100% pure Java. No configuration or plumbing needed. They have done it for you. How nice of them! Once it is set up and ready for you to begin editing, let’s take a quick look at the Application.java file. Here you will find a runnable main class. It has an annotation of @SpringBootApplication. This is the key annotation that makes this application a Boot app. package hello; import java.util.Arrays; import org.springframework.boot.CommandLineRunner; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; import org.springframework.context.ApplicationContext; import org.springframework.context.annotation.Bean; ); } }; } } Now to run it! If you are using the STS suite (and properly built it), you will see it in your Boot Dashboard. For everyone else, either right click in the IDE and Run As => Java Application, or head to your favorite command line tool. Use the following commands. Maven mvn package && java -jar target/gs-spring-boot-0.1.0.jar Gradle ./gradlew build && java -jar build/libs/gs-spring-boot-0.1.0.jar You did it! You tied your first pair of Spring Boots. The output will show the normal Spring startup of the embedded server and then it will loop over all the beans and write them out for you! Boots on Display To make the sale or just to get your eyes on the prize, this example throws in a CommandLineRunner method marked as a @Bean and this runs on startup. It retrieves all the beans that were created either by your app or were automatically added thanks to Spring Boot. It sorts them and prints them out. You can put other startup information or do some other little bit of work if you would like. Boots Online While shopping for the right boot, we want the nice ones that will go with our favorite pair of jeans or for the ladies a nice skirt, right? Well, Boot provides a simple way to get your boots out to the world for others to see. Well, we need to employ a Controller to do so. How convenient: the Spring code we downloaded has one already for us. package hello; import org.springframework.web.bind.annotation.RestController; import org.springframework.web.bind.annotation.RequestMapping; @RestController public class HelloController { @RequestMapping("/") public String index() { return "Greetings from Spring Boot!"; } } The two things that are most important here are the @RestController and the @RequestMapping annotations you see. The @RestController is a subliminal message that it is nap time. Errr, wait sorry, I was getting sleepy. 
No, it means we have a RESTful controller waiting, watching, listening to our application’s call to it. The @RequestMapping is the url designation that calls the particular method. So in the case of the given example, it is the “index” of the application. The example here is simply returning text. Here’s the cool thing; we can return just about anything here that you want to return. Did JSON Have Nice Boots on the Argo? Finally, what I think most adventurers into Spring Boot are doing now is using it as an endpoint to their applications. There are a whole host of different options as to how you can accomplish this. Either by JSON provided data or XML solutions. We’ll just focus on one for now. Jackson is a nice lightweight tool for accomplishing JSON output to the calling scenario. Jackson is conveniently found on the classpath of Spring Boot by default. Check it out for yourself: mvn dependency:tree or: ./gradlew dependencies Let’s add some pizazz to these boots, already! Add a new class wherever you would like to in your source. Just a POJO. public class Greeting { private final long id; private final String content; public Greeting(long id, String content) { this.id = id; this.content = content; } public long getId() { return id; } public String getContent() { return content; } } Now, head back to your Controller and paste this in: private static final String template = "Ahoy, %s!"; private final AtomicLong counter = new AtomicLong(); @RequestMapping(method=RequestMethod.GET) public @ResponseBody Greeting sayHello(@RequestParam(value="name", required=false, defaultValue="Argonaut") String name) { return new Greeting(counter.incrementAndGet(), String.format(template, name)); } Now restart your Boot app. Go back to a browser and instead of /, go to hello-world. You should see some awesome JSON output. If you did, then you are well on your way to creating endpoints in Spring Boot and Jackson. The Argo Needs Another Port Since a lot of folks are writing endpoints and have multiple sites going on, you’ll probably want to change the default port of 8080 to something else. So the easiest and most straightforward way is to add an application.properties file to src/main/resources. All that is need is this: server.port = 8090 Easy peasy. Weigh anchor and set sail! Boot Camp Conclusion So you can see how easy it is to get things going with Spring Boot. We didn’t have to do much in the way of configuration to actually get up and running in a hurry. We avoided the dreaded XML files and only added a small properties file. The built-ins are extremely nice to already have in the stack. Jackson provides an easy to use JSON conversion for those of us wanting to provide endpoints for our shiny frontends. Again, Spring seems to find a way to make life simpler for the developer. This blog was kept simple on purpose. There are many different avenues to venture down in our new boots. Whether you want to leverage microservices, build a traditional monolith, or some other twist that may be out there, you can see how Spring Boot can get you started in a hurry. Published at DZone with permission of Matt McCandless, DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/spring-boot-the-right-boot-for-you-1?fromrel=true
CC-MAIN-2021-04
en
refinedweb
All these keywords are part of the Main method of any C# program. The Main method, which is the entry point for all C# programs, defines what a class does when it is executed. using System; class Demo { static void Main(string[] args) { Console.WriteLine("My first program in C#!"); } } public − This is the access specifier; it states that the method can be accessed publicly. static − The method belongs to the class itself, so an object is not required to access it. void − This states that the method doesn't return any value. Main − As stated above, it is the entry point of a C# program, i.e. this is the method that executes first.
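To see all three modifiers together, here is a small variant of the program above (this variant is illustrative and not from the original article); it declares Main as public and returns an int exit code instead of void:

using System;

class Demo
{
    // public: the method is accessible from outside the class.
    // static: the runtime invokes it on the class itself, without creating a Demo object first.
    // int (instead of void): the returned value becomes the process exit code.
    public static int Main(string[] args)
    {
        Console.WriteLine("My first program in C#!");
        return 0; // 0 conventionally signals success
    }
}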
https://www.tutorialspoint.com/What-is-the-difference-between-public-static-and-void-keywords-in-Chash
CC-MAIN-2021-04
en
refinedweb
No, you cannot declare an abstract method private. An abstract method has to be overridden and implemented in a subclass, and a private method is not visible to any subclass, so the two modifiers contradict each other. If you still try to declare an abstract method private, a compile time error is generated saying "illegal combination of modifiers − abstract and private". In the following Java program, we are trying to declare an abstract method private. abstract class AbstractClassExample { private static abstract void display(); } On compiling, the above program generates the following error. AbstractClassExample.java:2: error: illegal combination of modifiers: abstract and private private static abstract void display(); ^ 1 error Yes, you can declare an abstract method protected. If you do so, you can access it from the classes in the same package or from its subclasses. (And you must override the abstract method in the subclass and invoke it.) In the following Java program, we are trying to declare an abstract method protected. abstract class MyClass { protected abstract void display(); } public class AbstractClassExample extends MyClass{ public void display() { System.out.println("This is the subclass implementation of the display method "); } public static void main(String args[]) { new AbstractClassExample().display(); } } This is the subclass implementation of the display method
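The question also asks about public and default (package-private) abstract methods. Both are allowed, as long as every concrete subclass can see and override the method. A small sketch (the class names here are made up for illustration):

abstract class Shape {
    public abstract double area();   // public abstract method: allowed, overridable anywhere

    abstract String label();         // default (package-private) abstract method: allowed,
                                     // but only subclasses in the same package can override it
}

class Circle extends Shape {
    private final double radius;

    Circle(double radius) { this.radius = radius; }

    @Override
    public double area() { return Math.PI * radius * radius; }

    @Override
    String label() { return "circle"; }
}

public class AccessLevelsExample {
    public static void main(String[] args) {
        Shape s = new Circle(2.0);
        System.out.println(s.label() + " area: " + s.area());
    }
}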
https://www.tutorialspoint.com/can-we-declare-an-abstract-method-private-protected-public-or-default-in-java
CC-MAIN-2021-04
en
refinedweb
Input and Output data are not always coming from or going to the standard input (keyboard) and standard output (screen), respectively. Most of the time, data is read from a file, and output data is written to an output file. Add to this what we mentioned before that some widely-used operating systems like UNIX and Linux distributions consider everything as a file. This makes understanding how to deal with files a vital issue for a developer. This article will tackle this subject. So, are you ready? Let’s go! File I/O Fortunately, C++ has three classes for you that facilitate file input and output operations. The classes are: - ifstream for input. - ofstream for output. - fstream for both input and output from/to files. The idea is simple: create an object of the appropriate class, for the file you want to use for input/output. Once the object is created, use the extraction operator >> with the input file stream object to extract (read) input(s) and assign it(them) to variable(s), this is for input. In output, the insertion operator << is used to send data to output file stream. When done with the required operation, the file is closed. To understand it better, let’s illustrate the idea by examples. Reading File Example For a text file consisting of the following line: We need to read the file, and print its contents. To do, consider the following program: #include <iostream> #include <fstream> using namespace std; int main() { string firstname,lastname,city; short age; ifstream inputfile("f:/info.txt"); inputfile >> firstname >> lastname >> age >> city; cout << "\nEmployee Info:\n" << firstname << " " << lastname << "\nAge: " << age << "\nCity: " << city << endl; inputfile.close(); return 0; } When executed, the program should print the following output: Note that the stream object inputfile that we defined is used the same way we use cin. That shouldn’t be strange if you remember that cin itself is an object that represents the standard input stream. After extracting the input fields into the appropriate variables, and printing them, the file is closed using the close() member function. This is for reading. Now, let’s try the opposite. Writing to File Example The write operation is also straightforward, except one thing: if the named file exists, it is overwritten. If not, the file is created and has data written to it. For this example we need to write the data of some football players to a file named players.txt. Copy the following code into your IDE, and execute it. #include <iostream> #include <fstream> using namespace std; int main() { ofstream outfile("f:/players.txt"); outfile << "Alessandro Del Piero" << endl << "Juvemtus, Italy" << endl << "Zein Eldin Zeidan" << endl << "Real Madrid CF, Spain"<< endl; outfile.close(); return 0; } When executed, the program should create the players.txt file in the specified path, and write the data to it. Similar to what we did with input, the insertion operator << was cascaded to write data to the file using the output stream object outfile. Not only the extraction and insertion operators are useable by user-defined stream objects, but also the member functions like get(), getline() , read(), write() as well. Append to File Example Sometimes you need to keep the contents of the file you are writing to. For this case, the file could be opened in Appending mode. The following program will append two extra lines to the end of the players.txt file. 
#include <iostream> #include <fstream> using namespace std; int main() { ofstream outfile("f:/players.txt", std::ofstream::app); outfile << "Alessandro Nesta" << endl << "AC Milan, Italy" << endl; outfile.close(); return 0; } Where std::ofstream::app is the output mode. Text Editor Example Now, we are going to implement a very simple text editor. Our editor should prompt the user for the name of the output file. Then, it prompts the user to enter the text to save. When the stop character (delimiter) is encountered, the program stops reading new input, writes the entered text, and closes the created file. Say hello to our first text editor! #include <iostream> #include <fstream> using namespace std; int main() { string filename; int MAX = 1000; char text[MAX]; cout << "Enter filename: "; cin >> filename; ofstream outfile(filename.c_str()); cout << "Enter text (type # when finished) :\n"; cin.getline(text,MAX,'#'); outfile << text; outfile.close(); return 0; } Let’s give it a try! Splendid! Summary In this article, we have discussed File I/O. Three main classes are used for input and output file streaming: ifstream, ofstream, and fstream. Our next topic will be Command-Line arguments. An interesting topic to wait for. See you!
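As a small companion sketch to the append example (and to exercise the third class, fstream, which is listed above but not demonstrated), you could read players.txt back line by line with getline to confirm what was written. This snippet is an addition in the same spirit as the article's examples, not part of the original:

#include <iostream>
#include <fstream>
#include <string>
using namespace std;

int main()
{
    // fstream supports input, output, or both; here it is opened for input only.
    fstream playersfile("f:/players.txt", ios::in);

    string line;
    while (getline(playersfile, line))
        cout << line << endl;

    playersfile.close();
    return 0;
}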
https://blog.eduonix.com/system-programming/learn-inputoutput-file-handling-c-part-3/
CC-MAIN-2021-04
en
refinedweb
This page provides reference documentation and related resources for the Translation Python client library. Installation To install the client library: For more on setting up your Python development environment, refer to the Python Development Environment Setup Guide. pip install google-cloud-translate==2.0.1 Using the client library To use the Python client library for Cloud Translation - Basic, you must import the Cloud Translation API client library as follows: from google.cloud import translate_v2 See how to translate text for additional usage details.
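As a minimal usage sketch (assuming your environment is already authenticated, for example through the GOOGLE_APPLICATION_CREDENTIALS variable), translating a string with the v2 client looks roughly like this:

from google.cloud import translate_v2 as translate

client = translate.Client()

result = client.translate("Hello, world!", target_language="es")

print(result["translatedText"])           # e.g. "¡Hola, mundo!"
print(result["detectedSourceLanguage"])   # e.g. "en"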
https://cloud.google.com/translate/docs/reference/libraries/v2/python?hl=th
CC-MAIN-2021-04
en
refinedweb
The Model-View-ViewModel (MVVM) pattern helps developers separate an application's business and presentation logic from its user interface. Maintaining a clear separation between application logic and user interface helps address development and design issues, making an application easier to test, maintain, and develop. It can also improve the reusability of code and it allows multiple developers to collaborate more easily when working on the same project. 1. Introduction Using the MVVM pattern, the user interface of the application and the underlying presentation and business logic are separated into three components: - The view component encapsulates the user interface and user interface logic. - The view model component encapsulates presentation logic and state. - The model layer encapsulates the application's business logic and data. There are several frameworks available for implementing the MVVM pattern in a Windows application. Which framework is best for your project depends on your requirements. For this tutorial, we will use MVVM Light, a popular and easy-to-use MVVM framework. This tutorial shows you how to create a Universal Windows app with MVVM Light support. You will learn how to: - create a Universal Windows app and add support for MVVM Light - implement the directory structure - add the view model layer - wire the data context - implement the messenger service to pass messages between view models 2. Project Setup Step 1: Create a Universal Windows App Let's start by creating a Universal Windows app. Select New Project from the File menu in Visual Studio. Expand Templates > Visual C# > Windows > Windows 8 > Universal and select Blank App (Universal Windows 8.1) from the list of project templates. Name your project and click OK to create the project. This creates two new apps (Windows Phone 8.1 and Windows 8.1) and one shared project. The Windows Phone 8.1 and Windows 8.1 projects are platform specific projects and are responsible for creating the application packages (.appx) targeting the respective platforms. The shared project is a container for code that runs on both platforms. Step 2: Add MVVM Light Support Right-click the solution name in the Solution Explorer and select Manage Nuget Packages for Solution. Select the Browse tab and search for MVVM Light. Select the package MvvmLightLibs from the search results. Check both the Windows 8.1 and Windows Phone 8.1 projects, and click Install to add the MVVM Light libraries to the apps. At this point, you have added MVVM Light support to both your applications. 3. Project File Structure A Universal Windows app that adopts the MVVM pattern requires a particular directory structure. The following snapshot shows a possible project file structure for a Universal Windows app. Let me walk you through the project structure of a typical Univesal Windows app that adopt the MVVM pattern: - Controls: This directory contains reusable user interface controls (application independent views). Platform specific controls are added directly to the platform specific project. - Strings: This directory contains strings and resources for application localization. The Strings directory contains separate directories for every supported language. The en-US directory, for example, contains resources for the English (US) language. - Models: In the MVVM pattern, the model encapsulates the business logic and data. Generally, the model implements the facilities that make it easy to bind properties to the view layer. 
This means that it supports "property changed" and "collection changed" notifications through the INotifyPropertyChangedand INotifyCollectionChangedinterfaces. - ViewModels: The view model in the MVVM pattern encapsulates the presentation logic and data for the view. It has no direct reference to the view or any knowledge about the view's implementation or type. - Converters: This directory contains the value converters. A value converter is a convenient way to convert data from one type to another. It implements the IValueConverterinterface. - Themes: The Themes directory contains theme resources that are of type ResourceDictionary. Platform specific resources are added directly to the specific project and shared resources are added to the shared project. - Services: This section can include classes for web service calls, navigation service, etc. - Utils includes utility functions that can be used across the app. Examples include AppCache, FileUtils, Constants, NetworkAvailability, GeoLocation, etc. - Views: This directory contains the user interface layouts. Platform specific views are added directly to the platform specific project and common views are added to the shared project. Depending on the type of view, the name should end with: - Window, a non-modal window - Dialog, a (modal) dialog window - Page, a page view (mostly used in Windows Phone and Windows Store apps) - View, a view that is used as subview in another view, page, window, or dialog The name of a view model is composed of the corresponding view’s name and the word “Model”. The view models are stored in the same location in the ViewModels directory as their corresponding views in the Views directory. 4. Adding the View Model Layer The view model layer implements properties and commands to which the view can bind data and notify the view of any state changes through change notification events. The properties and commands the view model provides, define the functionality offered by the user interface. The following list summarizes the characteristics, tasks, and responsibilities of the view model layer: - It coordinates the view's interaction with any model class. - The view model and the model classes generally have a one-to-many relationship. - It can convert or manipulate model data so that it can be easily consumed by the view. - It can define additional properties to specifically support the view. - It defines the logical states the view can use to provide visual changes to the user interface. - It defines the commands and actions the user can trigger. In the next steps, we add two files to the view model layer, ViewModelLocator.cs and MainViewModel.cs. Step 1: Add the MainViewModel Class First, right-click the shared project and select Add, New Folder. Name the folder ViewModels. Next, right-click the ViewModels folder and select Add, New Item to add the MainViewModel class. Modify the MainViewModel class to look like this:"; } } The class contains a public property HelloWorld of type string. You can add additional methods, observable properties, and commands to the view model. Step 2: Add the ViewModelLocator Class We will add a public property for all the view models in the ViewModelLocator class and create a new resource, which we will use in the designer. Right-click the ViewModels folder and select Add, New Item. Select a class and name it ViewModelLocator.cs. 
Update the ViewModelLocator class as shown below.>(); } } The ViewModelLocator class contains a public property Main whose getter returns an instance of the MainViewModel class. The constructor of ViewModelLocator registers the MainViewModel instance to the SimpleIoc service. Next, open App.xaml file and add a new resource with the ViewModelLocator to be used in the designer. <Application.Resources> <viewModels:ViewModelLocator x: </Application.Resources> 5. Wiring Up the Data Context The view and the view model can be constructed and associated at runtime in multiple ways. The simplest approach is for the view to instantiate its corresponding view model in XAML. You can also specify in XAML that the view model is set as the view's data context. <Page.DataContext> <vm:MainViewModel/> </Page.DataContext> When the MainPage.xaml page is initialized, an instance of the MainViewModel is automatically constructed and set as the view's data context. Note that the view model must have a default parameter-less constructor for this approach to work. Another approach is to create the view model instance programmatically in the view's constructor and set it as the data context. public MainPage() { InitializeComponent(); this.DataContext = new MainViewModel(); } Another approach is to create a view model instance and associate it with its view using a view model locator. In the sample app, we use the ViewModelLocator class to resolve the view model for MainPage.xaml. <Page.DataContext> <Binding Path="Main" Source="{StaticResource Locator}" /> </Page.DataContext> Now that the view's data context has been set to the MainViewModel class, we can access its properties in the view. You can bind the text of a TextBlock to the HelloWorld property defined in the view model. <TextBlock Text="{Binding HelloWorld}" FontSize="20" HorizontalAlignment="Center" VerticalAlignment="Center"/> 6. Messenger Service The messenger service in MVVM Light allows for communication between view models or between view models and views. Let's say you have a view model that is used to provide business logic to a search function and two view models on your page that want to process the search to show the output. The messenger would be the ideal way to do this in a loosely bound way. The view model that gets the search data would simply send a "search" message that would be consumed by any view model that was currently registered to consume the message. The benefits of using a messenger service are: - easy communication between view models without each view model having to know about each other - more message consumers can be added with little effort - it keeps the view models simple To send a message: MessengerInstance.Send(payload, token); To receive a message: MessengerInstance.Register<PayloadType>( this, token, payload => SomeAction(payload)); In the sample application, we will send a message from MainViewModel, which will be received by MainPage.xaml. These are the steps required for using the messenger service. Step 1: Create Class to Contain the Message to Be Passed Create a new class in the project and name it ShowMessageDialog. public class ShowMessageDialog { public string Message { get; set; } } Step 2: Instantiate Message Class and Broadcast Message In MainViewModel.cs, create an instance of ShowMessageDialog and use the Messenger object to broadcast the message. 
private object ShowMessage() { var msg = new ShowMessageDialog { Message = "Hello World" }; Messenger.Default.Send<ShowMessageDialog>(msg); return null; } This broadcasts the message. All that is left for us to do, is to register a recipient and respond to the message. Step 3: Register for Message and Handle When Received Open MainPage.xaml.cs and register for the message in the constructor. public MainPage() { this.InitializeComponent(); Messenger.Default.Register<ShowMessageDialog> ( this, (action) => ReceiveMessage(action) ); } ReceiveMessage is a method that you need to implement. It will take the Message object and use the DialogService to display a dialog box. private async void ReceiveMessage(ShowMessageDialog action) { DialogService dialogService= new DialogService(); await dialogService.ShowMessage(action.Message, "Sample Universal App"); } Step 4: Create a Command to Send Message Now that we can send and receive a message, we need to call the ShowMessage method. MVVM Light provides support for RelayCommand, which can be used to create commands in the view model. Add a public property ShowMessageCommand in the MainViewModel class that invokes the ShowMessage method. private RelayCommand _showMessageCommand; public RelayCommand ShowMessageCommand => _showMessageCommand ?? (_showMessageCommand = new RelayCommand(ShowMessage)); Next, add a Button to MainPage.xaml and bind the ShowMessageCommand to its Command property. <Button Command="{Binding ShowMessageCommand}" Content="Click Me" HorizontalAlignment="Center"/> Deploy the app to see if everything works as expected. Here's a snapshot of how MainPage.xaml looks on Windows 8.1. When you click the Click Me button, a dialog box pops up. Messenger is a powerful component that can make communication easier, but it also makes the code more difficult to debug because it is not always clear at first sight which objects are receiving a message. Conclusion By implementing the MVVM pattern, we have a clear separation between the view, view model, and model layers. Typically, we try to develop the view model so that it doesn’t know anything about the view that it drives. This has multiple advantages: - The developer team can work independently from the user interface team. - The view model can be tested easily, simply by calling some commands and methods, and asserting the value of properties. - Changes can be made to the view without having to worry about the effect it will have on the view model and the model. Feel free to download the tutorial's source files to use as a reference. ><<
https://code.tutsplus.com/tutorials/how-to-use-mvvm-in-a-universal-windows-app--cms-25582
CC-MAIN-2021-04
en
refinedweb
Can anyone help me work out how to solve this... I am new to coding and I am really confused. - Mying Humtsoe Using Swift or Javascript... Create a function called "timeAdder" that can add two time values together. For example, it should be able to add 25 hours and 3 days together. The function should accept 4 parameters: value1, label1, value2, label2 value1 and value2 should accept positive integers label1 and label2 should accept any of the following strings: "seconds", "minutes", "hours", "days", "second", "minute", "hour", "day" For example your function may be called in any of the following ways: timeAdder(1,"minute",3,"minutes") timeAdder(5,"days",25,"hours") timeAdder(1,"minute",240,"seconds") Requirements: Your function should include at least one switch Your function must accept any possible combination of inputs If the inputs are valid, it should return a tuple with 2 variables inside of it: value3, and label3. For example: return (5,"minutes"). The exact label you choose to return for label3 ("minutes" for example) is up to you. - If the inputs are invalid or impossible, it should return false. Here are examples of impossible and invalid inputs: timeAdder(5,"hour",5,"minutes") // This is impossible because "hour" is singular and 5 is plural timeAdder(false,false,5,"minutes") // This is invalid because the first 2 arguments are not the correct types timeAdder({},"days",5,"minutes") // This is invalid because the first argument is the wrong type Hey @Mying-Humtsoe I decided to put my answer in video format. - Mying Humtsoe @avan Thank you so much, sir, from the deepest of my heart. I have spent days trying to find an answer to this problem and googling for a solution but failed, and I wanted to give up. You are a life saver.
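Since the accepted answer above is only available as a video, here is one possible JavaScript sketch that satisfies the stated requirements. Normalising everything to seconds is just one reasonable design choice, and the exact invalid-input rules are an interpretation of the examples given in the question:

function timeAdder(value1, label1, value2, label2) {
  const validLabels = ["second", "seconds", "minute", "minutes",
                       "hour", "hours", "day", "days"];

  const toSeconds = (value, label) => {
    if (!Number.isInteger(value) || value <= 0) return null;               // wrong type or not a positive integer
    if (typeof label !== "string" || !validLabels.includes(label)) return null;
    if (!label.endsWith("s") && value !== 1) return null;                  // e.g. (5, "hour") is impossible

    switch (label) {                                                        // the required switch
      case "second": case "seconds": return value;
      case "minute": case "minutes": return value * 60;
      case "hour":   case "hours":   return value * 3600;
      case "day":    case "days":    return value * 86400;
    }
  };

  const a = toSeconds(value1, label1);
  const b = toSeconds(value2, label2);
  if (a === null || b === null) return false;

  return [a + b, "seconds"];                                                // value3, label3
}

timeAdder(1, "minute", 3, "minutes");   // [240, "seconds"]
timeAdder(5, "days", 25, "hours");      // [522000, "seconds"]
timeAdder(5, "hour", 5, "minutes");     // false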
https://askavan.com/topic/123/can-anyone-one-help-me-know-to-solve-this-i-am-new-to-coding-and-i-really-confused/3
CC-MAIN-2021-04
en
refinedweb
Ruby: 'method_missing' and slightly misled by RubyMine Another library that we’re using on my project is ActionMailer and before reading through the documentation I was confused for quite a while with respect to how it actually worked. We have something similar to the following piece of code… …which when you click its definition in RubyMine takes you to this class definition: class Emailer < ActionMailer::Base def some_email recipients "some@email.com" from "some_other_email@whatever.com" # and so on end end I initially thought that method was called ‘deliver_some_mail’ but having realised that it wasn’t I was led to the ‘magic’ that is ‘method_missing’ on ‘ActionMailer::Base’ which is defined as follows: module ActionMailer ... class Base def method_missing(method_symbol, *parameters) #:nodoc: if match = matches_dynamic_method?(method_symbol) case match[1] when 'create' then new(match[2], *parameters).mail when 'deliver' then new(match[2], *parameters).deliver! when 'new' then nil else super end else super end end end end The ‘matches_dynamic_method?’ function allows us to extract ‘some_email’ from the ‘method_symbol’. That value is then passed into the object’s initializer method and is eventually called executing all the code inside that method. def matches_dynamic_method?(method_name) #:nodoc: method_name = method_name.to_s /^(create|deliver)_([_a-z]\w*)/.match(method_name) || /^(new)$/.match(method_name) end Reading through the documentation, the author gives the following reasons for having separate ‘create’ and ‘deliver’ methods: ApplicationMailer.create_signed_up("david@loudthinking.com") # => tmail object for testing ApplicationMailer.deliver_signed_up("david@loudthinking.com") # sends the email ApplicationMailer.new.signed_up("david@loudthinking.com") # won't work! In C# or Java I think we’d probably use another object to build up the message and then pass that to the ‘Emailer’ to deliver it so it’s quite interesting that both these responsibilities are in the same class. It also takes care of rendering templates and from what I can tell the trade off for having this much complexity in one class is that it makes it quite easy for the library’s clients - we just have to extend ‘ActionMailer::Base’ and we have access to everything that we need. About the author Mark Needham is a Developer Relations Engineer for Neo4j, the world's leading graph database.
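The dispatch trick itself is easy to experiment with outside of Rails. Here is a stripped-down sketch of the same pattern (purely illustrative, nothing to do with ActionMailer):

class Greeter
  def method_missing(name, *args)
    if (match = /\Ashout_(\w+)\z/.match(name.to_s))
      "#{match[1].upcase}!"
    else
      super # anything else still raises NoMethodError
    end
  end

  def respond_to_missing?(name, include_private = false)
    name.to_s.start_with?("shout_") || super
  end
end

Greeter.new.shout_hello    # => "HELLO!"
Greeter.new.whisper_hello  # => NoMethodError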
https://markhneedham.com/blog/2010/08/23/ruby-method_missing-and-slightly-misled-by-rubymine/
CC-MAIN-2018-30
en
refinedweb
Inspired by cubism and various projects using genetic algorithms to paint the Mona Lisa here a method for teaching your computer to be an artist! I bring you the artificial artist! The idea is to use regression to "learn" what an image looks like and then draw the learned image. By choosing the parameters of the regressor carefully you can achieve some interesting visual effects. If all this artsy talk is too much for you think of this as a way to compress an image. You could store the weights of the regressor instead of the whole image. First some standard imports of things we will need later: %matplotlib inline from base64 import b64encode from tempfile import NamedTemporaryFile import numpy as np import scipy import matplotlib.pyplot as plt from matplotlib import animation from IPython.display import HTML from sklearn.ensemble import RandomForestRegressor as RFR from skimage.io import imread from skimage.color import rgb2lab,lab2rgb from skimage.transform import resize,rescale from JSAnimation import IPython_display lakes = imread('') f,ax = plt.subplots(figsize=(10,6)) ax.xaxis.set_ticks([]); ax.yaxis.set_ticks([]) ax.imshow(lakes, aspect='auto') <matplotlib.image.AxesImage at 0x11e47a850> newfie = imread('') f,ax = plt.subplots(figsize=(10,6)) ax.xaxis.set_ticks([]); ax.yaxis.set_ticks([]) ax.imshow(newfie, aspect='auto') <matplotlib.image.AxesImage at 0x122aadf50> The Artificial Artist: a new Kind of Instagram Filter¶ The artificial artist will be based on a decision tree regressor using the $(x,y)$ coordinates of each pixel in the image as features and the RGB values as the target. Once the tree has been trained we ask it to make a prediction for every pixel in the image, this is our image with the "filter" applied. All this is taken care of in the simple_cubist function. Once you see the first filtered image you will understand why it is called simple_cubist. We also define a compare function which takes care of displaying several images next to each other or compiling them into an animation. This makes it easy to see what we just did. def simple_cubist(img): w,h = img.shape[:2] img = rgb2lab(img) xx,yy = np.meshgrid(np.arange(w), np.arange(h)) X = np.column_stack((xx.reshape(-1,1), yy.reshape(-1,1))) Y = img.reshape(-1,3, order='F') min_samples = int(round(0.001 * len(X))) model = RFR(n_estimators=1, n_jobs=6, max_depth=None, min_samples_leaf=min_samples, random_state=43252, ) model.fit(X, Y) art_img = model.predict(X) art_img = art_img.reshape(w,h,3, order='F') return lab2rgb(art_img) def compare(*imgs, **kwds): """Draw several images at once for easy comparison""" animate = kwds.get("animate", False) if animate: fig, ax = plt.subplots(figsize=(8,5)) ax.xaxis.set_ticks([]) ax.yaxis.set_ticks([]) anim = animation.FuncAnimation(fig, lambda x: ax.imshow(imgs[x%len(imgs)], aspect='auto'), frames=len(imgs), interval=1000) fig.tight_layout() return anim else: figsize = plt.figaspect(len(imgs)) * 1.5 fig, axes = plt.subplots(nrows=len(imgs), figsize=figsize) for a in axes: a.xaxis.set_ticks([]) a.yaxis.set_ticks([]) for ax,img in zip(axes,imgs): ax.imshow(img) fig.tight_layout() # Take the picture of the Lake District and apply our # simple cubist filter to it simple_lakes = simple_cubist(lakes) compare(lakes, simple_lakes, animate=True)
http://betatim.github.io/posts/artificial-artist/
CC-MAIN-2018-30
en
refinedweb
When I started using TypeScript for my Angular applications, I was confused about all the different ways with which you could import other modules. import './polyfills.ts'; import { Component } from '@angular/core'; import HomeComponent from './pages/home/home-page.component'; import * as _ from 'lodash'; import assert = require('assert'); At first, I thought that as a programmer you could choose whether you wanted to use curly braces or not, but I quickly found out that that was not the case. It all depends on how the module that you are importing is structured. I have created an overview of the different ways by which a module can be exported, together with their corresponding import syntax. Most of them are actually plain ECMAScript 2015 (ES6) module syntax that TypeScript uses as well.
https://blog.jdriven.com/tag/es6/
CC-MAIN-2018-30
en
refinedweb
Lazarus 1.10.0 release notes From Free Pascal wiki Lazarus 1ToolBar children ignore Align - 6.2.2 TCustomComboBox.ReadOnly was deprecated - 6.2.3 Predefined clipboard format pcfDelphiBitmap was removed - 6.2.4 TEdit.Action visibility lowered to public - 6.2.5 TControl.ScaleFontsPPI, .DoScaleFontPPI parameter change - 6.2.6 MouseEntered deprecated/missing - 6.2.7 TCustomImageList.Add method - 6.2.8 TCustomTreeView.OnChanging event: Node parameter - 6.2.9 No LCL Application exception dump - 6.2.10 No default LazLogger - 6.2.11 Screenshot for LCLExceptionStackTrace and LazLogger Additions and Overrides - 6.3 Components incompatibilities - 6. Added flags to exclude some graphics format]Width property to decide what custom width at 96 PPI (100% scale) is to be used. Example: TToolBar.Images/ImageWidth, TListView.LargeImages/Large. IDE Changes - several High-DPI IDE improvements and retina support on Cocoa - Delphi Attributes: Find declaration, parameter hints, $modeswitch prefixedattributes. - The IDE parses the custom compiler options for the fpc switch -FN<namespaces>. - pas2js support: - Added IDE package pas2jsdsgn: - create a browser or nodejs webapplication - on Run start Debugger - Alpha: LLDB based debugger for MacOs (no code-signing required) - Alpha: LLDB + FpDebug based debugger for MacOs (no code-signing required) - GDB Debugger: new options: - Added option "FixStackFrameForFpcAssert" to workaround fpc wrong frame pointer (display correct line after assert failed) - Added option "AssemblerStyle": ATT vs Intel - Added option "DisableStartupShell": Required on MacOs. - Added more size limits for data evaluation (avoid errors, timeouts and extremely slow responses) -. - Auto closing of the asm window, if it was opened by breaking at a none source line. - Dragging selected identifier from source editor to watches window, to create a watch. Components TOpenGLControl - New property Options of type set, currently with ocoMacRetinaMode as the only member. If set, ocoMacRetinaMode determines that the OpenGL controls will use retina support (high resolution mode).). Changes affecting compatibility: remove from .lfm manuallyCLExceptionStacktrace" (see screenshot below). Screenshot for LCLExceptionStackTrace and LazLogger Additions and Overrides.
http://wiki.freepascal.org/Lazarus_1.10.0_release_notes
CC-MAIN-2018-30
en
refinedweb
Signal model when content in view changes I want the model to change QCheckState when a linked QLineEdit's content changes. I've subclassed QStyledItemDelegate and tried to connect signals for textEdited() to a method in the delegate that checks a QCheckbox in another column using setData(..,Qt::CheckStateRole). I realized it won't be so easy, because I only have a QModelIndex in createEditor, which gives me only a QAbstractItemModel const and then I cannot call setData(). I've already reimplemented editorEvent(), but it is not triggered for my (persistent) editors. I feel like I have run out of options, because there's no other method that receives a non-const model. Hence I am asking before I have to break the model/view encapsulation or try to cast the const away. Why don't you let the model itself do that work then in the setData method? Yeah, that works when the editing is finished. AFAIK the model's setData() and the delegate's setModelData() only get called when the editor loses focus, e.g. when you click somewhere else or press enter. What I want is a checked checkbox whenever the editing started. Another problem with this is, that when you edit sth. then click on the checkbox, the default editorEvent() implementation of QStyledItemDelegate will just flip the checked state, that you modified in setData() yourself. What I ended up doing is to use the QTableView's indexWidget() method -- that always returns the correct editor for my persistent editors -- for every editor and connect it's signal to a method that looks up the QModelIndex for a specific QWidget and then calls the model's setData() method. Hi kossmoboleat I also need to get a signal when editing of a cell starts (in my case in a QTreeView and a QListView). I like your idea to use QAbstractItemView::indexWidget() and understand that you connect an appropriate signal such as QLineEdit::textEdited() to a slot. - Do you establish the connection in a custom delegate? Is overriding QAbstractItemDelegate::editorEvent() suitable to connect textEdited() to a slot? - In which function (of your delegate) do you disconnect the signal? Or do you leave the disconnect to Qt? Alternatively, I believe QTreeView (and QListView) could be derived and the two virtual functions edit() and closeEditor() overridden to connect and disconnect the textEdited() of the editor widget to a slot. No custom delegate would be needed (but derived view classes). I appreciate to learn these details as it will help me to encounter issues that you may have solved already. Best Al_ Hi Al, I actually had to inherit from the view because you don't have a non-const model in the delegate and I wanted to change data in the model when editing in a cell started. If you don't need to do this, you could possibly do it in the delegate. editorEvent() might be suitable, but I'd prefer edit() as it seems to be "more logical" place. I actually didn't have to bother with that, because I'm using persistent editors and I'm connecting the signals when all the editors have to opened with openPersistentEditor(). I left it to Qt to disconnect the signals when the editors are destroyed. I still don't like the solution so very much, because I have to use dynamic casts. When an editor is changed I have to try to cast it to several editor widgets and react accordingly. I've looked into the implementation of Qt and they do it the same way, so I'm not sure if there is a better solution. regards, tim Hi tim Thanks for the clarifications. 
I ended up overriding QStyledItemDelegate::editorEvent() and connect the appropriate 'value is edited' signal (signal name, as you also note, depends on the editor widget) with my startEdit() signal. This works nicely if the editor widget is a QLineEdit as this has a signal textEdited(const QString&). For all other widgets, my views notably also use comboboxes, startEdit() is already emitted when the editor is loaded with the current value, i.e., before the user actually has modified the value. Thus, in my application the close button of the dialog is disabled too early and the Cancel and the Save buttons are enabled too early (i.e., even if the user only attempts to edit but eventually decides to keep the value as is). Below my code in case someone has a similar need. (Ignore the paint function: my delegate does also something completely unrelated; it crosses out deleted items that are not yet committed to the model) Best Al_ header file @#ifndef QXCROSSOUTDELEGATE_H #define QXCROSSOUTDELEGATE_H #include <QStyledItemDelegate> class QXCrossoutDelegate : public QStyledItemDelegate{ Q_OBJECT public: explicit QXCrossoutDelegate(QObject parent = 0); virtual void paint (QPainter painter, const QStyleOptionViewItem& option, const QModelIndex& index ) const; virtual QWidget* createEditor(QWidget* parent, const QStyleOptionViewItem& option, const QModelIndex& index) const; signals: void startEdit() const;}; #endif // QXCROSSOUTDELEGATE_H@ implementation file @#include <QLineEdit> #include <QDateTimeEdit> #include <QDoubleSpinBox> #include <QSpinBox> #include <QComboBox> #include "qxcrossoutdelegate.h" QXCrossoutDelegate::QXCrossoutDelegate(QObject *parent) : QStyledItemDelegate(parent){} void QXCrossoutDelegate::paint(QPainter* painter, const QStyleOptionViewItem& option, const QModelIndex& index) const { QStyleOptionViewItem newOption(option); if (index.model()->headerData(index.row(), Qt::Vertical, Qt::DisplayRole).toString() == QLatin1String("!")) newOption.font.setStrikeOut(true); QStyledItemDelegate::paint(painter, newOption, index);} QWidget* QXCrossoutDelegate::createEditor(QWidget* parent, const QStyleOptionViewItem& option, const QModelIndex& index) const { QWidget* editorWidget = QStyledItemDelegate::createEditor(parent, option, index); bool ok(true); if (const QLineEdit* editorWidget_1 = qobject_cast<const QLineEdit*>(editorWidget)) ok = connect(editorWidget_1, SIGNAL(textEdited(const QString&)), this, SIGNAL(startEdit())); /* The following widgets have no signal emitted only upon user change of value. As setEditorData will be called even before the user actually modifies the value, we can as well emit startEdit() already now else if (const QDateTimeEdit* editorWidget_1 = qobject_cast<const QDateTimeEdit*>(editorWidget)) ok = connect(editorWidget_1, SIGNAL(dateTimeChanged(QDateTime)), this, SIGNAL(startEdit())); else if (const QDoubleSpinBox* editorWidget_1 = qobject_cast<const QDoubleSpinBox*>(editorWidget)) ok = connect(editorWidget_1, SIGNAL(valueChanged(double)), this, SIGNAL(startEdit())); else if (const QSpinBox* editorWidget_1 = qobject_cast<const QSpinBox*>(editorWidget)) ok = connect(editorWidget_1, SIGNAL(valueChanged(int)), this, SIGNAL(startEdit())); else if (const QComboBox* editorWidget_1 = qobject_cast<const QComboBox*>(editorWidget)) ok = connect(editorWidget_1, SIGNAL(currentIndexChanged(int)), this, SIGNAL(startEdit())); */ else emit startEdit(); // unknown editor widget, emit startEdit now: better too early than never Q_ASSERT(ok); return editorWidget;} @
https://forum.qt.io/topic/14660/signal-model-when-content-in-view-changes
CC-MAIN-2018-30
en
refinedweb
Technical Articles How to create a high available SAPMNT share? In a distributed SAP landscape, there must be a central SAPMNT share. This SAPMNT share will be accessed using the parameter SAPGLOBALHOST. On a Windows Failover Cluster installation, SAPGLOBALHOST is identical to the virtual hostname used for the (A)SCS instance. But how should you make this very important sapmnt share “high available”? In a Failover Cluster installation, this is done by default because the share is configured on a shared disk and the disk must be therefore highly available. In this blog, I want to show you possible solutions for distributed or special Failover Cluster configurations, in which SAPMNT share is configured on a remote SMB file share. Take a close look at the advantages and disadvantages of each solution to find the right one for your SAP operations. Watch out, this blog will be updated and enhanced in the future! Follow this blog if you want to stay informed. Solution 1: A file share on a shared disk in a Windows Failover Cluster In this example, I have configured a network name “sap-global-host” in DNS and a Client Access Point in a Failover Cluster. Screenshot of a cluster group which contain the necessary resources: Screenshot of sapmnt share: This cluster provides the share \\sap-global-host\sapmnt high available. There are many solutions available on the market to provide a shared disk in a Failover Cluster: - iSCSI attached disks - SAN attached disks - software mirroring solutions like DoubleTake, SIOS, AFSDrive, etc. Remark: The Continuous Availablity feature can be turned on if you are running Windows 2012, 2012 R2 or 2016 with at least the cumulative rollup patch of May, 2017. For more information about this feature read SAP Note 2287140 – “Support of Failover Cluster Continuous Availabilty feature (CA)” Advantages: - High availability of sapmnt share covered by Failover Cluster technology Disadvantages: - shared disk needed, in geographically dispersed clusters this can be expensive Solution 2: A NAS solution from a storage vendor There are many NAS solutions available on the market. Well-known ones are NetApp Filers, EMC Cellera/Isilon/ScaleIO, HP StoreEasy, Nutanix, and many others. All these solutions provide high available SMB/CIFS file shares. Contact the vendor for more information. Choose a solution which supports SMB protocol version 3.0 or higher. The solution must support different additional (= virtual, logical) hostnames to support many different SAPMNT shares of several SAP systems. Example: \\networkname1\sapmnt \\networkname2\sapmnt \\networkname3\sapmnt \\networkname4\sapmnt … Advantages: - High availability of sapmnt share covered by proven third-party vendor solutions - support available from vendor specific solution - scalable for high performance SMB load (depending on solution) Disadvantages: - expensive - configuration needs special know-how, vendor specific Solution 3: SAMBA version 4.1 or higher SAMBA as of version 4.1 supports SMB 3.0 protocol. There are several special HA solutions available on UNIX platforms and Linux distributions. The solution must support different addtitional (= virtual, logical) hostnames to support many different SAPMNT shares. 
Advantages: - High availability of sapmnt share covered Unix/Linux OS which hosts SAMBA - cheap Disadvantages: - support only available from community - configuration needs special SAMBA and Unix know-how For more information about SAMBA follow this link: Solution 4: A virtual machine (VM) with a Windows configured to host many file shares Let’s call this “poor man’s high availability”. You configure many SAPMNT shares using different hostnames on a normal Windows Server OS. This Windows runs virtualized on VMware, Hyper-V, or any other Hypervisor. Advantages: - For a planned downtime of the Hypervisor host, you can simply “live migrate” the VM to another host with just an interruption of a second - Cheap implementation Disadvantages: - If an unplanned downtime of the Windows VM occured, for example because the Hypervisor host or the guest OS crashed, the SAPMNT shares are not available until the VM will be started on another host - If you have to apply monthly Windows patches to this VM, the Windows VM is also not available for some time Solution 5: A Windows Server configured to host many file shares, running on a special solution like VMware FT, Stratus, or similar solutions This is similar to solution 4. But in this scenario, the Windows OS hosting the SAPMNT shares is provided with high availability by special solutions like VMware’s Fault Tolerance, Stratus, or other solutions available on the market. Advantage: - high availability of the Windows OS and the sapmnt shares - Failover does not cause any interruption Disadvantages: - expensive solutions - no protection against OS crashes - If you have to apply monthly Windows patches to this VM, the Windows VM is also not available for some time Solution 6: “Scale out Fileserver” (SoFs) from Microsoft to provide high available file shares. This solution is similar to solution 1. However, the SoFs provides load balancing to all cluster nodes. It’s limited to one geographically site only. A share on a SoFs would also support the SMB 3.x CA feature. Advantage: - standard solution from Microsoft, good documentation available - scalable for high performance SMB load Disadvantages: - needs a shared disk - limitations, for example one geographically site only Solution 7: Use DFS-R to provide sapmnt share You configure a domain based DFS root in your Active Directory (AD). The DFS replication mechanism is used to replicate the sapmnt data. <the example in older versions of this blog has been removed. It was correct in theory, but didn’t work with SWPM in the real world> If you use want to use DFS you need to create a DFS using a hostname of type \\<hostname>\sapmnt In an older version of this blog I use the example \\<hostname>\<something>\sapmnt This does not work, because this path will not be recognized by SWPM. Advantage: - cheap solution - DFS is a standard solution from Microsoft and well documented - many sapmnt shares using different Namespaces Disadvantages: - must be monitored and operated separately from SAP or AD operations - DFS replication is always asynchronous - cannot be used, if high I/O is expected on sapmnt share - several sapmnt shares mean higher replication load for DFSR services Hello Karl-Heinz, Many thanks for updating the Solution 7, ignoring the path to sapmnt that has to be \\<hostname>\sapmnt there was additional information around using C-Name with DFS-Namespace. It now makes sense to stick with the DFS-Namespace root directory to be part of the UNC path for SWPM to work. 
We have a scenario where we are using Domain-Based-DFS-Namespace, and your update helps. Thanks again! Regards, Jitendra Singh Even if the question from Jitendra has been answered some time ago via e-mail ... 🙂 ... here is the answer. Don't use a variant like \\<hostname>\<fileshare>\sapmnt! Use this variant to provide several SAPMNT shares for several systems: Examples: \\SAP_ABC_host\sapmnt \\<name of your Windows domain>\sapmnt (this means, sapmnt is a DFS namespace) \\SAP_PROD_ABC\sapmnt ...
https://blogs.sap.com/2017/07/21/how-to-create-a-high-available-sapmnt-share/
CC-MAIN-2022-40
en
refinedweb
napari image arithmetic widget¶ napari is a fast, interactive, multi-dimensional image viewer for python. It uses Qt for the GUI, so it’s easy to extend napari with small, composable widgets created with magicgui. Here we’re going to build this simple image arithmetic widget with a few additional lines of code. For napari-specific magicgui documentation, see the napari docs outline¶ This example demonstrates how to: ⬇️ Create a magicgui widget that can be used in another program (napari) ⬇️ Use an Enum to create a dropdown menu ⬇️ Connect some event listeners to create interactivity. code¶ Code follows, with explanation below… You can also get this example at github. 1from enum import Enum 2 3import numpy 4import napari 5from napari.types import ImageData 6 7from magicgui import magicgui 8 9class Operation(Enum): 10 """A set of valid arithmetic operations for image_arithmetic. 11 12 To create nice dropdown menus with magicgui, it's best 13 (but not required) to use Enums. Here we make an Enum 14 class for all of the image math operations we want to 15 allow. 16 """ 17 add = numpy.add 18 subtract = numpy.subtract 19 multiply = numpy.multiply 20 divide = numpy.divide 21 22 23# here's the magicgui! We also use the additional 24# `call_button` option 25@magicgui(call_button="execute") 26def image_arithmetic( 27 layerA: ImageData, operation: Operation, layerB: ImageData 28) -> ImageData: 29 """Add, subtracts, multiplies, or divides to image layers.""" 30 return operation.value(layerA, layerB) 31 32# create a viewer and add a couple image layers 33viewer = napari.Viewer() 34viewer.add_image(numpy.random.rand(20, 20), name="Layer 1") 35viewer.add_image(numpy.random.rand(20, 20), name="Layer 2") 36 37# add our new magicgui widget to the viewer 38viewer.window.add_dock_widget(image_arithmetic) 39 40# keep the dropdown menus in the gui in sync with the layer model 41viewer.layers.events.inserted.connect(image_arithmetic.reset_choices) 42viewer.layers.events.removed.connect(image_arithmetic.reset_choices) 43 44napari.run() walkthrough¶ We’re going to go a little out of order so that the other code makes more sense. Let’s start with the actual function we’d like to write to do some image arithmetic. the function¶ Our function takes two numpy arrays (in this case, from Image layers), and some mathematical operation (we’ll restrict the options using an Enum). When called, our function calls the selected operation on the data. def image_arithmetic(array1, operation, array2): return operation.value(array1, array2) type annotations¶ magicgui works particularly well with type annotations, and allows third-party libraries to register widgets and behavior for handling their custom types (using magicgui.type_map.register_type()). napari provides support for magicgui by registering a dropdown menu whenever a function parameter is annotated as one of the basic napari Layer types, or, in this case, ImageData indicates we just the data attribute of the layer. Furthermore, it recognizes when a function has a Layer or LayerData return type annotation, and will add the result to the viewer. So we gain a lot by annotating the above function with the appropriate napari types. from napari.types import ImageData def image_arithmetic( layerA: ImageData, operation: Operation, layerB: ImageData ) -> ImageData: return operation.value(layerA, layerB) the magic part¶ Finally, we decorate the function with @magicgui and tell it we’d like to have a call_button that we can click to execute the function. 
@magicgui(call_button="execute") def image_arithmetic(layerA: ImageData, operation: Operation, layerB: ImageData): return operation.value(layerA, layerB) That’s it! The image_arithmetic function is now a FunctionGui that can be shown, or incorporated into other GUIs (such as the napari GUI shown in this example) Note While type hints aren’t always required in magicgui, they are recommended (see type inference )… and they are required for certain things, like the Operation(Enum) used here for the dropdown and the napari.types.ImageData annotations that napari has registered with magicgui. create dropdowns with Enums¶ We’d like the user to be able to select the operation ( add, subtract, multiply, divide) using a dropdown menu. Enums offer a convenient way to restrict values to a strict set of options, while providing name: value pairs for each of the options. Here, the value for each choice is the actual function we would like to have called when that option is selected. class Operation(enum.Enum): add = numpy.add subtract = numpy.subtract multiply = numpy.multiply divide = numpy.divide add it to napari¶ When we decorated the image_arithmetic function above, it became a FunctionGui. Napari recognizes this type, so we can simply add it to the napari viewer as follows: viewer.window.add_dock_widget(image_arithmetic) Caution This api has changed slightly with version 0.2.0 of magicgui. See the migration guide if you are migrating from a previous version. connect event listeners for interactivity¶ What fun is a GUI without some interactivity? Let’s make stuff happen. We connect the image_arithmetic.reset_choices function to the viewer.layers.events.inserted/removed event from napari, to make sure that the dropdown menus stay in sync if a layer gets added or removed from the napari window: viewer.layers.events.inserted.connect(image_arithmetic.reset_choices) viewer.layers.events.removed.connect(image_arithmetic.reset_choices) Tip An additional offering from magicgui here is that the decorated function also acquires a new attribute “ called” that can be connected to callback functions of your choice. Then, whenever the gui widget or the original function are called, the result will be passed to your callback function: @image_arithmetic.called.connect def print_mean(value): """Callback function that accepts an event""" # the value attribute has the result of calling the function print(np.mean(value)) >>> image_arithmetic() 1.0060037881040373
https://napari.org/magicgui/examples/napari/napari_img_math.html
CC-MAIN-2022-40
en
refinedweb
Python – ‘solved’ program is an example of divide-and-conquer programming approach where the binary search is implemented using python. Binary Search implementation In binary search we take a sorted list of elements and start looking for an element at the middle of the list. If the search value matches with the middle value in the list we complete the search. Otherwise we eleminate half of the list of elements by choosing whether to procees with the right or left half of the list depending on the value of the item searched. This is possible as the list is sorted and it is much quicker than linear search. Here we divide the given list and conquer by choosing the proper half of the list. We repeat this approcah till we find the element or conclude about it’s absence in the list. def bsearch(list, val): list_size = len(list) - 1 idx0 = 0 idxn = list_size # Find the middle most value while idx0 <= idxn: midval = (idx0 + idxn)// 2 if list[midval] == val: return midval # Compare the value the middle most value if val > list[midval]: idx0 = midval + 1 else: idxn = midval - 1 if idx0 > idxn: return None # Initialize the sorted list list = [2,7,19,34,53,72] # Print the search result print(bsearch(list,72)) print(bsearch(list,11)) When the above code is executed, it produces the following result: 5 None
https://scanftree.com/tutorial/python/python-data-structure/python-divide-conquer/
CC-MAIN-2022-40
en
refinedweb
{emayili} Message Templates
Services like Mailchimp and MailerLite make it easy to create stylish email campaigns. Their templating tools allow you to create elegant HTML messages which are personalised to the recipient. Wouldn't it be cool if you could do something similar when sending emails from R? Well, with the latest version of {emayili}, that's now possible (although this feature is definitely in its infancy!).

We'll start by loading {emayili} and checking that we have the right version (anything beyond 0.7.4 will be fine).

library(emayili)
packageVersion("emayili")
[1] '0.7.4'

To install this version from GitHub you can use:

remotes::install_github("datawookie/emayili", "v0.7.4")

We'll set a couple of options to ensure that we see the contents of a message as it's built.

options(envelope.invisible = FALSE)
options(envelope.details = TRUE)

How Does It Work?
You can create a message template using the Jinja template syntax. The template is populated in R with the {jinjar} package. For the purposes of {emayili}, templates must be stored in a separate directory and must be named either template.txt or template.html. We might implement a more flexible system in future, but this works for the moment.

Suppose I create a directory simple and a file, template.html, within it with the following content:

<p>Hello {{ name }}!</p>

There is one placeholder in the template for a variable called name. To use the template in a message you'd call the template() function, providing a reference to the template as well as values for any placeholders in the template. The template reference can be either:

- the name of a builtin template (part of {emayili})
- a relative path or
- an absolute path.

The paths must refer to the directory containing the template file, not the file itself. So, let's try out our simple template and use "Bob" for the value of name.

envelope() %>%
  template("./templates/simple", name = "Bob")

Date: Fri, 21 Jan 2022 07:46:30 GMT
X-Mailer: {emayili}-0.7.4
MIME-Version: 1.0
Content-Type: text/html; charset=utf-8
Content-Disposition: inline

<html><body><p>Hello Bob!</p></body></html>

The resulting message has the content of the template attached as an HTML body, and the {{ name }} placeholder has been replaced with the specified value. This is not too different to the {glue} interpolation which is already supported in the text() and html() methods. However, there's actually a lot of scope for doing more, because the template syntax also supports programming structures like loops and conditionals.

Loops & Conditionals
Okay, so let's get a little more sophisticated and create a template which uses a loop and a conditional.

{% if greet %}
<p>Hello!</p>
{% endif %}

{% for paragraph in text -%}
<p>{{ paragraph }}</p>
{% endfor %}

Nothing fancy, but it illustrates the principle. Let's populate the template. Now we need to provide values for both greet and text.

envelope() %>%
  template(
    "./templates/loop-conditional",
    greet = TRUE,
    text = c(
      "Lorem ipsum dolor sit amet, lacus nostra est, id, orci magna bibendum felis dis.",
      "Ut laoreet tincidunt netus sed mi habitasse ut blandit, in, auctor nibh.",
      "Sem dapibus rhoncus proin dolor, vitae diam sed."
    )
  )

Date: Fri, 21 Jan 2022 07:46:30 GMT
X-Mailer: {emayili}-0.7.4
MIME-Version: 1.0
Content-Type: text/html; charset=utf-8
Content-Disposition: inline

<html><body>
<p>Hello!</p>
<p>Lorem ipsum dolor sit amet, lacus nostra est, id, orci magna bibendum felis dis.</p>
<p>Ut laoreet tincidunt netus sed mi habitasse ut blandit, in, auctor nibh.</p>
<p>Sem dapibus rhoncus proin dolor, vitae diam sed.</p>
</body></html>

Example
Okay, so those examples were cute, but nothing that you'd actually use. I've included one template with {emayili} which will generate a fairly attractive HTML message. The code for populating the template is a little lengthy, but you can look at it here. This is what the resulting message looks like (as viewed in Thunderbird). All of the content (text, images and links) was provided by template parameters, so it's pretty flexible. The articles are populated via a loop, so you can include as many of them as you wish.

Conclusion
As I mentioned earlier, this feature is still pretty fresh and I've no doubt that there are bugs. However, I'm very excited about what's now possible. You can now replicate services like Mailchimp or MailerLite from within R. The only constraints are your imagination and aesthetics (and I'm sorely lacking in the latter). If you create a template which you'd like included in {emayili}, please swing it my way.
https://www.r-bloggers.com/2022/01/emayili-message-templates-2/
CC-MAIN-2022-40
en
refinedweb
Job interviews for software engineering and other programming positions can be tough. There are too many things to study, and even then it still might not be enough. Previously I had written about a common Fibonacci number algorithm and finding duplicate values in an array. Those skill refreshers were written in JavaScript. This time we are going to take a turn and validate bracket combinations using the Java programming language.

So when I say bracket combinations, what exactly do I mean? Take the following string for example:

String validCombo = "{ [ ( { } [ ] ) ] }";

The above string is valid because each opening and closing bracket aligns correctly. The spacing between the brackets doesn't matter because it will be ignored. An example of an invalid string might look like the following:

String invalidCombo = "{ ( [ ] ] }";

Notice that we are trying to pair a ( with a ], which is not correct. So how can we attempt to validate such a string in Java?

The recommended way would be to make use of the Stack data type. The idea is to add all opening brackets to a stack and pop them off the stack when closing brackets are found. If the closing bracket doesn't match the top element (last pushed element) of the stack, then the string is invalid. The string is also invalid if it has been iterated through and the stack is not empty in the end.

import java.util.*;

public class Brackets {

    private String brackets;

    public Brackets(String s) {
        brackets = s;
    }

    public boolean validate() {
        boolean result = true;
        Stack<Character> stack = new Stack<Character>();
        char current, previous;
        for (int i = 0; i < this.brackets.length(); i++) {
            current = this.brackets.charAt(i);
            if (current == '(' || current == '[' || current == '{') {
                stack.push(current);
            } else if (current == ')' || current == ']' || current == '}') {
                if (stack.isEmpty()) {
                    result = false;
                } else {
                    previous = stack.peek();
                    if ((current == ')' && previous == '(') ||
                        (current == ']' && previous == '[') ||
                        (current == '}' && previous == '{')) {
                        stack.pop();
                    } else {
                        result = false;
                    }
                }
            }
        }
        if (!stack.isEmpty()) {
            result = false;
        }
        return result;
    }

}

The above code operates with a time complexity of O(n), and to demo it you could do something like the following:

Brackets b = new Brackets("{[({}())]}");
System.out.println("Valid String: " + b.validate());

Balancing parentheses and brackets is a good interview question because it is one of the first steps to understanding how to parse and validate data. Imagine if you were asked to write a code interpreter or parse JSON. Knowing how to balance or validate brackets and parentheses will certainly help. Please share your experience with this topic in the comments if you've had it as an interview question or if you think you have a better solution than what I've come up with. A video version of this article can be seen below.
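As an editorial aside that is not part of the original (Java-focused) article, the same stack-based check is compact enough in Python to serve as a quick cross-language reference for interview practice; the function name and the closer-to-opener mapping below are illustrative choices:

def is_balanced(s):
    # Map each closing bracket to the opening bracket it must match.
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in s:
        if ch in '([{':
            stack.append(ch)
        elif ch in pairs:
            # Invalid if there is nothing to match or the top of the stack is the wrong opener.
            if not stack or stack.pop() != pairs[ch]:
                return False
    # Any leftover openers mean the string is unbalanced.
    return not stack

print(is_balanced("{[({}())]}"))  # True
print(is_balanced("{([]]}"))      # False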
https://www.thepolyglotdeveloper.com/2015/02/validate-bracket-parenthesis-combos-using-stacks/
CC-MAIN-2022-40
en
refinedweb
boundaries.month() function

boundaries.month() is experimental and subject to change at any time.

boundaries.month() returns a record with start and stop boundary timestamps for the current month. now() determines the current month.

Function type signature

(?month_offset: int) => {stop: time, start: time}

Parameters

month_offset
Number of months to offset from the current month. Default is 0. Use a negative offset to return boundaries from previous months. Use a positive offset to return boundaries for future months.

Examples
- Return start and stop timestamps for the current month
- Query data from this month
- Query data from last month

Return start and stop timestamps for the current month

import "experimental/date/boundaries"

option now = () => 2022-05-10T10:10:00Z

boundaries.month()
// Returns {start: 2022-05-01T00:00:00.000000000Z, stop: 2022-06-01T00:00:00.000000000Z}

Query data from this month

import "experimental/date/boundaries"

thisMonth = boundaries.month()

from(bucket: "example-bucket")
    |> range(start: thisMonth.start, stop: thisMonth.stop)

Query data from last month

import "experimental/date/boundaries"

lastMonth = boundaries.month(month_offset: -1)

from(bucket: "example-bucket")
    |> range(start: lastMonth.start, stop: lastMonth.stop)
https://docs.influxdata.com/flux/v0.x/stdlib/experimental/date/boundaries/month/
CC-MAIN-2022-40
en
refinedweb
MicroPython Tutorial XVI

Ok, let's do something different. LEGO has an iOS app that you can use as a remote. It is very good and lets you design your own controls. But to use it you need to be running the LEGO Scratch interface. If you're on a MicroPython SD card you need something else. But wait: BEFORE you read on, be warned that the something else in this tutorial needs a WiFi connection, something like the LEGO-recommended EDIMAX USB adaptor; without it this tutorial won't work. I should say too that this isn't a tutorial like the others I posted to date. This is a tutorial for those with quite a bit of experience in coding. All in all it will be around 120+ lines of code. Here is a video showing what this does too. It is also not a code base designed to work by itself, but in co-operation with the app remoteCode, that you can download here. OK, done that, let's go.

#!/usr/bin/env pybricks-micropython
from pybricks import ev3brick as brick
from pybricks.ev3devices import Motor, UltrasonicSensor, ColorSensor, GyroSensor, InfraredSensor
from pybricks.parameters import Port, Color
from pybricks.robotics import DriveBase
from pybricks.tools import print, wait, StopWatch
from time import sleep
import threading
import sys
import random
import socket
import os

# Section 01
hostname = os.popen('hostname -I').read().strip().split(" ")
print("hostname address", hostname[0])
hostIPA = hostname[0]
port = random.randint(50000, 50999)

# Section 02
left = Motor(Port.C)
right = Motor(Port.B)
# 56 is the diameter of the wheels in mm
# 114 is the distance between them in mm
robot = DriveBase(left, right, 56, 114)

print("host ", hostIPA)
print("port", port)

# Section 03
online = True
ai = socket.getaddrinfo(hostIPA, port)
addr = ai[0][-1]
backlog = 5
size = 1024
s = socket.socket()
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(addr)
s.listen(backlog)

# Section 0E
try:
    res = s.accept()
    while online:
        client_s = res[0]
        # client_addr = res[1]
        req = client_s.recv(1024)
        data = req.decode('utf-8')
        print("data ", data)
except AssertionError as error:
    print("Closing socket", error)
    client_s.close()
    s.close()

We have imported all the classes we will ultimately need for this code to work, to keep this tutorial manageable. OK, we begin in Section 01 by running a query against the OS to get the IP address of the robot we're running on, plus a random port number. We need to give both these bits of information to the app so that it can communicate with the robot. Next, in Section 02, we have simply defined the two connected motors, declaring them in the process as a drive pair. In Section 03 we set up the code needed to communicate with the iOS device, and indeed try to make that connection. In Section 0E we launch the main loop through which the app remoteCode and our MicroPython code will talk, printing out the conversation that is taking place. Yes, 0E doesn't follow 03; we dropped a few sections to keep things short and get you going.

Download the app remoteCode here, copy and paste this into a Python code file on your robot, run it, and then run remoteCode on your iOS device. Enter the IP address of your robot and the port into the iOS app and they should start talking to each other. Assuming you're connected and it all works, you should see the words "data #:connected" appear on the screen. This is the app talking to your Python app. Now go and select one of the interfaces: "Keyboard, Touchpad or Motion". You should see more text appearing; assuming you choose "Keyboard", for example, it will say "#:begin" followed by "#:keypad".
Swipe the iPad right and it'll say "#:end" and then return to the main menu. Note that if your iOS device's screen locks during the process, you'll need to quit the Python script and re-run the process. You can quit your connection on the iOS device by shaking it. And there you have it, the basis of our remote app. But OK, where do you go from here? Here is some more code to add to the mix. These two procedures show subroutines to interpret the #: commands you just saw returned by the app.

# Section 07
def actionTrigger(data, client):
    global transmit
    if data[:5] == "#:end":
        stopMotors()
        brick.sound.beep()
        peerMode = False
    if data[:6] == "#:peer":
        peerMode = True
    if data[:7] == "#:begin":
        pass
    if data[:8] == "#:keypad":
        brick.light(Color.YELLOW)
    if data[:10] == "#:touchpad":
        brick.light(Color.RED)
    if data[:8] == "#:motion":
        brick.light(Color.ORANGE)
    if data[:5] == "#:con":  # connected
        brick.sound.beep()
    if data[:5] == "#:dis":  # disconnect
        brick.light(Color.BLACK)
        wait(2000)
        brick.light(Color.GREEN)
        client_s.close()
        s.close()
    if data[:8] == "#:short":
        stopMotors()
        brick.sound.beep()
    if data[:6] == "#:long":
        stopMotors()
        brick.sound.beep()

# Section 08
def stopMotors():
    print("STOP STOP STOP ")
    robot.stop()

This code is reasonably self-explanatory. As you change to different interfaces, the colours of the robot will change in response to the different "#:" conversations the iOS code sends back to it. We're almost there. I am going to give you the last section, which is the code base you need for the tracker/motion interfaces, and leave the keyboard one as an exercise for you to figure out. You should already have the general gist of the way it works now. First we add a method to interpret the streams of data that come back if you choose the motion or the tracker interface. Note the code here is a little more complicated, since I want to ignore duplicate data packets sent by the app.
https://marklucking.medium.com/micropython-tutorial-xvi-1b34071f4640?source=post_internal_links---------3----------------------------
CC-MAIN-2022-40
en
refinedweb
As we discussed in the previous chapter, the publisher is responsible for the generation of unbounded asynchronous events, and it pushes them to the associated subscribers. It is represented by the org.reactivestreams.Publisher interface, as follows: public interface Publisher<T> { public void subscribe(Subscriber<? super T> s); } The interface provides a single subscribe method. The method is invoked by any party that is interested in listening to events published by the publisher. The interface is quite simple, and it can be used to publish any type of event, be it a UI event (like a mouse-click) or a data event. Since the interface is simple, let's add an implementation for our custom FibonacciPublisher: public class FibonacciPublisher ...
https://www.oreilly.com/library/view/hands-on-reactive-programming/9781789135794/327926ee-078e-4a79-9c5c-ef861dde73ea.xhtml
CC-MAIN-2020-05
en
refinedweb
This article describes a hotfix package for Microsoft Visual Studio 2015 Update 3. The hotfix contains several fixes to the Visual C++ optimizer and code generator (c2.dll). For more information, see the "Issues that are fixed in this hotfix" section. Resolution How to obtain this hotfixThe following file 2015 Update 3 installed. Restart requirementYou may have to restart the computer after you apply this hotfix if no instance of Visual Studio is being used. Hotfix replacement informationThis hotfix doesn't replace other hotfixes. Issues that are fixed in this hotfix This hotfix contains fixes for the following issues: - Fixes a bug in the optimizer when hoisting a loop-variant conditional store outside a loop: #include <cstdlib>#include <cassert>struct Foo{ int a; int b;};int main(){ Foo foo; foo.b = rand(); int a = rand(); int b = foo.b; for (int i = 0; i < 10; ++i) { int inner_b = b; int inner_a = a; if (inner_b < 0) // This gets incorrect hoisted outside the loop. // A workaround is /d2SSAOptimizer- { inner_a = 0; inner_b = 0; } if (inner_b >= 0) assert(inner_a == a); a += b; } return 0;} - Fix for an integer division bug in the optimizer: #include <stdio.h>volatile int z = 0;int main(){ unsigned a, b; __int64 c; a = z; c = a; c = (c == 0) ? 1LL : c; b = (unsigned)((__int64)a * 100 / c); // Division was made unconditional // incorrectly creating a divide by zero. // A workaround is /d2SSAOptimizer- printf("%u\n", b); return 0;} - Fix for an integer division bug in the optimizer: int checkchecksum(int suly, int ell, int utkodert){ int x; ell -= utkodert; ell %= 103; if (suly - 1) utkodert /= (suly - 1); // Division was made unconditional, // incorrectly creating a potential divide by zero // A workaround is /d2SSAOptimizer- return utkodert;} - Fix for an integer division bug in the optimizer: typedef int unsigned uint;volatile uint out_index = 0;bool data[1] = {true};bool __declspec(noinline) BugSSA(uint index){ uint group = index / 3; if (group == 0) // The division result being compared to zero is replaced // with a range check. We then incorrectly move the division { // to the next use of "group", without accounting for the fact // that "index" has changed. A workaround is /d2SSAOptimizer- return false; } index -= 3; group--; bool ret = data[group]; // crash here out_index = index; out_index = index; return ret;}int main(){ volatile uint i = 3; return BugSSA(i);} - Fix for a crash in the optimizer for division of MIN_INT by -1: int test_div(bool flag, int dummy){ int result = std::numeric_limits<int>::min(); int other; if (flag) other = -1; else other = dummy - 1 - dummy; result /= other; // We would push the division up into both arms of the // if-then-else. One of those divisions would cause the // optimizer to evaluate MIN_INT/-1.This is a crash, similar // to dividing by zero. 
A workaround is /d2SSAOptimizer- return result;} - Fixes a stack overflow in the optimizer: #include <stdio.h>// This example produced a stack overflow in the optimizer, which was // caused by mutually-recursive analysis functions not properly tracking// the number of times they were invocated.// A workaround is /d2SSAOptimizer-typedef unsigned char byte;typedef unsigned long long int uint64;int main(){ const uint64 *sieveData = new uint64[1024]; uint64 bitIndexShift = 0; uint64 curSieveChunk = 0xfafd7bbef7ffffffULL & ~uint64(3); const unsigned int *NumbersCoprimeToModulo = new unsigned int[16]; const unsigned int *PossiblePrimesForModuloPtr = NumbersCoprimeToModulo; while (!curSieveChunk) { curSieveChunk = *(sieveData++); const uint64 NewValues = (16 << 8) | (32 << 24); bitIndexShift = (NewValues >> (bitIndexShift + 8)) & 255; PossiblePrimesForModuloPtr = NumbersCoprimeToModulo + bitIndexShift; } if (PossiblePrimesForModuloPtr - NumbersCoprimeToModulo != 0) { printf("fail"); return 1; } printf("pass"); return 0;} - Fix for incorrect code generation when removing redundant floating point conversions involving convert an int32 parameter to f64: #include <string>__declspec(noinline) void test(int Val){ double Val2 = Val; std::string Str; printf("%lld\n", __int64(Val2)); // We incorrectly try to read 64 bits of // floating point from the parameter area, // instead of reading 32 bits of integer // and converting it. A workaround is // to throw /d2SSAOptimizer-}int main(){ test(6); test(7); return 0;} - Fixes a crash in the optimizer when splitting flow graph nodes in a default statement of a switch block, for more details, see. - Fixes a bug in the loop optimizer where we perform incorrect strength reduction of unsigned secondary induction variables that are multiples of the primary induction variable: #include <assert.h>#include <malloc.h>#include <stdio.h>typedef unsigned int uint;typedef unsigned char byte;/*There is a corner case in the compiler's loop optimizer. The corner case aroseif an induction variable (IV) is a multiple of the loop index, and there's acomparison of the IV to an integer that is less than this multiplication factor.A workaround is to use #pragma optimize("", off) / #pragma optimize("", on)around the affected function.*/int main(int argc, char *argv[]){ const uint w = 256; const uint h = 64; const uint w_new = w >> 1; const uint h_new = h >> 1; const byte *const src = (byte *)malloc(w * h); byte *it_out = (byte *)malloc(w_new * h_new); int fail = 0; for (uint y_new = 0; y_new < h_new; ++y_new) { for (uint x_new = 0; x_new < w_new; ++x_new, ++it_out) { uint x = x_new * 2; uint y = y_new * 2; if (x < 1 || y < 1) { *it_out = 0; continue; } if (x != 0) { } else { fail = 1; } *it_out = 4 * src[y * w + x]; } } if (fail) { printf("fail\n"); return (1); } printf("pass\n"); return (0);} - Offers a workaround for C4883 ": function size suppresses optimizations". When the optimizer sees functions that are massive, it will scale back the optimizations that it performs. It will issue a C4883 warning when it does this, if you have enabled the warning via /we4883. If you want to override this decision to suppress optimizations, throw the /d2OptimizeHugeFunctions switch. - Fixes for a compiler crash in c2!PpCanPropagateForward when you perform optimizations on x64. - Fixes for loop optimizer bugs which involve incorrect induction variable strength reduction. - Fixes for incorrect reordering of expressions which involve reads & writes to memory because of incorrect alias checking. 
- Fixes for a register allocator bug which involves a compiler-generated temporary existing across multiple exception handling regions. Status Microsoft has confirmed that this is a problem in the Microsoft products that are listed in the "Applies to" section.
https://support.microsoft.com/en-us/help/3207317/visual-c-optimizer-fixes-for-visual-studio-2015-update-3
CC-MAIN-2020-05
en
refinedweb
.. What's the purpose of the Neural Network? The neural network implemented in this article should be able to improve web accessibility by choosing an appropriate font color regarding a background color. For instance, the font color on a dark blue background should be white whereas the font color on a light yellow background should be black. You might wonder: Why would you need a neural network for the task in the first place? It isn't too difficult to compute an accessible font color depending on a background color programmatically, is it? I quickly found a solution on Stack Overflow for the problem and adjusted it to my needs to facilitate colors in RGB space. function getAccessibleColor(rgb) {let [ r, g, b ] = rgb;let colors = [r / 255, g / 255, b / 255];let c = colors.map((col) => {if (col <= 0.03928) {return col / 12.92;}return Math.pow((col + 0.055) / 1.055, 2.4);});let L = (0.2126 * c[0]) + (0.7152 * c[1]) + (0.0722 * c[2]);return (L > 0.179)? [ 0, 0, 0 ]: [ 255, 255, 255 ];} The use case of the neural network isn't too valuable for the real world because there is already a programmatic way to solve the problem. There isn't a need to use a machine trained algorithm for it. However, since there is a programmatic approach to solve the problem, it becomes simple to validate the performance of a neural network which might be able to solve the problem for us too. Checkout the animation in the GitHub repository of a learning neural network to get to know how it will perform eventually and what you are going to build in this tutorial. If you are familiar with machine learning, you might have noticed that the task at hand is a classification problem. An algorithm should decide a binary output (font color: white or black) based on an input (background color). Over the course of training the algorithm with a neural network, it will eventually output the correct font colors based on background colors as inputs. The following sections will give you guidance to setup all parts for your neural network from scratch. It is up to you to wire the parts together in your own file/folder setup. But you can consolidate the previous referenced GitHub repository for the implementation details. Data Set Generation in JavaScript A training set in machine learning consists of input data points and output data points (labels). It is used to train the algorithm which will predict the output for new input data points outside of the training set (e.g. test set). During the training phase, the algorithm trained by the neural network adjusts its weights to predict the given labels of the input data points. In conclusion, the trained algorithm is a function which takes a data point as input and approximates the output label. After the algorithm is trained with the help of the neural network, it can output font colors for new background colors which weren't in the training set. Therefore you will use a test set later on. It is used to verify the accuracy of the trained algorithm. Since we are dealing with colors, it isn't difficult to generate a sample data set of input colors for the neural network. 
function generateRandomRgbColors(m) {const rawInputs = [];for (let i = 0; i < m; i++) {rawInputs.push(generateRandomRgbColor());}return rawInputs;}function generateRandomRgbColor() {return [randomIntFromInterval(0, 255),randomIntFromInterval(0, 255),randomIntFromInterval(0, 255),];}function randomIntFromInterval(min, max) {return Math.floor(Math.random() * (max - min + 1) + min);} The generateRandomRgbColors() function creates partial data sets of a given size m. The data points in the data sets are colors in the RGB color space. Each color is represented as a row in a matrix whereas each column is a feature of the color. A feature is either the R, G or B encoded value in the RGB space. The data set hasn't any labels yet, so the training set isn't complete (also called unlabeled training set), because it has only input values but no output values. Since the programmatic approach to generate an accessible font color based on a color is known, an adjusted version of the functionality can be derived to generate the labels for the training set (and the test set later on). The labels are adjusted for a binary classification problem and reflect the colors black and white implicitly in the RGB space. Therefore a label is either [0, 1] for the color black or [ 1, 0 ] for the color white. function getAccessibleColor(rgb) {let [ r, g, b ] = rgb;let color = [r / 255, g / 255, b / 255];let c = color.map((col) => {if (col <= 0.03928) {return col / 12.92;}return Math.pow((col + 0.055) / 1.055, 2.4);});let L = (0.2126 * c[0]) + (0.7152 * c[1]) + (0.0722 * c[2]);return (L > 0.179)? [ 0, 1 ] // black: [ 1, 0 ]; // white} Now you have everything in place to generate random data sets (training set, test set) of (background) colors which are classified either for black or white (font) colors. function generateColorSet(m) {const rawInputs = generateRandomRgbColors(m);const rawTargets = rawInputs.map(getAccessibleColor);return { rawInputs, rawTargets };} Another step to give the underlying algorithm in the neural network a better time is feature scaling. In a simplified version of feature scaling, you want to have the values of your RGB channels between 0 and 1. Since you know about the maximum value, you can simply derive the normalized value for each color channel. function normalizeColor(rgb) {return rgb.map(v => v / 255);} It is up to you to put this functionality in your neural network model or as separate utility function. I will put it in the neural network model in the next step. Setup Phase of a Neural Network Model in JavaScript Now comes the exciting part where you will implement a neural network in JavaScript. Before you can start implementing it, you should install the deeplearn.js library. It is a framework for neural networks in JavaScript. The official pitch for it says: "deeplearn.js is an open-source library that brings performant machine learning building blocks to the web, allowing you to train neural networks in a browser or run pre-trained models in inference mode." In this article, you will train your model yourself and run it in inference mode afterward. There are two major advantages to use the library: First, it uses the GPU of your local machine which accelerates the vector computations in machine learning algorithms. These machine learning computations are similar to graphical computations and thus it is computational efficient to use the GPU instead of the CPU. 
Second, deeplearn.js is structured similar to the popular Tensorflow library which happens to be also developed by Google but is written in Python. So if you want to make the jump to machine learning in Python, deeplearn.js might give you a great gateway to the whole domain in JavaScript. Let's get back to your project. If you have set it up with npm, you can simply install deeplearn.js on the command line. Otherwise check the official documentation of the deeplearn.js project for installation instructions. npm install deeplearn Since I didn't build a vast number of neural networks myself yet, I followed the common practice of architecting the neural network in an object-oriented programming style. In JavaScript, you can use a JavaScript ES6 class to facilitate it. A class gives you the perfect container for your neural network by defining properties and class methods to the specifications of your neural network. For instance, your function to normalize a color could find a spot in the class as method. class ColorAccessibilityModel {normalizeColor(rgb) {return rgb.map(v => v / 255);}}export default ColorAccessibilityModel; Perhaps it is a place for your functions to generate the data sets as well. In my case, I only put the normalization in the class as class method and leave the data set generation outside of the class. You could argue that there are different ways to generate a data set in the future and thus it shouldn't be defined in the neural network model itself. Nevertheless, that's only a implementation detail. The training and inference phase are summarized under the umbrella term session in machine learning. You can setup the session for the neural network in your neural network class. First of all, you can import the NDArrayMathGPU class from deeplearn.js which helps you to perform mathematical calculations on the GPU in a computational efficient way. import {NDArrayMathGPU,} from 'deeplearn';const math = new NDArrayMathGPU();class ColorAccessibilityModel {...}export default ColorAccessibilityModel; Second, declare your class method to setup your session. It takes a training set as argument in its function signature and thus it becomes the perfect consumer for a generated training set from a previous implemented function. In the third step, the session initializes an empty graph. In the next steps, the graph will reflect your architecture of the neural network. It is up to you to define all of its properties. import {Graph,NDArrayMathGPU,} from 'deeplearn';class ColorAccessibilityModel {setupSession(trainingSet) {const graph = new Graph();}..}export default ColorAccessibilityModel; Fourth, you define the shape of your input and output data points for your graph in form of a tensor. A tensor is an array (of arrays) of numbers with a variable number of dimensions. It can be a vector, a matrix or a higher dimensional matrix. The neural network has these tensors as input and output. In our case, there are three input units (one input unit per color channel) and two output units (binary classification, e.g. white and black color). class ColorAccessibilityModel {inputTensor;targetTensor;setupSession(trainingSet) {const graph = new Graph();this.inputTensor = graph.placeholder('input RGB value', [3]);this.targetTensor = graph.placeholder('output classifier', [2]);}...}export default ColorAccessibilityModel; Fifth, a neural network has hidden layers in between. It's the blackbox where the magic happens. 
Basically, the neural network comes up with its own cross computed paramaters which are trained in the session. After all, it is up to you to define the dimension (layer size with each unit size) of the hidden layer(s).,) {...}...}export default ColorAccessibilityModel; Depending on your number of layers, you are altering the graph to span more and more of its layers. The class method which creates the connected layer takes the graph, the mutated connected layer, the index of the new layer and number of units. The layer property of the graph can be used to return a new tensor that is identified by a name.,) {return graph.layers.dense(`fully_connected_${layerIndex}`,inputLayer,units);}...}export default ColorAccessibilityModel; Each neuron in a neural network has to have a defined activation function. It can be a logistic activation function that you might know already from logistic regression and thus it becomes a logistic unit in the neural network. In our case, the neural network uses rectified linear units as default.,activationFunction) {return graph.layers.dense(`fully_connected_${layerIndex}`,inputLayer,units,activationFunction ? activationFunction : (x) => graph.relu(x));}...}export default ColorAccessibilityModel; Sixth, create the layer which outputs the binary classification. It has 2 output units; one for each discrete value (black, white). class ColorAccessibilityModel {inputTensor;targetTensor;prediction);}...}export default ColorAccessibilityModel; Seventh, declare a cost tensor which defines the loss function. In this case, it will be a mean squared error. It optimizes the algorithm that takes the target tensor (labels) of the training set and the predicted tensor from the trained algorithm to evaluate the cost. class ColorAccessibilityModel );}...}export default ColorAccessibilityModel; Last but not least, setup the session with the architected graph. Afterward, you can start to prepare the incoming training set for the upcoming training phase. import {Graph,Session,NDArrayMathGPU,} from 'deeplearn';class ColorAccessibilityModel {session);this.session = new Session(graph, math);this.prepareTrainingSet(trainingSet);}prepareTrainingSet(trainingSet) {...}...}export default ColorAccessibilityModel; The setup isn't done before preparing the training set for the neural network. First, you can support the computation by using a callback function in the GPU performed math context. But it's not mandatory and you could perform the computation without it. import {Graph,Session,NDArrayMathGPU,} from 'deeplearn';const math = new NDArrayMathGPU();class ColorAccessibilityModel {session;inputTensor;targetTensor;predictionTensor;costTensor;...prepareTrainingSet(trainingSet) {math.scope(() => {...});}...}export default ColorAccessibilityModel; Second, you can destructure the input and output (labels, also called targets) from the training set to map them into a readable format for the neural network. The mathematical computations in deeplearn.js use their in-house NDArrays. After all, you can imagine them as simple array in array matrices or vectors. In addition, the colors from the input array are normalized to improve the performance of the neural network. import {Array1D));});}...}export default ColorAccessibilityModel; Third, the input and target arrays are shuffled. The shuffler provided by deeplearn.js keeps both arrays in sync when shuffling them. The shuffle happens for each training iteration to feed different inputs as batches to the neural network. 
The whole shuffling process improves the trained algorithm, because it is more likely to make generalizations by avoiding over-fitting. import {Array1D,InCPUMemoryShuffledInputProviderBuilder();});}...}export default ColorAccessibilityModel; Last but not least, the feed entries are the ultimate input for the feedforward algorithm of the neural network in the training phase. It matches data and tensors (which were defined by their shapes in the setup phase). import {Array1D,InCPUMemoryShuffledInputProviderBuilderGraph,Session,NDArrayMathGPU,} from 'deeplearn';const math = new NDArrayMathGPU();class ColorAccessibilityModel {session;inputTensor;targetTensor;predictionTensor;costTensor;feedEntries;..();this.feedEntries = [{ tensor: this.inputTensor, data: inputProvider },{ tensor: this.targetTensor, data: targetProvider },];});}...}export default ColorAccessibilityModel; The setup phase of the neural network is finished. The neural network is implemented with all its layers and units. Moreover the training set is prepared for training. Only two hyperparameters are missing to configure the high level behaviour of the neural network. These are used in the next phase: the training phase. import {Array1D,InCPUMemoryShuffledInputProviderBuilder,Graph,Session,SGDOptimizer,NDArrayMathGPU,} from 'deeplearn';const math = new NDArrayMathGPU();class ColorAccessibilityModel {session;optimizer;batchSize = 300;initialLearningRate = 0.06;inputTensor;targetTensor;predictionTensor;costTensor;feedEntries;constructor() {this.optimizer = new SGDOptimizer(this.initialLearningRate);}...}export default ColorAccessibilityModel; The first parameter is the learning rate. You might remember it from linear or logistic regression with gradient descent. It determines how fast the algorithm converges to minimize the cost. So one could assume it should be high. But it mustn't be too high. Otherwise gradient descent never converges because it cannot find a local optima. The second parameter is the batch size. It defines how many data points of the training set are passed through the neural network in one epoch (iteration). An epoch includes one forward pass and one backward pass of one batch of data points. There are two advantages to training a neural network with batches. First, it is not as computational intensive because the algorithm is trained with less data points in memory. Second, a neural network trains faster with batches because the weights are adjusted with each batch of data points in an epoch rather than the whole training set going through it. Training Phase The setup phase is finished. Next comes the training phases. It doesn't need too much implementation anymore, because all the cornerstones were defined in the setup phase. First of all, the training phase can be defined in a class method. It is executed again in the math context of deeplearn.js. In addition, it uses all the predefined properties of the neural network instance to train the algorithm. class ColorAccessibilityModel {...train() {math.scope(() => {this.session.train(this.costTensor,this.feedEntries,this.batchSize,this.optimizer);});}}export default ColorAccessibilityModel; The train method is only one epoch of the neural network training. So when it is called from outside, it has to be called iteratively. Moreover it trains only one batch. In order to train the algorithm for multiple batches, you have to run multiple iterations of the train method again. That's it for a basic training phase. 
But it can be improved by adjusting the learning rate over time. The learning rate can be high in the beginning, but when the algorithm converges with each step it takes, the learning rate could be decreased. class ColorAccessibilityModel {...train(step) {let learningRate = this.initialLearningRate * Math.pow(0.90, Math.floor(step / 50));this.optimizer.setLearningRate(learningRate);math.scope(() => {this.session.train(this.costTensor,this.feedEntries,this.batchSize,this.optimizer);}}}export default ColorAccessibilityModel; In our case, the learning rate decreases by 10% every 50 steps. Next, it would be interesting to get the cost in the training phase to verify that it decreases over time. It could be simply returned with each iteration, but that's leads to computational inefficiency. Every time the cost is requested from the neural network, it has to access the GPU to return it. Therefore, we only access the cost once in a while to verify that it's decreasing. If the cost is not requested, the cost reduction constant for the training is defined with NONE (which was the default before). import {Array1D,InCPUMemoryShuffledInputProviderBuilder,Graph,Session,SGDOptimizer,NDArrayMathGPU,CostReduction,} from 'deeplearn';class ColorAccessibilityModel {...train(step, computeCost) {let learningRate = this.initialLearningRate * Math.pow(0.90, Math.floor(step / 50));this.optimizer.setLearningRate(learningRate);let costValue;math.scope(() => {const cost = this.session.train(this.costTensor,this.feedEntries,this.batchSize,this.optimizer,computeCost ? CostReduction.MEAN : CostReduction.NONE,);if (computeCost) {costValue = cost.get();}});return costValue;}}export default ColorAccessibilityModel; Finally, that's it for the training phase. Now it needs only to be executed iteratively from the outside after the session setup with the training set. The outside execution can decide on a condition if the train method should return the cost. Inference Phase The final stage is the inference phase where a test set is used to validate the performance of the trained algorithm. The input is a color in RGB space for the background color and as output it should predict the classifier [ 0, 1 ] or [ 1, 0 ] for either black or white for the font color. Since the input data points were normalized, don't forget to normalize the color in this step as well. class ColorAccessibilityModel {...predict(rgb) {let classifier = [];math.scope(() => {const mapping = [{tensor: this.inputTensor,data: Array1D.new(this.normalizeColor(rgb)),}];classifier = this.session.eval(this.predictionTensor, mapping).getValues();});return [ ...classifier ];}}export default ColorAccessibilityModel; The method run the performance critical parts in the math context again. There it needs to define a mapping that will end up as input for the session evaluation. Keep in mind, that the predict method doesn't need to run strictly after the training phase. It can be used during the training phase to output validations of the test set. Ultimately the neural network is implemented for setup, training and inference phase. Visualize a learning Neural Network in JavaScript Now it's about time using the neural network to train it with a training set in the training phase and validate the predictions in the inference phase with a test set. In its simplest form, you would set up the neural network, run the training phase with a training set, validate over the time of training the minimizing cost and finally predict a couple of data points with a test set. 
All of it would happen on the developer console in the web browser with a couple of console.log statements. However, since the neural network is about color prediction and deeplearn.js runs in the browser anyway, it would be much more enjoyable to visualize the training phase and inference phase of the neural network. At this point, you can decide on your own how to visualize the phases of your performing neural network. It can be plain JavaScript by using a canvas and the requestAnimationFrame API. But in the case of this article, I will demonstrate it by using React.js, because I write about it on my blog as well. So after setting up the project with create-react-app, the App component will be our entry point for the visualization. First of all, import the neural network class and the functions to generate the data sets from your files. Moreover, add a couple of constants for the training set size, test set sizes and number of training iterations. import React, { Component } from 'react';import './App.css';import generateColorSet from './data';import ColorAccessibilityModel from './neuralNetwork';const ITERATIONS = 750;const TRAINING_SET_SIZE = 1500;const TEST_SET_SIZE = 10;class App extends Component {...}export default App; In the constructor of the App component, generate the data sets (training set, test set), setup the neural network session by passing in the training set, and define the initial local state of the component. Over the course of the training phase, the value for the cost and number of iterations will be displayed somewhere, so these are the properties which end up in the component state. import React, { Component } from 'react';import './App.css';import generateColorSet from './data';import ColorAccessibilityModel from './neuralNetwork';const ITERATIONS = 750;const TRAINING_SET_SIZE = 1500;const TEST_SET_SIZE = 10;class App extends Component {testSet;trainingSet;colorAccessibilityModel;constructor() {super();this.testSet = generateColorSet(TEST_SET_SIZE);this.trainingSet = generateColorSet(TRAINING_SET_SIZE);this.colorAccessibilityModel = new ColorAccessibilityModel();this.colorAccessibilityModel.setupSession(this.trainingSet);this.state = {currentIteration: 0,cost: -42,};}...}export default App; Next, after setting up the session of the neural network in the constructor, you could train the neural network iteratively. In a naive approach you would only need a for loop in a mounting component lifecycle hook of React. class App extends Component {...componentDidMount () {for (let i = 0; i <= ITERATIONS; i++) {this.colorAccessibilityModel.train(i);}};}export default App; However, it wouldn't work to render an output during the training phase in React, because the component couldn't re-render while the neural network blocks the single JavaScript thread. That's where requestAnimationFrame can be used in React. Rather than defining a for loop statement ourselves, each requested animation frame of the browser can be used to run exactly one training iteration. class App extends Component {...componentDidMount () {requestAnimationFrame(this.tick);};tick = () => {this.setState((state) => ({currentIteration: state.currentIteration + 1}));if (this.state.currentIteration < ITERATIONS) {requestAnimationFrame(this.tick);this.colorAccessibilityModel.train(this.state.currentIteration);}};}export default App; In addition, the cost can be computed every 5th step. As mentioned, the GPU needs to be accessed to retrieve the cost. 
Thus it should be avoided to train the neural network faster. class App extends Component {...componentDidMount () {requestAnimationFrame(this.tick);};tick = () => {this.setState((state) => ({currentIteration: state.currentIteration + 1}));if (this.state.currentIteration < ITERATIONS) {requestAnimationFrame(this.tick);let computeCost = !(this.state.currentIteration % 5);let cost = this.colorAccessibilityModel.train(this.state.currentIteration,computeCost);if (cost > 0) {this.setState(() => ({ cost }));}}};}export default App; The training phase is running once the component mounted. Now it is about rendering the test set with the programmatically computed output and the predicted output. Over time, the predicted output should be the same as the programmatically computed output. The training set itself is never visualized. class App extends Component {...render() {const { currentIteration, cost } = this.state;return (<div className="app"><div><h1>Neural Network for Font Color Accessibility</h1><p>Iterations: {currentIteration}</p><p>Cost: {cost}</p></div><div className="content"><div className="content-item"><ActualTabletestSet={this.testSet}/></div><div className="content-item"><InferenceTablemodel={this.colorAccessibilityModel}testSet={this.testSet}/></div></div></div>);}}const ActualTable = ({ testSet }) =><div><p>Programmatically Computed</p></div>const InferenceTable = ({ testSet, model }) =><div><p>Neural Network Computed</p></div>export default App; The actual table iterates over the size of the test set size to display each color. The test set has the input colors (background colors) and output colors (font colors). Since the output colors are classified into black [ 0, 1 ] and white [ 1, 0 ] vectors when a data set is generated, they need to be transformed into actual colors again. const ActualTable = ({ testSet }) =><div><p>Programmatically Computed</p>{Array(TEST_SET_SIZE).fill(0).map((v, i) =><ColorBoxkey={i}rgbInput={testSet.rawInputs[i]}rgbTarget={fromClassifierToRgb(testSet.rawTargets[i])}/>)}</div>const fromClassifierToRgb = (classifier) =>classifier[0] > classifier[1]? [ 255, 255, 255 ]: [ 0, 0, 0 ] The ColorBox component is a generic component which takes the input color (background color) and target color (font color). It simply displays a rectangle with the input color style, the RGB code of the input color as string and styles the font of the RGB code into the given target color. const ColorBox = ({ rgbInput, rgbTarget }) =><div className="color-box" style={{ backgroundColor: getRgbStyle(rgbInput) }}><span style={{ color: getRgbStyle(rgbTarget) }}><RgbString rgb={rgbInput} /></span></div>const RgbString = ({ rgb }) =>`rgb(${rgb.toString()})`const getRgbStyle = (rgb) =>`rgb(${rgb[0]}, ${rgb[1]}, ${rgb[2]})` Last but not least, the exciting part of visualizing the predicted colors in the inference table. It uses the color box as well, but gives a different set of props into it. const InferenceTable = ({ testSet, model }) =><div><p>Neural Network Computed</p>{Array(TEST_SET_SIZE).fill(0).map((v, i) =><ColorBoxkey={i}rgbInput={testSet.rawInputs[i]}rgbTarget={fromClassifierToRgb(model.predict(testSet.rawInputs[i]))}/>)}</div> The input color is still the color defined in the test set. But the target color isn't the target color from the test set. The crucial part is that the target color is predicted in this component by using the neural network's predict method. It takes the input color and should predict the target color over the course of the training phase. 
Finally, when you start your application, you should see the neural network in action. Whereas the actual table uses the fixed test set from the beginning, the inference table should change its font colors during the training phase. In fact, while the ActualTable component shows the actual test set, the InferenceTable shows the input data points of the test set, but the predicted output by using the neural network. The React rendered part can be seen in the GitHub repository animation too. The article has shown you how deeplearn.js can be used to build neural networks in JavaScript for machine learning. If you have any recommendation for improvements, please leave a comment below. In addition, I am curious whether you are interested in the crossover of machine learning and JavaScript. If that's is the case, I would write more about it. Furthermore, I would love to get more into the topic and I am open for opportunities in the field of machine learning. At the moment, I apply my learnings in JavaScript, but I am so keen to get into Python at some point as well. So if you know about any opportunities in the field, please reach out to me :-)
https://www.robinwieruch.de/neural-networks-deeplearnjs-javascript/
CC-MAIN-2020-05
en
refinedweb
class #include <Magnum/Platform/Sdl2Application.h> InputEvent Base for input events. Contents Derived classes - class KeyEvent - Key event. - class MouseEvent - Mouse event. - class MouseMoveEvent - Mouse move event. - class MouseScrollEvent - Mouse scroll event. Public types Constructors, destructors, conversion operators - InputEvent(const InputEvent&) deleted - Copying is not allowed. - InputEvent(InputEvent&&) deleted - Moving is not allowed. Public functions - auto operator=(const InputEvent&) -> InputEvent& deleted - Copying is not allowed. - auto operator=(InputEvent&&) -> InputEvent& deleted - Moving is not allowed. - auto isAccepted() const -> bool - Whether the event is accepted. - void setAccepted(bool accepted = true) - Set event as accepted. - auto event() const -> const SDL_Event& - Underlying SDL event. Enum documentation Typedef documentation typedef Containers:: EnumSet<Modifier> Magnum:: Platform:: Sdl2Application:: InputEvent:: Modifiers Set of modifiers. Function documentation void Magnum:: Platform:: Sdl2. const SDL_Event& Magnum:: Platform:: Sdl2Application:: InputEvent:: event() const Underlying SDL event. Of type SDL_KEYDOWN / SDL_KEYUP for KeyEvent, SDL_MOUSEBUTTONUP / SDL_MOUSEBUTTONDOWN for MouseEvent, SDL_MOUSEWHEEL for MouseScrollEvent and SDL_MOUSEMOTION for MouseMoveEvent.
https://doc.magnum.graphics/magnum/classMagnum_1_1Platform_1_1Sdl2Application_1_1InputEvent.html
CC-MAIN-2020-05
en
refinedweb
The month-long political impasse ended dramatically with Devendra Fadnavis returning as the Chief Minister of Maharashtra, with Nationalist Congress Party's (NCP's) Ajit Pawar as Deputy CM. BJP has 105 MLAs and NCP 54 in the 288-member Legislative Assembly. Our correspondents Sharad Vyas, Tanvi Deshpande and Alok Despande from Mumbai, and Nistula Hebbar, Sobhana K. Nair, Sandeep Phukan and Vijaita Singh from New Delhi report. Here are the live updates: Supreme Court to hear Sena-NCP-Congress plea tomorrow Supreme Court to hear on November 24 the Shiv Sena-NCP-Congress combine’s plea challenging the Maharashtra Governor’s decision to swear-in Mr. Fadnavis as Chief Minister. Hearing is scheduled to commence at 11.30 am, says lawyer of the three parties Sunil Fernandes. NCP MLAs moved to a hotel in buses NCP MLAs meeting is over. They are being moved to a hotel in buses. According to Nawab Malik, the party spokesperson, the MLAs will stay in Mumbai. “The BJP-led government in Maharashtra will be defeated in the Assembly Speaker’s election,” Mr. Malik says. “The government has been given time till November 30. We will defeat them in the Speaker’s election. We are sure the Shiv Sena-Congress-NCP will form the government.” Five MLAs not in contact, says NCP A meeting of NCP MLAs is on at the Y.B. Chavan Centre. According to party spokesman Nawab Malik, 43 out of the 54 MLAs are present at the meeting. Out of the 11 MLAs who were absent, six are in contact. Five MLAs are not in contact, says Mr. Malik. Party chief Sharad Pawar and other leaders during the NCP Legislature Party meeting in Mumbai on November 23, 2019. Photo: Twitter/@NCPspeaks Congress-NCP-Sena combine moves Supreme Court The Congress-NCP-Sena combine has moved the Supreme Court in a joint petition against the “illegal usurpation of power” by the BJP in Maharashtra in a “hurried and makeshift” swearing-in ceremony which installed Mr. Fadnavis as Chief Minister. "There is nothing in public domain in what manner Shri Devendra Fadnavis and/or the BJP had staked claim power between the intervening night of 22.11.2019 and 23.11.2019. Further there is no material in the public domain to show that Shri Devendra Fadnavis had carried letters of support of 144 MLAs (which in any event was not legally possible to do). The Petitioners categorically assert that all the MLA’s of the Shiv Sena, NCP and Congress are completely and solidly with the alliance except for Shri Ajit Pawar," the petition contended. The combine wants the Supreme Court to hear the petition tonight itself and seeks the quashing of the Governor’s decision to invite Mr. Fadnavis to form the government. The petition has been formally filed at 8.23 p.m. However, the a midnight hearing is unlikely as Chief Justice of India S.A. Bobde is in Tirupati. Senior lawyers such as Kapil Sibal and A.M. Singhvi are also out of Delhi. Ajit Pawar removed as NCP Legislature Party leader The NCP announced that Sharad Pawar loyalist Jayant Patil has been elected as the party's Legislature leader. It also announced that Ajit Pawar has been removed from the post. The NCP statement removing Ajit Pawar as Legislature Party leader. The resolution passed at the meeting also says that Mr. Ajit Pawar’s right to issue a whip were also revoked. The party has also authorised Mr. Sharad Pawar and Mr. Jayant Patil to decide its stand in light of the developments. NCP chief spokesperson Nawab Malik says the list of NCP MLAs submitted by Mr. 
Ajit Pawar to the Governor before taking oath was actually a letter signed by the party MLAs when they had attended a meeting earlier. Some of the MLAs, who were present with Ajit Pawar during the oath-taking ceremony, later pledged their loyalty to Mr. Sharad Pawar. Meanwhile, sources say Mr. Ajit Pawar’s primary membership of the party remains intact. “Efforts are on to convince him to return to the party’s fold,” the sources add. Ajit Pawar is under tremendous pressure, says Sanjay Raut "Ajit Pawar will not do this unless there is tremendous pressure," said senior Shiv Sena leader Sanjay Raut. "They (BJP and Ajit Pawar) do not have majority. Even today, we (Sena, Congress, NCP) have majority," said Mr. Raut Shiv Sena approaches Supreme Court After Shiv Sena was sidelined by BJP on Saturday morning, it filed a petition before the Supreme Court seeking a direction to hold a floor test on November 24. The party has filed a plea stating Maharashtra Governor's action was arbitrary and malafide in action in 'purportedly inviting' the State BJP led Devendra Fadnavis to form the Government and allow Mr. Fadnavis to take oath as CM. The petition reads, "He (Mr. Fadnavis) is well short of majority mark of 145 in State Assembly by 40 MLAs and had on November 10 turned down the Governor's invitation to form a Government for lack of numbers, despite having ample time to garner support." The party has urged the court to issue directions in terms of summoning a special session of the 14th Maharashtra Legislative Assembly with the only agenda of administering oath of the MLAs by holding a floor test. -- Sonam Saigal Dhananjay Munde attends NCP Legislature Party meeting Dhananjay Munde, who is rumoured to be the bridge between NCP’s Ajit Pawar and the BJP, reaches YB Chavan centre to attend the NCP Legislature Party meeting. — reports Alok Deshpande Revoking of President's Rule valid, say experts The President decides to revoke President's Rule only after getting a recommendation from the Cabinet. However, the Cabinet didn't meet yesterday or today. Is this legally valid? Yes, according to experts. Rule 12 of Transaction of Business Rules, 1961 was invoked this morning. According to the Rules, "The Prime Minister may, in any case or classes of cases permit or condone a departure from these rules, to the extent he deems necessary," a senior government official told The Hindu. - reports Vijaita Singh BJP formed govt with support of 170 MLAs: Mungantiwar The BJP in Maharashtra has support of 170 MLAs in the 288-member House, senior party leader Sudhir Mungantiwar claimed in Chandipur. “Ajit Pawar is the NCP legislature party leader and it means everyone has given support to the BJP”, Mr. Mungantiwar said amidst confusion over the exact number of MLAs of the Nationalist Congress Party (NCP) supporting the fledgling government. “The BJP and its ally were given a clear mandate but the ally disrespected it. In order to respect the mandate given to us, we formed a government today with the support of 170 MLAs,” Mr. Mungantiwar, who represents Ballarshah seat, told reporters at his office.— PTI We will give a stable government in Maharashtra, says Devendra Fadnavis After being sworn-in as the Chief Minister once again, Devendra Fadnavis held a press conference. "Under the leadership of Prime Minister Narendra Modi we have once again managed to install a BJP-led government in the State," he says. "I welcome Ajit Pawar and his MLAs into our government. 
We will give a stable government to the people of Maharashtra for the next five years and work for poor and farmers. "It is a fact we have lost one friend (Sena) but found another in (Ajit Pawar). I want to assure people of state as long as Mr. Modi is there we will have a stable government in Maharashtra." November 23 shall go down as black chapter of Indian history, says Surjewala At a press conference, Congress spokesperson Randeep Surjewala said "November 23 shall go down in history of India as a black chapter when an illegitimate government was constituted at the instance of a Home Minister who considers constitution as his personal instrument, after scaring Ajit Pawar about jail." Mr. Surjewala also said "They have betrayed people of Maharashtra." He said "Prior to the elections Devendra Fadnavis said that Ajit Pawar will be sent to Arthur Road jail for Rs 72000 crore scam. Instead they sent him to Maharashtra secretariat as deputy Chief Minister. That's why they say "Modi hai toh Mumkin hai". Mr. Surjewala said "In history of independent India when at the dead of night constitution was throttled, rule of law was mowed." He said "Governor acted as a hitman for Home Minister Amit Shah to sell the mandate of people." Mr. Surjewala said "Sharad Pawar has already cleared his stand. After his clarification there is no scope of doubt. Till today the NCP has not been split as per the laid down laws." He said "Shiv Sena approached us on November 11, Congress has been the quickest to ensure that this alliance should be constituted. Whenever Mr Pawar held a meeting Congress representative was present. There was expeditious action despite initial reservations on day one." BJP will prove the majority: Ravi Shankar Prasad Defending BJP's move to join hands with Ajit Pawar, BJP leader and Union Minister Ravi Shankar Prasad blamed the NCP and Congress for the mess in Maharashtra. The NCP and Congress were trying to capture the financial capital, Mr. Prasad said, adding that the BJP had the people's mandate to form the government since it was the largest legislature party in the State. Mr. Prasad said they had the numbers and will prove it on the floor of the House, without saying how many MLAs supported the government. Who is Dhananjay Munde? The name of Dhananjay Munde came into the picture after three NCP MLAs, who attended the swearing-in ceremony, told mediapersons they were ferried to Raj Bhavan from Mr. Munde's bungalow. Rajendra Shingne, the NCP MLA who was present at the early-morning swearing-in ceremony, where BJP leader Devendra Fadnavis was sworn in as the Chief Minister of the State, claimed he was not aware of the developments. "We were called at Dhananjay Munde bungalow by Ajit Pawar today morning. We were asked to come with him to Raj Bhavan and the swearing-in ceremony started. We were hurt. I have no intention to betray Sharad Pawar," Mr. Shingne said. NCP MLAs Sandeep Kshirsagar and Sunil Bhusara too said they were with Mr. Sharad Pawar. Dhananjay Munde is an NCP leader and an MLA from Parli constituency. He is the nephew of the late veteran BJP leader and former Union Minister Gopinath Munde. He joined the NCP in 2012 after quitting the BJP. Mr. Munde is the Leader of Opposition in the Maharashtra Legislative Council. In the recently concluded Maharashtra Assembly elections he defeated his cousin Pankaja by over 30,000 votes.
Ajit Pawar was blackmailed into joining hands with BJP: Raut Shiv Sena leader Sanjay Raut on Saturday claimed that NCP’s Ajit Pawar was “blackmailed” into joining hands with the BJP to form government in Maharashtra. Of the eight MLAs who had gone with Ajit Pawar to Raj Bhavan, five have come back. They were told a lie, put into a car, almost like a kidnapping, the Shiv Sena leader said. “NCP leader Dhananjay Munde has been contacted. Ajit Pawar may also return (to NCP fold). We have information how Ajit Pawar has been blackmailed and will expose this soon,” Mr. Raut told reporters. Munde, who is said to have been in touch with Fadnavis since the last few days, had gone incommunicado. Sharad Pawar should join NDA, will be rewarded: Athawale Squarely blaming the Shiv Sena for the dramatic turn of events in Maharashtra, Union minister Ramdas Athawale urged NCP chief Sharad Pawar to walk over to the NDA camp, hinting that he might be rewarded with a plum portfolio at the Centre. . Mr. Athawale, who heads the RPI a Maharasthra-based pro-Dalit outfit is the Minister of State for Social Justice and Empowerment in the Narendra Modi government. Disgusting, says DMK The opposition DMK in Tamil Nadu on Saturday described the political developments in Maharashtra, where BJP formed government with the support of NCP supremo Sharad Pawar’s nephew Ajit Pawar, as “disgusting.” “What can one call the politically disgusting (development) in Maharashtra... is it indecency or ugliness..what can it be compared with,” the DMK leader M.K. Stalin said in a Facebook post. “One feels even calling this a murder of democracy will become an understatement--if it will minimise the gravity of the situation... The face of Indian democracy has been blackened. This is a big shame,” he added. We will tackle it politically and legally, says Ahmed Patel Senior Congress leader Ahmed Patel, who has rushed to Mumbai, told the media that his party is with NCP and Shiv Sena. Condemning the government formation and calling it "anti-democratic," Mr. Patel said: "The secret swearing in ceremony of Devendra Fadnavis shows that something is wrong in this. This is anti-constitutional." "The process of government formation in Maharashtra with Congress , NCP and Sena was in progress. The talks between Cong and NcP were positive. Even the three party meet yesterday was positive as well. "All three parties are together. We will defeat the BJP and those who are forming the govt with the BJP," he added. NCP Legislature Party will be meeting at 4 pm. Show your strength by taking to the streets, Digvijaya to NCP-Congress-Shiv Sena combine The Shiv Sena, the NCP and the Congress should show their strength by taking to the streets, Congress MP and former Madhya Pradesh Chief Minister Digvijaya Singh has said. "Let's see who the people of Maharashtra are with. This is a question of existence for all the three parties. Especially, it's a matter of prestige for the Thackeray family," he tweeted. Mr. Singh asked the Maharashtra Governor if he had received a letter of support from the NCP. "The Governor should have asked the NCP MLAs to take oath only after receiving a letter from the NCP president Jayant Patil," he said. We don't play night games: Thackeray Shiv Sena chief Uddhav Thackeray and NCP chief Sharad Pawar during a press meet in Mumbai on Saturday. | Photo Credit: Vivek Bendre Shiv Sena chief Uddhav Thackeray says, "this is shameful what is happening in the name of democracy." We don't play night games. We don't split parties, Mr. 
Thackeray says. This is a surgical strike on Maharashtra. We will not tolerate. We are together and we will remain together, he adds. I don't think elections are even needed. Everyone knows what Chhatrapati Shivaji Maharaj did when betrayed and attacked from the back, Mr. Thackeray says. They don't have the numbers: Sharad Pawar In a joint press conference with Sena chief Uddhav Thackeray, NCP chief Sharad Pawar says the new government won't be able to prove its majority. "They don't have the numbers." "Ajit Pawar has taken two lists of MLAs from party office claiming those were support letters. Those signs were for party's internal programme. It had nothing to do with govt formation," the senior Pawar says. "If this has been done, then the Governor too has been duped," he adds. We are together, we will remain so. We are ready to face any difficulty. We will face it collectively, Mr. Pawar says. No intention to hurt NCP, says party MLA who took part in swearing-in ceremony Rajendra Shingne, the NCP MLA who was with Ajit Pawar, came to Sharad Pawar's press meet. "We were called at Dhananjay Munde bungalow by Ajit Pawar today morning. We were asked to come with them to Raj Bhavan and the swearing in ceremony started. We were hurt. I have no intention to betray Sharad Pawar. I am with NCP," Mr. Shingne said. Party was not aware of Ajit's decision: Sharad Pawar NCP chief Sharad Pawar and Shiv Sena chief Uddhav Thackeray are addressing a joint press meet at Y.B. Chavan Centre. Mr. Pawar says: "Some members of NCP had gone with Ajit Pawar to Raj Bhavan without informing us or the party, but one of the MLAs called us up and informed us this was happening. "This decision to go to Raj Bhavan was Ajit Pawar's own decision and those who went with him may or may not be aware of the law related to defecting. "Whatever happens from here, it would be hard for the BJP government to prove majority on the floor. We are together and we remain together with Shiv Sena from here." Political immorality of the BJP has reached its nadir: CPI (M) The political immorality of the BJP has reached its nadir, the CPI(M) said, reacting sharply to the developments in Maharashtra. "The clandestine manner in which the Chief Minister and Deputy Chief Minister of Maharashtra have been sworn in shows the extent to which the BJP can stoop to grab power," the CPI(M) Polit Bureau said in a statement. BJP did similar political machinations, the party said, in Goa, Karnataka and north eastern states. "It is unfortunate that both the Constitutional authorities – President's office and the Governor's office – have been misused to achieve their political purpose," the statement added. OPS greets Fadnavis, Ajit Pawar AIADMK Coordinator and Tamil Nadu Deputy Chief Minister O. Panneerselvam greeted BJP leader Devendra Fadnavis and NCP's Ajit Pawar for taking over as Chief Minister and Deputy Chief Minister of Maharashtra. "I extend my greetings (to both) to work towards a sustained growth of Maharashtra," he said in a tweet. The AIADMK is a constituent of the BJP-headed National Democratic Alliance (NDA). Sena-NCP-Congress alliance intact, says Nawab Malik The Shiv Sena-NCP-Congress alliance is intact, NCP spokesperson Nawab Malik said on Saturday, hours after his party's legislature unit leader Ajit Pawar joined hands with the BJP to form government in Maharashtra. Malik said some MLAs were deluded into attending the ceremony in which Mr. Ajit Pawar took oath as deputy CM, along with Chief Minister Devendra Fadnavis.
“Many of those MLAs who were deluded into attending the oath taking ceremony met Sharad Pawar saheb and will be present in his Press conference here later today,” Mr. Malik said. “The Shiv Sena-Congress-NCP MLAs are together,” Mr. Malik told reporters. Kerala NCP says it is with Sharad Pawar Meanwhile, the Kerala unit of NCP has said it is with Sharad Pawar. “We Kerala leaders and workers of NCP stand solidly behind Sharad Pawar-ji,” NCP All India General Secretary T.P. Peethambaran Master. Senior NCP leader and Kerala MLA Mani C Kappen urged the central leadership to initiate steps to disqualify Ajit Pawar and his supporters on the basis of anti-defection law. The NCP is part of ruling Left Democratic Front in Kerala. - PTI Shiv Sena chief Uddhav Thackeray has left from Matoshree in Mumbai. He will be meeting NCP chief Sharad Pawar and both the leaders are expected to address a joint press meet at Y B Chavan Centre. NCP MP and Sharad Pawar's daughter Supriya Sule arrives at Y.B. Chavan centre at Nariman point in Mumbai. Unexpected, says Eknath Shinde Shiv Sena legislative party leader Eknath Shinde said said that the party will reveal its position at 12.30 p.m. "This was unexpected," he said, adding that nobody knew about the development. "Uddhav Thackeray will mention Shiv Sena's position at 12.30 p.m.," he said. Fadnavis thanks PM Modi Newly sworn-in Chief Minister Devendra Fadnavis thanked Prime Minister Narendra Modi for his "guidance and leadership." "Thank you so much Hon’ble PM Narendra Modi ji.Under your guidance and leadership, we are once again looking forward to take Maharashtra to newer and greater heights," he wrote on Twitter. Illegitimate formation, says Ahmed Patel Senior Congress leader Ahmed Patel referred to the government as an "illegitimate formation." "Illegal and evil manoeuvres take place in the secrecy of midnight. Such was the shame that they had to do the swearing in hiding. This illegitimate formation will self destruct," he wrote on Twitter. Ashok Gehlot questions morality of revoking President’s rule, Maharashtra govt formation Rajasthan Chief Minister Ashok Gehlot questioned the morality of revoking the President’s rule and swearing-in of the Chief Minister and Deputy Chief Minister in Maharashtra. Mr. Gehlot said both the CM and Deputy CM were “guilt conscious” and raised doubt whether they would be able to deliver good governance. Referring to the political development in Maharashtra, Mr. Gehlot asked what morality was there in the sudden revocation of President’s rule and such swearing-in. “Which direction are they taking democracy to?” Mr. Gehlot asked on Twitter, adding that people would teach a lesson to the BJP on an appropriate time. Mr. Gehlot said he was doubtful whether Devendra Fadnavis would be successful as a CM and deliver good governance, adding that the people of Maharashtra would suffer. — PTI Adityanath congratulates Fadnavis, Ajit Pawar on Maha govt formation Uttar Pradesh Chief Minister Yogi Adityanath on Saturday congratulated Devendra Fadnavis and Ajit Pawar. In a tweet, Mr. Adityanath congratulated them and expressed confidence that “under the leadership of Sri Devendra Fadnavis ji and Sri Ajit Pawar ji, Maharashtra will be on the path of development.” — PTI We have the numbers: BJP leader Girish Mahajan BJP leader and former Minister Girish Mahajan said that the government has clear support from several NCP leaders. "We have a clear support from several NCP leaders and MLAs, and our numbers are beyond 170. 
Ajit Pawar is the leader of NCP's legislative party, he came with his supporters and things would be clear soon," he said. As many as 15 MLAs were present with Ajit Pawar during the oath-taking ceremony, he said. He blamed Shiv Sena leader Sanjay Raut for the developments. "The BJP and our leaders blame Sanjay Raut for these developments in the State and we will not forget the language he has used against Prime Minister Narendra Modi. He is responsible for this mess between BJP and Shiv Sena," he said. He blamed Mr. Raut for "cheating the party leaders." "Sanjay Raut has cheated even party leaders like Uddhav Thackeray. We are fed up of his language and tone during the past few days. Enough is enough," said Mr. Mahajan. — as reported by Sharad Vyas Betrayal of people's mandate, says Congress Meanwhile, Congress termed the move as a "betrayal" of people's mandate. "This is backstabbing. What else can you expect from them (NCP)," said a senior Congress leader. The leader said that another meeting was due at 12.30 p.m. today. AICC Secretary Sanjay Dutt said that the role of Governor has come under scanner. "The role of Governor Shri B.S. Koshyari has again come under the scanner for the hush-hush manner BJP CM was sworn in. While he refused to extend time to other parties, BJP wasn't even asked to show proof of majority, before swearing-in. It is invitation to horse-trading," he said. The party is expected to address the media at 11.30 a.m. — as reported by Sandeep Phukan Sena did not respect people's mandate: BJP State president BJP State president Chandrakant Patil blamed the surprise move on Sena's indecision. "BJP and Shiv Sena had a clear mandate but Sena did not respect it. People of this State have watched what went on between the three parties even as farmers were suffering. So Mr. Fadnavis was compelled to take the oath," he said. "I am surprised Sanjay Raut is talking the tone he is talking, he has hurt the State the most," he added. — as reported by Sharad Vyas Ajit Pawar cheated his uncle and the voters: Sanjay Raut Shiv Sena's Sanjay Raut said that Ajit Pawar has "cheated" both his uncle and the voters. "Ajit Pawar was with us all night in the meeting, but his body language was suspicious. He stepped out of the meeting and his phone was switched off," said Mr. Raut. "Sharad Pawar has nothing to do with this, and I can say that with confidence. Ajit Pawar has cheated not only the voters of the State but even his own uncle," he said. "The BJP is behind this. More than NCP's split, it is the BJP which has lured the NCP leader and MLAs using unconstitutional means," he added. Mr. Raut added that the one who can cheat his own uncle will never be forgiven by the people of the State. "The people of the state are watching how money and power was used to effect this coup and this is nothing less than a crime," he said. The Shiv Sena leader also criticised the Governor. "We thought that the Governor came here to protect law and order, but instead he administered oath to BJP leaders keeping people of the State in dark," said Mr. Raut. — as reported by Sharad Vyas Ajit Pawar's personal decision without our support: Sharad Pawar Meanwhile, NCP chief Sharad Pawar has denied any role in the events that transpired. "Ajit Pawar's decision to support the BJP to form the Maharashtra Government is his personal decision and not that of the Nationalist Congress Party (NCP). We place on record that we do not support or endorse this decision of his," he said.
— as reported by Sharad Vyas How long can State like Maharashtra be under President’s Rule, asks Fadnavis Speaking to the media after the surprise swearing-in, Mr. Fadnavis said that the people of Maharashtra had given a clear mandate to the pre-poll alliance of the BJP and Shiv Sena, but that had not been honoured. Maharashtra CM Devendra Fadnavis' family during the oath-taking ceremony at Raj Bhavan. | Photo Credit: Deepak Salvi “How long can a State like Maharashtra be under President’s Rule?,” he said. A senior BJP leader told The Hindu that Mr. Ajit Pawar was the leader of the NCP legislative party and the entire legislative party was with the BJP in government formation. — as reported by Nistula Hebbar President's rule revoked at 5.47 am President Ram Nath Kovind has revoked the proclamation imposing President's rule in Maharashtra at 5.47 am. In a notification issued early on Saturday signed by Union Home Secretary Ajay Kumar Bhalla, the Central rule was revoked. Screenshot of the gazette notification regarding the decision to revoke President's rule in Maharashtra | Photo Credit: Special Arrangement . The notification was digitally signed by the concerned official at 5.47 a.m. on Saturday before being uploaded on the official gazette portal- egazette.nic.in. An official said that a notification can only be digitally signed once the physical copy has been signed by the President of India. The timeline suggests that the Union Home Ministry worked overnight to prepare the notification and the President signed it before 5.47 a.m. on Saturday. Last time when the proclamation imposing Central rule was issued, home ministry officials waited for Mr. Kovind to return from Punjab. After he signed the proclamation, Central rule was formally imposed at around 5.30 p.m. on November 12 even though the Governor sent a report around 12 noon the same day recommending Central rule. — as reported by Vijaita Singh New Maha govt will be committed to state’s development & welfare: Amit Shah BJP president and Union Minister Amit Shah on Saturday expressed confidence that the new Maharashtra government under Devendra Fadnavis will scale new heights of development. Mr. Shah tweeted his congratulation to Fadnavis and NCP leader Ajit Pawar after the oath-taking ceremony. “Hearty congratulations to Shri @Dev_Fadnavis ji on taking oath as Chief Minister of Maharashtra and Shri @AjitPawarSpeaks as Deputy Chief Minister of the State," Mr. Shah wrote. “I am confident that this government will be continuously committed to the development and welfare of Maharashtra and will set new standards of progress in the State,” he added. BJP working president J.P. Nadda also congratulated Mr. Fadnavis and Ajit Pawar. “I Congratulate @Dev_Fadnavis Ji and @AjitPawarSpeaks Ji on taking oath as the CM and Deputy CM of Maharashtra respectively. I am sure that under the guidance of Hon PM @narendramodi Ji, BJP-NCP Gov will take Maharashtra to newer heights,” he wrote. - PTI PM Modi congratulates Devendra Fadnavis, Ajit Pawar. - PTI Please Email the Editor
https://www.thehindu.com/news/cities/mumbai/maharashtra-developments-live-updates-bjp-devendra-fadnavis-ncp-pawar/article30058509.ece
CC-MAIN-2020-05
en
refinedweb
I just downloaded PyCharm community edition: PyCharm Community Edition 2016.1.2 Build #PC-145.844, built on April 8, 2016 JRE: 1.8.0_60-b27 x86 JVM: Java HotSpot(TM) Server VM by Oracle Corporation I'm getting started with learning the IDE. I noticed that this simple one-line program: raise Exception causes the word "Exception" to be underlined in a red squiggly. If I hover over it I see the message: Unresolved reference 'Exception' But, this is a built-in exception. What should I do so PyCharm will not flag built-in exceptions as unresolved? Also, I tried to copy the text of the error message to paste here, but I cannot just highlight it and copy the error message. Is that a limitation? P.S. FWIW There seems to be a problem with Python built-ins in general. e.g. import math triggers an Unresolved Reference error.
https://intellij-support.jetbrains.com/hc/en-us/community/posts/207375155-Built-in-exceptions-causing-Unresolved-Reference-errors
CC-MAIN-2020-05
en
refinedweb
Announcing NgRx Version 7 — Docs, Testing, and more Today we are announcing the version 7 release of the NgRx platform. This release contains bug fixes, new features, and some breaking changes, all aimed at improving the developer experience when using NgRx libraries. There are also updates to the features announced with version 6. Official Docs In case you haven't noticed already, NgRx has an official documentation site! The documentation site is a PWA built with Angular using a similar infrastructure, look, and feel to what you see on the Angular docs. This has been a long-awaited feature, and will give us a space for tutorials, guides, real-world examples, and recipes covering advanced topics. We will continue to add new content to the documentation site that will help developers learn the "why" behind NgRx and the "how" when using it. Contributing is also very simple, as you can edit any page using the pencil icon in the top right corner of each page. This opens a GitHub editor where you can edit the markdown files quickly and submit pull requests. Previews of your changes are generated with your pull request for quicker feedback and turnaround. All of the existing documentation has been moved to the new site, along with a new tutorial, and refreshed guides! Be sure to check out the site. Feel free to file new issues and feature requests for the docs on our GitHub page. Breaking Changes Deprecated in version 6.1, the ofType method on the Actions class in @ngrx/effects was removed. This change was made to align with the pipeable operators introduced in RxJS V6.

Before:

import { Effect, Actions } from '@ngrx/effects';

@Injectable()
export class MyEffects {
  @Effect()
  someEffect$: Observable<Action> = this.actions$
    .ofType(UserActions.LOGIN)
    .pipe(map(() => new AnotherAction()));

  constructor(private actions$: Actions) {}
}

After:

// import ofType operator
import { Effect, Actions, ofType } from '@ngrx/effects';

@Injectable()
export class MyEffects {
  @Effect()
  someEffect$: Observable<Action> = this.actions$.pipe(
    ofType(UserActions.LOGIN), // use the pipeable ofType operator
    map(() => new AnotherAction())
  );

  constructor(private actions$: Actions<UserActions>) {}
}

This, along with the addition of a generic type to the Actions class, provides a more type-safe way to infer the type of the actions in your effects class. We have released a standalone tool to aid you in your migration to the new operator syntax. We are also exploring the option of a migration integrated with the Angular CLI to make this transition easier. As with each major release, we have also updated to the Angular V7 and RxJS 6.3 dependencies. Updating all your packages to the latest version can be done using the Angular CLI command, ng update:

ng update @ngrx/store

There are additional minor breaking changes with the necessary changes listed in our V7 migration guide. New Features We also introduced some new features in version 7 across many of the NgRx libraries, including deeper integration with the Angular CLI, more support for the Redux Devtools Extension features, and testing, just to name a few. Many of these features were requested and developed by community contributors. Some of the highlighted features are listed below. Store Selector Props In V7, we introduced two new features for interacting with the Store during development and testing. Now with selectors, you can use props to provide additional information needed for computing data models.
Whether you need to pass an id from the router, or provide some static data, props give you this functionality with a selector out of the box.

Create a selector that uses props:

export const getCount = createSelector(
  getCounterValue,
  (counter, props) => counter * props.multiply
);

And define the props in the select operator:

this.store.pipe(select(fromRoot.getCount, { multiply: 2 }))

Testing Package In an effort to make testing using Store easier, we have added a testing package for mocking out the state used by the Store. The provideMockStore() method allows you to provide a mock store to push state changes during unit testing as an alternative to providing your reducers. Now you can completely test your Store-aware components and effects in isolation with less setup.

import { Store } from '@ngrx/store';
import { provideMockStore, MockStore } from '@ngrx/store/testing';
import { take } from 'rxjs/operators';

describe('Mock Store', () => {
  let mockStore: MockStore<{ counter1: number, counter2: number }>;

  beforeEach(() => {
    const initialState = { counter1: 0, counter2: 1 };
    TestBed.configureTestingModule({
      providers: [
        provideMockStore({ initialState })
      ],
    });
    mockStore = TestBed.get(Store);
  });

  it('should set the new state', () => {
    mockStore.setState({ counter1: 1, counter2: 2 });
    mockStore.pipe(take(1)).subscribe(state => {
      expect(state.counter1).toBe(1);
    });
  });
});

Effects Lifecycle hooks To give more insight into the Effects lifecycle, we introduced new lifecycle interfaces, OnInitEffects and OnIdentifyEffects. When implemented on your effects class, the OnInitEffects lifecycle method is called after the effects are registered. Here you can provide any initialization logic, and return an action that is dispatched to the store.

import { OnInitEffects } from '@ngrx/effects';

class UserEffects implements OnInitEffects {
  ngrxOnInitEffects(): Action {
    return { type: '[UserEffects]: Init' };
  }
}

The OnIdentifyEffects is a more advanced hook that gives you control over unique instances of your Effects classes. For each unique identifier, a separate effect instance is created.

import { OnIdentifyEffects } from '@ngrx/effects';

class EffectWithIdentifier implements OnIdentifyEffects {
  constructor(private effectIdentifier: string) {}

  ngrxOnIdentifyEffects() {
    return this.effectIdentifier;
  }
}

Entity New Adapter Methods Entity is all about managing collections, so we updated and added adapter methods to use when dealing with collections, including mapping, removing, and deleting entities. The map method is similar to the Array.map method, iterating over your collection, allowing you to make updates to entities in your collection using a predicate function. The updateMany and removeMany methods now also support a predicate method to give you more control over which entities to update and remove based on a predicate function. Router Store New Actions To further integrate with the Angular Router, we have introduced new actions that provide more information about the state of the router in your store. The new actions include when the router has started a new navigation, as well as when a navigation cycle has successfully completed. This gives you access to more of the router state after guards and resolvers have finished running. All these new actions are provided along with more descriptive action types for the existing actions, so you can easily glance at the developer tools to see when the actions are triggered.
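As a rough sketch of how these more descriptive router actions can be consumed in an effect (the effect name and the logging are invented for illustration; it assumes the ROUTER_NAVIGATED action type constant exported by @ngrx/router-store for a completed navigation cycle):

import { Injectable } from '@angular/core';
import { Actions, Effect, ofType } from '@ngrx/effects';
import { ROUTER_NAVIGATED } from '@ngrx/router-store';
import { tap } from 'rxjs/operators';

@Injectable()
export class RouterLoggingEffects {
  // runs only after guards and resolvers have finished for a completed navigation
  @Effect({ dispatch: false })
  navigated$ = this.actions$.pipe(
    ofType(ROUTER_NAVIGATED),
    tap(action => console.log('navigation completed', action))
  );

  constructor(private actions$: Actions) {}
}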
Store Devtools Extension features To enhance developer productivity with the Redux Devtools Extension, we have added support for more extension features, including blacklisting and white-listing of actions, persisting state across page reloads, locking, and pausing of the extension. The new options are fully configurable through the StoreDevtools instrumentation, giving you the flexibility to turn features on and off as needed. To see the full list of new features, look at our release changelog. Updates NgRx Data With V6, we announced that Tim Deschryver, John Papa and Ward Bell were joining the NgRx team delivering high-quality libraries for the platform. We are continuing to work with John and Ward to integrate the ngrx-data library officially into the platform as a first-party package. NgRx Data provides an out of the box solution for managing large sets of entity collections with external APIs, simple configuration, and customization options. Along with integrating NgRx Data, are still working on new APIs for server-side rendering, serialization, and exploring new functionality that will be provided with the Ivy renderer in Angular. We also welcome new ideas, so if you have any, feel free to open an issue. Official NgRx Workshop! The NgRx team will be at ng-conf 2019 in a big, albeit different way this year! Last year, we were part of a complete fair-day track during ng-conf dedicated to NgRx and its ecosystem. This year we will be providing an official workshop! The “NgRx: A Reactive State of Mind” workshop is a full 2-day workshop not just to learn how NgRx works, but to learn the foundations of the framework, best practices, and how to think reactively when using NgRx. We will work through the NgRx architecture in-depth with practical examples for state management, data models using selectors, side effects, and testing. Whether you’re just starting with NgRx, or you’re already using it, there will be something for you at this workshop. Visit the registration page to sign up. Thanks NgRx continues to be a community-driven project. Design, development, documentation, and testing all are done with the help of the community. We would like to thank the contributors to the latest version of NgRx including: kouMatsumoto, UncleJimmy, dummdidumm, kanafgan, tja4472, peterbsmith2, Guillaume de Jabrun, SerkanSipahi, cmckni3, patrickmcd, matepapp, alex-okrushko, rkirov, krzysztof-grzybek, xxluke, TeoTN, thefliik, icepeng, myspivey, wesleygrimes, null-reference, joostme, LukeHowellDev, baumandm, maxime1992, roopkt, bagbag, luchsamapparat, Fost, bcbanes, Teamop, seekheart, adrianfaciu, rafa-as, ngfelixl, sheikalthaf, itayod, karanveersp, and emilio-martinez. If you are interested in contributing, visit our GitHub page and look through our open issues, some marked specifically for new contributors. Along with the community, we cannot sustain NgRx without the help of our past and present backers and sponsors. Whether it be $2, $5, or more, our monthly backers have continued to support our efforts in developing and maintaining the platform. Our bronze level sponsors include Oasis Digital, Lukas Ruebbelke, Alex Okrushko, along with a significant contribution from Deborah Kurata.. NgRx requires significant time and effort that often goes unpaid, and we would like to change that. If you or your company wants to contribute to NgRx as a backer or sponsor, please visit our OpenCollective page for different contribution options or contact us directly for other sponsorship opportunities. 
Follow us on here and on Twitter for the latest updates about the NgRx platform.
https://medium.com/ngrx/announcing-ngrx-version-7-docs-testing-and-more-b43eee2795a4
CC-MAIN-2020-05
en
refinedweb
Flutter SMS Inbox Flutter Android SMS inbox library based on Flutter SMS. Installation Install the library from pub:

dependencies:
  flutter_sms_inbox: ^0.1.0

Querying SMS messages Add the import statement for sms and create an instance of the SmsQuery class:

import 'package:flutter_sms_inbox/flutter_sms_inbox.dart';

void main() {
  SmsQuery query = new SmsQuery();
}

Getting all SMS messages:

List<SmsMessage> messages = await query.getAllSms;
https://pub.dev/documentation/flutter_sms_inbox/latest/
CC-MAIN-2020-05
en
refinedweb
Hi! Until EPiServer adds on-page editing you can add an editor descriptor for all of your PropertyList<T> properties to at least get the multiple popups thing working: [EditorDescriptorRegistration(TargetType = typeof(IList<MyModel>))] public class OnPageCollectionEditorDescriptor : CollectionEditorDescriptor<MyModel> { public override void ModifyMetadata(ExtendedMetadata metadata, IEnumerable<Attribute> attributes) { base.ModifyMetadata(metadata, attributes); metadata.CustomEditorSettings["uiType"] = metadata.ClientEditingClass; metadata.CustomEditorSettings["uiWrapperType"] = UiWrapperType.Floating; } } Then you need to inject some css in edit mode to fix the dialog width: .dijitDialogPaneContentArea .epi-collection-editor { min-width: 640px; } Don't forget to remove the EditorDescriptorAttribute from your property if you go for this workaround. I'm using PropertyList properties, and while I realize that's still in beta I was wondering if there's a way to support on-page editing? If I use @Html.EditAttributes(m => m.MyListOfItems) I do get an overlay, but when I click it I get the "legacy popup" with a simple textbox for the underlying serialized property value. I was expecting a popup with the list editor, but then I realized that would potentially lead to multiple popups when items are added/edited - so I guess that wouldn't be optimal. :) So, I guess my question is two-fold: is there a way to support on-page editing and, if not, are there plans to support it?
https://world.episerver.com/forum/developer-forum/-Episerver-75-CMS/Thread-Container/2016/1/on-page-editing-for-propertylist-properties/
CC-MAIN-2020-05
en
refinedweb
_KERNFS(8) OpenBSD System Manager's Manual MOUNT_KERNFS(8) NAME mount_kernfs - mount the /kern file system SYNOPSIS mount_kernfs [-o options] /kern mount_point DESCRIPTION The mount_kern command attaches an instance of the kernel parameter namespace to the global filesystem namespace. The conventional mount point is /kern. This command is normally executed by mount(8) at boot time. The filesystem includes several regular files which can be read, some of which can also be written. The contents of the files is in a machine-in- dependent format, either a string, or an integer in decimal ASCII. Where numbers are returned, a trailing newline character is also added. The options are as follows: -o Options are specified with a -o flag followed by a comma separat- ed string of options. See the mount(8) man page for possible op- tions and their meanings. FILES boottime the time at which the system was last booted (decimal ASCII). byteorder the _BYTE_ORDER for this kernel. hostname the hostname, with a trailing newline. The hostname can be changed by writing to this file. A trailing newline will be stripped from the hostname being written. domainname the domainname, with a trailing newline, behaves like a host- name. hz the frequency of the system clock (decimal ASCII). loadavg the 1, 5 and 15 minute load average in kernel fixed-point for- mat. The final integer is the fix-point scaling factor. All numbers are in decimal ASCII. machine the architecture this kernel compiled for. model the model of the processor this machine running on. msgbuf the kernel message buffer, also read by syslogd(8), through the log device, and by dmesg(8). ncpu the number of CPUs in this machine. ostype the OS type for this kernel ("OpenBSD"). osrelease the release number of the OS. osrev the revision number of the OS (BSD from <sys/param.h>). pagesize the machine pagesize (decimal ASCII). posix the _POSIX_VERSION for this kernel. physmem the number of pages of physical memory in the machine (decimal ASCII). rootdev the root device. rrootdev the raw root device. time the second and microsecond value of the system clock. Both numbers are in decimal ASCII. usermem the number of pages of physical memory available for user pro- cesses. version the kernel version string. The head line for /etc/motd can be generated by running: ``sed 1q /kern/version'' SEE ALSO mount(2), unmount(2), fstab(5), dmesg(8), mount(8), syslogd(8) CAVEATS This filesystem may not be NFS-exported. HISTORY The mount_kernfs utility first appeared in 4.4BSD. 4.4BSD March 27, 1994 2
http://rocketaware.com/man/man8/mount_kernfs.8.htm
CC-MAIN-2018-34
en
refinedweb
0 Hello, I am taking a computer programming class this semester and working on a project. I am trying to read a text file into a 2D list. The text file contains ten rows and ten columns of numbers, either a 0,1,or 2. When i try to read the file it adds the numbers on to the end of the list...this is what i have def getfromfile(): infile = input("Enter the name of the file with the original state of the network: ") infile = open(infile,"r") OLD =[ [" "]*COLUMN for i in range(ROW)] for line in infile: for i in range(ROW): OLD.append(i) for j in range(COLUMN): OLD.append(j) for i in range(ROW): for j in range(COLUMN): if i == 0: OLD[i][j] = DEAD elif i == 1: OLD[i][j] = CLEAN else: OLD[i][j] = INFECTED The whole idea is that the list is a network of computers that are infected. OLD is the list of computers that is the network and as each time cycle passes, more computers get infected...i am just having a problem getting the "original state" from the text file... THANKS!!!
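A minimal sketch of one way to read such a grid file into a nested list, assuming the ten numbers on each line are whitespace-separated and reusing the ROW/COLUMN and state constants described in the post (names here mirror the post but the exact values are assumptions):

DEAD, CLEAN, INFECTED = 0, 1, 2   # assumed encodings from the post
ROW, COLUMN = 10, 10

def getfromfile():
    name = input("Enter the name of the file with the original state of the network: ")
    network = []
    with open(name, "r") as infile:
        for line in infile:
            # each line holds COLUMN values such as "0 1 2 0 ..."
            row = [int(value) for value in line.split()]
            network.append(row)
    return network

OLD = getfromfile()   # OLD[i][j] is then the state of the computer in row i, column j

If the digits are written without separators (e.g. "0120210012"), replace line.split() with list(line.strip()) and convert each character with int().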
https://www.daniweb.com/programming/software-development/threads/398951/python-school-project-program-help
CC-MAIN-2018-34
en
refinedweb
java.lang.Object PIRL.Utilities.Authentication

public class Authentication

The Authentication class contains a set of functions used for public-private key pair authentication. The typical use for the Authentication functions is with a server that needs to authenticate a client connection. For example: The password, a text string, need not be limited to a single, short word; a pass phrase is likely to be more secure. How the client and server obtain the password is application specific. A client is likely to obtain the password by means of an interactive dialog with the user. A server is likely to obtain the password from a permission protected file. The key pair provides a public key that will be sent to the client and a private key that will be used to decrypt the encrypted password returned by the client. This may be done once at server startup if a single key pair is considered sufficient for authenticating all clients. Alternatively each time a client connection occurs a key pair may be generated in which case each key pair must be associated with the corresponding client. The public key from the key pair is serialized and encoded as a hexadecimal representation string. This is done to ensure that the public key can be sent by the server to the client even when binary data transport is not supported. As with the key pair, this may be done once for use with all clients or for each client connection. When a client connects to the server a handshake occurs during which authentication information is exchanged. The first step of the authentication handshake exchange is the server sending the public key to the client. The public key received from the server is used to encrypt and encode the password. As with the public key, the encoded form of the encrypted password provides a hexadecimal representation string of encryption bytes which can be sent to the server even if binary data transport is not available. This is the second step of the client-server authentication handshake exchange. The server decodes and decrypts the encoded password received from the client using the private key of the key pair. If the resultant string matches the password that the server knows then the client is successfully authenticated. Otherwise the client has failed to provide the required authentication information.

public static final String ID

public static KeyPair Keys() throws NoSuchAlgorithmException, NoSuchProviderException A KeyPairGenerator is initialized with a SecureRandom seed and then used to generate a KeyPair. NoSuchAlgorithmException NoSuchProviderException

public static String Public_Key(KeyPair keys) throws IOException The public key of the public-private key pair is serialized as a byte array that is encoded as a string in hexadecimal notation. An encoded public key is expected to be sent to a client for use in generating an encoded password. keys- A KeyPair that contains a public key. IOException

public static String Encoded_Password(String password, String public_key) An encoded public key is decoded from its hexadecimal representation and then de-serialized to a PublicKey object. This is used to encrypt the password string. The encrypted bytes are encoded into a hexadecimal representation string. An encoded password provides secure authentication credentials for a client. It is expected to be sent to the server that provided the encoded public key for authentication. password- The clear text String that is to be encrypted.
public_key- A hexadecimal String representation of a serialized PublicKey. public static boolean Authenticate(KeyPair keys, String encoded_password, String password) The encoded password string is decoded and then decrypted using the private key of the key pair. If the result matches all the same characters of the specified password string then authentication has succeeded. N.B.: No exceptions are thrown; if any problem occurs the authentication fails. keys- The KeyPair used to decrypt the encoded password. If null false is returned. encoded_password- An encodedpassword string. If null false is returned. password- The password string to be compared against the encoded password after it has been decoded and decrypted. If null false is returned. public static String Encode_Hex(byte[] bytes) Each byte in the array is represented by two characters that are the hexadecimal value of the byte ('0' padded if the value of the byte is less than or equal to 0xF). All hex characters for each byte of the array are concatenated in byte array order. bytes- An array of byte values. If null, null is returned. Decode_Hex(String) public static byte[] Decode_Hex(String string) Each pair of characters in the string are translated into the binary value they represent and stored in a byte array in the order in which they occur in the string. string- A String of hexadecimal character pairs. If null, null is returned. NumberFormatException- If the length of the string is odd or any character in it does not represent a hexadecimal value (0-9 and a-f, case insensitive). Encode_Hex(byte[])
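Putting the documented methods together, a hypothetical end-to-end sketch (the class name, variable names and pass phrase are invented for illustration; only the signatures documented above are assumed):

import java.security.KeyPair;
import PIRL.Utilities.Authentication;

public class AuthenticationSketch
{
    public static void main (String[] args) throws Exception
    {
        //  Server side: generate a key pair and a hex-encoded public key.
        KeyPair keys = Authentication.Keys ();
        String public_key = Authentication.Public_Key (keys);

        //  Client side: encrypt and encode the shared password
        //  using the public key received from the server.
        String encoded_password =
            Authentication.Encoded_Password ("my pass phrase", public_key);

        //  Server side: check the encoded password against the known password.
        boolean authenticated =
            Authentication.Authenticate (keys, encoded_password, "my pass phrase");
        System.out.println ("Authenticated: " + authenticated);
    }
}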
http://pirlwww.lpl.arizona.edu/software/PIRL_Java_Packages/PIRL/Utilities/Authentication.html
CC-MAIN-2018-34
en
refinedweb
Oracle’s release of JDK 7 is expected to occur this coming fall. This new release will offer a suite of new features for you to learn. This article, the second in a four-part series that introduces you to some of these features (read Part 1 here, focuses on JDK 7’s improved support for translucent and shaped windows. Java SE 6u10 (build 12) introduced com.sun.awt.AWTUtilities to support translucent and shaped windows. This temporary class was introduced because 6u10 wasn't a major Java SE release; no new Abstract Window Toolkit APIs could be added or existing APIs modified. AWTUtilities doesn't exist in JDK 7. Instead, the necessary changes have been made to various AWT classes to support translucent and shaped windows. This article examines the AWT's three kinds of translucency support, and also examines its support for shaped windows. Simple Translucency Simple translucency results in an evenly translucent window; all pixels have the same opacity value. The smaller this value, the more translucent the window until it becomes transparent; the larger this value, the less translucent the window until it becomes opaque. JDK 7 supports simple translucency by adding public void setOpacity(float opacity) and public float getOpacity() methods to the java.awt.Window class. The former method requires an opacity argument ranging from 0.0 (transparent) to 1.0 (opaque). Invoke setOpacity() to activate simple translucency for the window on which this method is invoked. Don't specify an argument that is less than 0.0 or greater than 1.0; otherwise, setOpacity() will throw IllegalArgumentException. The setOpacity() method also throws java.awt.IllegalComponentStateException if the window is in full-screen mode and the opacity is less than 1.0, and UnsupportedOperationException if simple translucency isn't supported and the opacity is less than 1.0. The java.awt.GraphicsDevice class provides a public Window getFullScreenWindow() method for determining if the window is in full-screen mode. This class also provides the following method for determining if the current graphics device supports simple translucency: public boolean isWindowTranslucencySupported(GraphicsDevice.WindowTranslucency translucencyKind) The isWindowTranslucencySupported() method returns true if the kind of translucency specified by its argument is supported. For simple translucency, this argument must be GraphicsDevice.WindowTranslucency.TRANSLUCENT, as demonstrated below: GraphicsEnvironment ge; ge = GraphicsEnvironment.getLocalGraphicsEnvironment (); if (!ge.getDefaultScreenDevice (). isWindowTranslucencySupported (GraphicsDevice.WindowTranslucency.TRANSLUCENT)) { System.err.println ("simple translucency isn't supported"); return; } I've created a STDemo application that demonstrates simple translucency. Use its user interface's (UI's) slider component to adjust its frame window opacity from opaque to transparent (at which point the window disappears). Listing 1 presents the application's source code. 
Listing 1 STDemo.java

// STDemo.java

import java.awt.EventQueue;
import java.awt.FlowLayout;
import java.awt.GraphicsDevice;
import java.awt.GraphicsEnvironment;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JPanel;
import javax.swing.JSlider;
import javax.swing.event.ChangeEvent;
import javax.swing.event.ChangeListener;

public class STDemo extends JFrame
{
   public STDemo ()
   {
      super ("Simple Translucency Demo");
      setDefaultCloseOperation (EXIT_ON_CLOSE);

      final JSlider slider = new JSlider (0, 100, 100);
      ChangeListener cl;
      cl = new ChangeListener ()
           {
              public void stateChanged (ChangeEvent ce)
              {
                 JSlider source = (JSlider) ce.getSource ();
                 STDemo.this.setOpacity (source.getValue ()/100.0f);
              }
           };
      slider.addChangeListener (cl);

      getContentPane ().setLayout (new FlowLayout ());
      getContentPane ().add (new JLabel ("TRANSP"));
      getContentPane ().add (new JPanel () {{ add (slider); }});
      getContentPane ().add (new JLabel ("OPAQUE"));
      getRootPane ().setDoubleBuffered (false);
      pack ();
      setVisible (true);
   }

   public static void main (String [] args)
   {
      Runnable r;
      r = new Runnable ()
          {
             public void run ()
             {
                GraphicsEnvironment ge;
                ge = GraphicsEnvironment.getLocalGraphicsEnvironment ();
                if (!ge.getDefaultScreenDevice ().
                     isWindowTranslucencySupported
                     (GraphicsDevice.WindowTranslucency.TRANSLUCENT))
                {
                   System.err.println ("simple translucency isn't "+
                                       "supported");
                   return;
                }
                new STDemo ();
             }
          };
      EventQueue.invokeLater (r);
   }
}

Listing 1 creates a slider and registers a change listener with this component. While the slider control moves, this component fires change events to the listener, which responds by invoking setOpacity() with the slider's current value converted to [0.0, 1.0]. The listing takes advantage of the new JPanel () {{ add (slider); }} shortcut to create a Swing panel and add the slider component to the panel. Essentially, this shortcut instantiates a subclass of JPanel and uses the subclass's instance initializer to add the slider. Swing's component double buffering yields an unexpected visual artifact where an opaque slider image is left behind when you drag the translucent frame window. Listing 1 disables double buffering, via getRootPane ().setDoubleBuffered (false);, to avoid this artifact.
http://www.informit.com/articles/article.aspx?p=1592963
CC-MAIN-2018-34
en
refinedweb
Why Would Somebody Program Treepython Instead of Python Treepython is a variation of python which works in my structure editor textended. I am planning to write a computer game using this new language this February. In this post I will be adding new semantic patterns into treepython. It hopefully explains my motivation to break up with plaintext programming. Here's the samples/clearscreen.t+ from textended-edit. When I run it, it grabs the editor display and colors it black. The 0.0, 0.0, 0.0, 1.0 represents the black color here. Programmers see their colors like this, which stands to explain where the term "programmer art" is coming from. We could do a slightly better job by representing the color in the way they appear in an image manipulation software. This is called hexadecimal notation. In python you could create a function which constructs the needed format from a string, and input the color using that function as a helper. It would look like this: OpenGL.GL.glClearColor(*rgba_to_vec4("#000000")) The star stands for variable number of arguments. The rgba_to_vec4 produces 4 values in a list, but glClearColor wants 4 values. The star expands the list to fill 4 argument slots. This is slightly more readable, because you can copy the string into your preferred color chooser to see the color. But it doesn't come even close to what I can do in the treepython: If you coded in trees, you could represent the color right here. Next I'm going to show how that happens. First we annotate a string with symbol float-rgba. This will represent our hexadecimal colors and should translate to a tuple, each channel represented by a floating point number. If we try to evaluate the program, it returns us an error. The editor highlights the bad construct and shows the error message on the right. Extending Treepython with semantics The error message tells that it cannot be recognised as an expression. Lets extend treepython to recognize strings that are labeled float-rgba. @semantic(expr, String("float-rgba")) def float_rgba_expression(env, hexdec): channels = [c / 255.0 for c in hex_to_rgb(hexdec)] + [1.0] return ast.Tuple( [ast.Num(x, lineno=0, col_offset=0) for x in channels[:4]], ast.Load(), lineno=0, col_offset=0) def hex_to_rgb(value): lv = len(value) return tuple(int(value[i:i + lv // 3], 16) for i in range(0, lv, lv // 3)) Treepython is a translator between textended tree structures and python ast, so the above code resembles a lisp macro. It's up to the language's implementor to decide how his language is extended. Next if we try to run the program, the program crashes and produces the following error message to the terminal: Traceback (most recent call last): File "main.py", line 550, in <module> paint(time.time()) File "t+", line 0, in paint TypeError: this function takes at least 4 arguments (1 given) I intend to get it overlay this error over the file, just like it did small while ago. Anyway you may see why it's happening here. We are passing a tuple to the call. That there's vararg implemented may give you a false sense of completeness of this project. I just coded in the support for variable argument semantics while writing this blog post. 
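For reference, the rgba_to_vec4 helper mentioned earlier could be written roughly like this (a sketch only; the post shows just its call site, so the fixed alpha of 1.0 and the hex_to_rgb helper mirroring the one shown later in the post are assumptions):

def hex_to_rgb(value):
    value = value.lstrip('#')
    lv = len(value)
    return tuple(int(value[i:i + lv // 3], 16) for i in range(0, lv, lv // 3))

def rgba_to_vec4(value):
    # "#000000" -> [0.0, 0.0, 0.0, 1.0]; alpha assumed fully opaque
    return [c / 255.0 for c in hex_to_rgb(value)] + [1.0]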
Extending Treepython's layouter with semantics Here's the code to extend our layouter with the float-rgba string semantics:

) return hpack([Glue(2), ImageBox(12, 10, 4, None, rgba)] + prefix + text)

def hex_to_rgb(value):
    lv = len(value)
    return tuple(int(value[i:i + lv // 3], 16) for i in range(0, lv, lv // 3))

Would you have rather wanted a border around it? No problem:

) yield Padding(
    hpack([ImageBox(12, 10, 4, None, rgba)] + prefix + text),
    (1, 1, 1, 1),
    Patch9('assets/border-1px.png'))
  yield Glue(2)

As you may see now, we've improved readability of our program. Why show the hexadecimal if you can just show the color? That's because the editor needs a visual form to change the color. I think I'll be able to loosen that requirement later on. I modified an existing language, but all these changes could have been isolated apart and introduced into the file with a directive, like this: To support some other things, such as adding anonymous functions into a language that doesn't support them, would not be as easy. Many languages might still end up not supporting extensibility at all. But there are clearly lower costs to implementing new semantics in the first place... ...Well you could do this kind of thing in lisp of course! But when was the last time your lisp looked like python?
http://boxbase.org/entries/2015/jan/26/treepython/
CC-MAIN-2018-34
en
refinedweb
NumExpr 2.0 User Guide¶ The numexpr package supplies routines for the fast evaluation of array expressions elementwise by using a vector-based virtual machine. Using it is simple: >>> import numpy as np >>> import numexpr as ne >>> a = np.arange(10) >>> b = np.arange(0, 20, 2) >>> c = ne.evaluate("2*a+3*b") >>> c array([ 0, 8, 16, 24, 32, 40, 48, 56, 64, 72]) Building¶ NumExpr requires Python 2.6 or greater, and NumPy 1.7 or greater. It is built in the standard Python way: $ python setup.py build $ python setup.py install You must have a C-compiler (i.e. MSVC on Windows and GCC on Linux) installed. You can test numexpr with: $ python -c "import numexpr; numexpr.test()" Enabling Intel VML support¶ Starting from release 1.2 on, numexpr includes support for Intel’s VML library. This allows for better performance on Intel architectures, mainly when evaluating transcendental functions (trigonometrical, exponential, …). It also enables numexpr using several CPU cores. If you have Intel’s MKL (the library that embeds VML), just copy the site.cfg.example that comes in the distribution to site.cfg and edit the latter giving proper directions on how to find your MKL libraries in your system. After doing this, you can proceed with the usual building instructions listed above. Pay attention to the messages during the building process in order to know whether MKL has been detected or not. Finally, you can check the speed-ups on your machine by running the bench/vml_timing.py script (you can play with different parameters to the set_vml_accuracy_mode() and set_vml_num_threads() functions in the script so as to see how it would affect performance). Usage Notes¶ NumExpr’s principal routine is: evaluate(ex, local_dict=None, global_dict=None, optimization='aggressive', truediv='auto') where. The optimization parameter can take the values 'moderate' or 'aggressive'. 'moderate' means that no optimization is made that can affect precision at all. 'aggressive' (the default) means that the expression can be rewritten in a way that precision could be affected, but normally very little. For example, in 'aggressive' mode, the transformation x~**3 -> x*x*x is made, but not in 'moderate' mode. The truediv parameter specifies whether the division is a ‘floor division’ (False) or a ‘true division’ (True). The default is the value of __future__.division in the interpreter. See PEP 238 for details.). If they are not in the previous set of types, they will be properly upcasted for internal use (the result will be affected as well). The arrays must all be the same size. Datatypes supported internally¶¶ Casting rules in NumExpr follow closely those of NumPy. However, for implementation reasons, there are some known exceptions to this rule, namely: - When an array with type int8, uint8, int16or uint16is used inside NumExpr, it is internally upcasted to an int(or int32in NumPy notation). - When an array with type uint32is used inside NumExpr, it is internally upcasted to a long(or int64in NumPy notation). - A floating point function (e.g. sin) acting on int8or int16types returns a float64type, instead of the float32that is returned by NumPy functions. This is mainly due to the absence of native int8or int16types in NumExpr. - In operations implying a scalar and an array, the normal rules of casting are used in NumExpr, in contrast with NumPy, where array types takes priority. 
For example, if a is an array of type float32 and b is a scalar of type float64 (or the Python float type, which is equivalent), then a*b returns a float64 in NumExpr, but a float32 in NumPy (i.e. array operands take priority in determining the result type). If you need to keep the result a float32, be sure you use a float32 scalar too.

Supported operators

NumExpr supports the set of operators listed below:

- Logical operators: &, |, ~
- Comparison operators: <, <=, ==, !=, >=, >
- Unary arithmetic operators: -
- Binary arithmetic operators: +, -, *, /, **, %, <<, >>

Supported functions

- contains(str, str): bool -- returns True for every string in op1 that contains op2.

Notes

- abs() for complex inputs returns a complex output too. This is a departure from NumPy, where a float is returned instead. However, NumExpr is not flexible enough yet to allow this to happen. Meanwhile, if you want to mimic NumPy behaviour, you may want to select the real part via the real function (e.g. real(abs(cplx))) or via the real selector (e.g. abs(cplx).real).

More functions can be added if you need them. Note however that NumExpr 2.6 is in maintenance mode and a new major revision is under development.

Supported reduction operations

The currently supported set is:

- sum(number, axis=None): Sum of array elements over a given axis. Negative axes are not supported.
- prod(number, axis=None): Product of array elements over a given axis. Negative axes are not supported.

Note: because of internal limitations, reduction operations must appear last in the stack. If not, an error like the following is issued:

    >>> ne.evaluate('sum(1)*(-1)')
    RuntimeError: invalid program: reduction operations must occur last

General routines

- evaluate(expression, local_dict=None, global_dict=None, optimization='aggressive', truediv='auto'): Evaluate a simple array expression element-wise. See examples above.
- re_evaluate(local_dict=None): Re-evaluate the last expression. See the note below on how the number of threads is set via environment variables.

Note on the maximum number of threads: Threads are spawned at import-time, with the number being set by the environment variable NUMEXPR_MAX_THREADS. Example:

    import os; os.environ['NUMEXPR_MAX_THREADS'] = '16'

The default maximum thread count is 64. The initial number of threads _that are used_ will be set to the number of cores detected in the system or 8, whichever is lower. For historical reasons, the NUMEXPR_NUM_THREADS environment variable is also honored at initialization time and, if defined, the initial number of threads will be set to this value instead. Alternatively, the OMP_NUM_THREADS environment variable is also honored, but beware because that might affect other OpenMP applications too.

Intel's VML specific support routines

When compiled with Intel's VML (Vector Math Library), you will be able to use some additional functions for controlling its use. These are:

- set_vml_accuracy_mode(mode): Set the accuracy for VML operations. The mode parameter can take the values:
  - 'low': Equivalent to VML_LA - low accuracy VML functions are called
  - 'high': Equivalent to VML_HA - high accuracy VML functions are called
  - 'fast':
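To make the threading and evaluation notes above concrete, here is a small usage sketch. It leans on evaluate(), re_evaluate() and the NUMEXPR_MAX_THREADS variable described above, plus numexpr's set_num_threads() helper; the array sizes and thread counts are arbitrary choices, not recommendations:

    import os
    os.environ['NUMEXPR_MAX_THREADS'] = '8'   # must be set before numexpr is first imported

    import numpy as np
    import numexpr as ne

    a = np.arange(10**6, dtype=np.float64)
    b = np.arange(10**6, dtype=np.float64)

    # evaluate() looks up 'a' and 'b' in the calling frame by default,
    # or you can pass the operands explicitly through local_dict.
    c = ne.evaluate('2*a + 3*b')
    d = ne.evaluate('2*x + 3*y', local_dict={'x': a, 'y': b})

    # re_evaluate() re-runs the last compiled expression, skipping the parsing
    # overhead, which helps when the same formula is applied over and over.
    a += 1
    e = ne.re_evaluate()

    ne.set_num_threads(4)   # change the number of worker threads at runtime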
http://numexpr.readthedocs.io/en/latest/user_guide.html
CC-MAIN-2018-34
en
refinedweb
Introduction to Spark Structured Streaming - Part 3 : Stateful WordCount

This is the third post in the series. In this post, we discuss aggregation on a stream using the word count example. You can read all the posts in the series here.

TL;DR You can access the code on github.

Word Count

Word count is the hello world example of big data. Whenever we learn a new API, we start with a simple example which shows important aspects of the API. Word count is unique in that sense: it shows how the API handles single-row and multi-row operations. Using this simple example, we can understand many different aspects of the structured streaming API.

Reading data

As we did in the last post, we will read our data from a socket stream. Below is the code to read from the socket and create a dataframe.

    val socketStreamDf = sparkSession.readStream
      .format("socket")
      .option("host", "localhost")
      .option("port", 50050)
      .load()

Dataframe to Dataset

In the above code, socketStreamDf is a dataframe. Each row of the dataframe will be one line from the socket. To implement the word count, first we need to split the whole line into multiple words. Doing that in the dataframe DSL or SQL is tricky. The logic is easy to implement in a functional API like flatMap. So rather than working with the dataframe abstraction, we can work with the dataset abstraction, which gives us good functional APIs. We know the dataframe has a single column value of type string. So we can represent it using Dataset[String].

    import sparkSession.implicits._
    val socketDs = socketStreamDf.as[String]

The above code creates a dataset socketDs. The implicit import makes sure we have the right encoders for string to convert to a dataset.

Words

Once we have the dataset, we can use flatMap to get words.

    val wordsDs = socketDs.flatMap(value => value.split(" "))

Group By and Aggregation

Once we have words, the next step is to group by words and aggregate. As structured streaming is based on the dataframe abstraction, we can use SQL group by and aggregation functions on the stream. This is one of the strengths of moving to the dataframe abstraction. We can use all the batch APIs on a stream seamlessly.

    val countDs = wordsDs.groupBy("value").count()

Run using Query

Once we have the logic implemented, the next step is to connect to a sink and create a query. We will be using the console sink as in the last post.

    val query = countDs.writeStream.format("console").outputMode(OutputMode.Complete())
    query.start().awaitTermination()

You can access the complete code on github.

Output Mode

In the above code, we have used output mode complete. In the last post, we used append mode. What do these signify? In structured streaming, the output of the stream processing is a dataframe or table. The output modes of the query signify how this infinite output table is written to the sink, in our example to the console. There are three output modes:

Append - In this mode, only the records which arrived in the last trigger (batch) will be written to the sink. This is supported for simple transformations like select, filter etc. As these transformations don't change the rows which were calculated for earlier batches, appending the new rows works fine.

Complete - In this mode, every time the complete resulting table will be written to the sink. Typically used with aggregation queries. In the case of aggregations, the result keeps changing as and when new data arrives.

Update - In this mode, only the records that changed since the last trigger will be written to the sink. We will talk about this mode in future posts.
Depending upon the queries we use, we need to select the appropriate output mode. Choosing the wrong one results in a runtime exception like the one below.

    org.apache.spark.sql.AnalysisException: Append output mode not supported when there are streaming aggregations on streaming DataFrames/DataSets without watermark;

You can read more about the compatibility of different queries with different output modes here.

State Management

Once you run the program, you can observe that whenever we enter new lines it updates the global wordcount. So every time Spark processes the data, it gives the complete wordcount from the beginning of the program. This indicates that Spark is keeping track of the state for us. So it's a stateful wordcount.

In structured streaming, all aggregations are stateful by default. All the complexities involved in keeping state across the stream and across failures are hidden from the user. The user just writes the simple dataframe-based code and Spark figures out the intricacies of the state management. It's different from the earlier DStream API. In that API, everything was stateless by default and it was the user's responsibility to handle the state. But handling state was tedious and it became one of the pain points of the API. So in structured streaming Spark has made sure that most of the common work is done at the framework level itself. This makes writing stateful stream processing much simpler.

Conclusion

We have written a stateful wordcount example using the dataframe APIs. We also learnt about output modes and state management.
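As a footnote for Python users: the series is written in Scala, but the same pipeline can be expressed with PySpark's structured streaming API. The following is a rough Python equivalent of the example above (not part of the original series); the host and port simply mirror the Scala snippet:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import explode, split

    spark = SparkSession.builder.appName("StatefulWordCount").getOrCreate()

    # Socket source, same as the Scala example
    lines = (spark.readStream
             .format("socket")
             .option("host", "localhost")
             .option("port", 50050)
             .load())

    # One row per word; explode() plays the role of flatMap here
    words = lines.select(explode(split(lines.value, " ")).alias("value"))

    # Stateful aggregation: Spark keeps the running counts across triggers
    counts = words.groupBy("value").count()

    query = (counts.writeStream
             .outputMode("complete")
             .format("console")
             .start())

    query.awaitTermination()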
http://blog.madhukaraphatak.com/introduction-to-spark-structured-streaming-part-3/?utm_campaign=Revue%20newsletter&utm_medium=Newsletter&utm_source=SF%20Data%20Weekly
CC-MAIN-2018-34
en
refinedweb
I did solve it in my own way, as follows:

    def censor(text, word):
        newLst = []
        for i in text:
            newLst.append(i)
        print newLst
        for i in range(len(text)):
            if text[i:i+len(word)] == word:
                for n in range(len(word)):
                    newLst[i+n] = "*"
        return "".join(newLst)

but when I checked the hint to see what Codecademy's method was, I couldn't figure out all the steps... (was the .split() function taught in the course?)

Can we distinguish between 2 functions that give the correct results to the same problem in different ways, to figure out which is better? (maybe something like time taken, etc.?)

P.S: How does Codecademy make those cool default display pictures that are different for every user?
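For what it's worth, the hint's approach almost certainly revolves around text.split(" ") and " ".join(...): split the text into words, swap any word that matches for asterisks, and join the pieces back together. Here is a sketch of that idea (not Codecademy's exact solution), plus a quick way to time the two versions:

    def censor_split(text, word):
        words = text.split(" ")
        censored = []
        for w in words:
            if w == word:
                censored.append("*" * len(word))
            else:
                censored.append(w)
        return " ".join(censored)

    # Rough timing comparison using the standard library's timeit module
    import timeit
    setup = "from __main__ import censor, censor_split"
    print(timeit.timeit('censor("hey hey hey", "hey")', setup=setup, number=10000))
    print(timeit.timeit('censor_split("hey hey hey", "hey")', setup=setup, number=10000))

Note the two functions aren't strictly equivalent: the split-based version only censors whole space-separated words, while your character-scanning version also stars out the word when it appears inside a longer word. For the "which is better" question, timeit (or simply recording time.time() around many calls) is the usual way to compare the time taken.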
https://discuss.codecademy.com/t/10-15-how-to-use-the-hint-given/343921
CC-MAIN-2018-34
en
refinedweb
AS3, Dictionary & Weak Method Closures

This is going to be a technical post, so those of you not of the code persuasion look away now. Okay great, now those guys have gone I can get down to it.

Some of my recent work on the SWFt project has revolved around the use of Robert Penner's AS3Signals. If you don't know what Signals are, I strongly recommend that you check out Robert's blog for more info. In brief, they are an alternative to the Events system found in Flash, based on the Signal / Slot pattern of Qt and C#; they are much faster and more elegant (my opinion) than native events.

I have been trying to incorporate signals in SWFt for both the elegance and performance gains that they bring; however, there is an issue that was brought to my attention by Shaun Smith on the mailing list. The issue is that my current use of them will cause memory leaks. I realised that this too would apply to the work I had been doing using the RobotLegs and Signals libraries.

RobotLegs (for those of you that don't know) is an excellent Dependency Injection framework inspired by the very popular PureMVC framework. I have blogged before about its excellence. Signals have been incorporated into RobotLegs as a separate 'plugin' by Joel Hooks in the form of the SignalCommandMap. The SignalCommandMap does as the name implies: it allows you to map signals to commands so that whenever a mapped signal is dispatched the corresponding command is executed. It's a very nice, elegant solution to RIA development. However there is one catch. I have so far been using signals such as:

    public class MyMediator extends Mediator
    {
        // View
        [Inject]
        public var view : MyView;

        // Signals
        [Inject]
        public var eventOccured : ViewEventOccuredSignal;

        [Inject]
        public var modelChanged : ModelChangedSignal;

        override public function onRegister():void
        {
            view.someSignal.add(eventOccured.dispatch);
            modelChanged.add(onModelChanged);
        }

        protected function onModelChanged()
        {
            view.updateView();
        }
    }

So here we can see a typical use of Signals in a mediator. There are two things going on here that are of concern; let's break them down.

Firstly, on line 12 of the original listing (view.someSignal.add(eventOccured.dispatch)) we are listening to a signal on the view, then passing on the event directly to an app-level event. Notice how nice and clean this is; this is what I love about using RL & Signals. On line 13 (modelChanged.add(onModelChanged)) we are listening for an app-level signal for a change on the model, then updating the view to reflect this.

It all looks well and good, but unfortunately in its current state it could cause a memory leak. This is because we are listening to events on signals without then removing the listener. For example, we are listening to the app-level event on line 13, "modelChanged.add(onModelChanged);", so now the "modelChanged" signal has a reference to this Mediator. This will cause a leak when the View is removed from the display list. Normally the mediator would also be made available for garbage collection; however, because the singleton Signal has a reference to the Mediator, it cannot be removed. The same goes for line 12. Suppose the "ViewEventOccuredSignal" that is injected is not a singleton and is swapped out for another instance: it could not be garbage collected, as "view.someSignal" has a reference to its dispatch function.

Realising this problem, I knew that the solution was simply to be careful and add an "onRemoved" override function in my Mediator, then clean up by removing the signal listeners.
However, I like the simplicity and beauty of the current way of doing things, so I started to wonder if there was another way. I started thinking about whether I could use weak references with the Signal. If I could, then I wouldn't have to worry about cleaning up, as the Signal wouldn't store any hard references to the functions, and so the listener would be free for collection. After some digging, however, I realised that there was no option for weak listening in Robert Penner's AS3Signals. I thought to myself: why the hell not? I knew that the Dictionary object in AS3 has an option to store its contents weakly, so I thought that, as long as you don't require order-dependent execution of your listeners, it should be possible to store the listener functions in a weakly referenced Dictionary.

It was at this point that I noticed Robert's post on the subject of weakly referenced Signals. In it he references Grant Skinner's post concerning a bug with storing functions in a weakly referenced Dictionary. This was starting to look bad for my idea. Me being me, however, I thought I knew better, and that post was written pre Flash 10, so I thought to myself: perhaps it's been fixed in Flash 10. So I set to work coding a simple example. I created a very simple Signal dispatcher:

    package
    {
        import flash.events.EventDispatcher;
        import flash.utils.Dictionary;

        public class SimpleDispatcher
        {
            protected var _listeners : Dictionary;

            public function SimpleDispatcher(useWeak:Boolean)
            {
                _listeners = new Dictionary(useWeak);
            }

            public function add(f:Function) : void
            {
                _listeners[f] = true;
            }

            public function dispatch() : void
            {
                for (var o:* in _listeners)
                {
                    o();
                }
            }
        }
    }

And a very simple listening object:

    package
    {
        public class SimpleListener
        {
            public function listen(d:SimpleDispatcher) : void
            {
                d.add(onPing);
            }

            protected function onPing() : void
            {
                trace(this+" - ping");
            }
        }
    }

And then a simple Application to hook it all together:

    <?xml version="1.0" encoding="utf-8"?>
    <s:Application xmlns:fx=""
                   xmlns:s="library://ns.adobe.com/flex/spark"
                   xmlns:mx="library://ns.adobe.com/flex/mx">
        <fx:Script>
            <![CDATA[
            import mx.controls.List;

            protected var _dispatcher : SimpleDispatcher = new SimpleDispatcher(true);
            protected var _listener : SimpleListener;

            protected function onAddListenerClicked(event:MouseEvent):void
            {
                _listener = new SimpleListener();
                _listener.listen(_dispatcher);
            }

            protected function onRunGCClicked(event:MouseEvent):void
            {
                try
                {
                    new LocalConnection().connect('foo');
                    new LocalConnection().connect('foo');
                }
                catch (e:*) {}
            }

            protected function onDispatchClicked(event:MouseEvent):void
            {
                _dispatcher.dispatch();
            }
            ]]>
        </fx:Script>
        <s:VGroup
            <s:Button
            <s:Button
            <s:Button
        </s:VGroup>
    </s:Application>

So what I should expect to see from this example is that when I click "Add Listener" it should create a listener reference which will then listen for when the signal is dispatched and trace out a "ping". What actually happens is you get nothing. No trace out, despite the fact that there is clearly still a reference to the listener in the Application file. So what's happening here? If you break into the debugger at the point that the listener is added, then you get the following: you can see that the type "MethodClosure" is added as the key to the dictionary rather than the Function which is passed in.
MethodClosure is a special native Flash type that you don't have access to. It exists to resolve the issues we used to have in AS2, where passing a function of a class to a listener would cause the listener to go out of scope and other nasties. From the Adobe docs:

    Event handling is simplified in ActionScript 3.0 thanks to method closures, which provide built-in event delegation. In ActionScript 2.0, a closure would not remember what object instance it was extracted from, leading to unexpected behavior when the closure was invoked. … This class is no longer needed because in ActionScript 3.0, a method closure will be generated when someMethod is referenced. The method closure will automatically remember its original object instance.

The only problem is that it seems that using a MethodClosure as a key in a weak dictionary causes the MethodClosure to have no references, and hence be free for garbage collection as soon as it's added to the Dictionary, which is not good :(

So that's about as far as I got. I have spent a few evenings on this one now and I think I'm about ready to call it quits. I had a few ideas about creating Delegate handlers to make functions very much in the same way as was done in AS2, but then I read this post and the subsequent comments and realised it probably wasn't going to work. I also had an idea about using the only other method of holding weak references, the EventDispatcher class. I thought perhaps somehow I could get it to hold the weak references, then I could loop through the listeners in there calling dispatch manually. Despite the "listeners" property showing up in the Flex debugger for an EventDispatcher, you don't actually have access to that property, unfortunately, so you can't get access to the listening functions. Interestingly, however, the EventDispatcher uses a "WeakMethodClosure" object instead of the "MethodClosure" object, according to the debugger.

Well, I guess for now I'll have to make sure I code more carefully and unlisten from my Signals ;)
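A side note for readers more at home in Python (an analogy, not something from the original post or the Flash toolchain): the exact same trap exists there, because bound methods are also created on the fly each time you access them, so a weak mapping keyed on one dies almost instantly. weakref.WeakMethod is the standard library's answer:

    import weakref

    class Listener:
        def on_ping(self):
            print("ping")

    obj = Listener()
    listeners = weakref.WeakKeyDictionary()

    # obj.on_ping creates a brand-new bound-method object; the weak dictionary
    # becomes its only reference, so it is collected right away -- the same
    # behaviour as AS3's MethodClosure in a weak Dictionary.
    listeners[obj.on_ping] = True
    print(len(listeners))        # 0 on CPython: the entry is already gone

    # WeakMethod tracks the instance instead, so the reference lives as long as obj
    ref = weakref.WeakMethod(obj.on_ping)
    method = ref()
    if method is not None:
        method()                 # prints "ping" while obj is still alive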
https://mikecann.co.uk/actionscript/flex/programming/swft/as3-dictionary-weak-method-closures/
CC-MAIN-2018-34
en
refinedweb
A week of symfony #536 (3-9 April 2017)

This week Symfony started working on stabilizing the new features introduced for Symfony 3.3, especially the ones related to autowiring. Symfony also added a new Kernel::getProjectDir() method to get the root directory of the project instead of the kernel directory. Lastly, the first blog posts about Symfony 4 were published, outlining the future of the Symfony project.

Symfony development highlights

- c7163e2: [DependencyInjection] fixed fatal error at ContainerBuilder::compile() if config is not installed
- a9da8a3: [ExpressionLanguage] provide the expression in syntax errors
- a2cd63c: fixed more risky tests
- a200357: [Translation] avoid creating cache files for fallback locales
- 933835c: [DependencyInjection] fixed the XML schema
- 2adfb37: [Validator] check for empty host when calling checkdnsrr
- 0eed690: [ExpressionLanguage] avoid ExpressionLanguage dependency on ctype
- ee10bf2: [DependencyInjection] don't use auto-registered services to populate type-candidates
- bad24d3: [DependencyInjection] autowiring and factories are incompatible with each other
- 91b025a: [DependencyInjection] prevent AutowirePass from triggering irrelevant deprecations
- 80cea46: [PropertyInfo] supported nullable array or collection
- 247e797: [PropertyInfo] allow Upper Case property names
- db8231e: [HttpKernel] fixed forward compat with Request::setTrustedProxies()
- 47740ce: [Workflow] added workflow_marked_places() Twig function
- 88c587d: [DependencyInjection] don't trigger deprecation for event_dispatcher service
- 2450449: [DependencyInjection] added ServiceLocatorTagPass::register() to share service locators
- a146e4d: [FrameworkBundle] returns the kernel instance in KernelTestCase::bootKernel
- 6e54cdf: [Console] give errors back to error handler if not handled by console.error listeners
- 54495b2: [Console] allow to catch CommandNotFoundException
- 2a40b6f: [WebProfiler] fixed race condition in fast Ajax requests
- 937045c: [Yaml] report deprecations when linting YAML files
- d33c0ee: [TwigBundle] redesigned the exception pages
- ab93fea: [DependencyInjection] always autowire "by id" instead of using reflection against all existing services
- d662b21: [DependencyInjection] restrict autowired registration to "same-vendor" namespaces
- 3458edf, 49ae724: [DependencyInjection] improved autowiring error messages
- abb8d2b: [WebServerBundle] added a way to dump current status host/port/address when getting the status
- 3f07e10: [HttpKernel] dump container logs in Kernel to have them also on errors
- ec2cc08: [HttpKernel] resolve invokable controllers short notations using ServiceValueResolver
- f04c0b5: [HttpKernel] skip ContainerAwareInterface::setContainer from service_arguments actions registration
- 7b8409a: [HttpKernel] added Kernel::getProjectDir()

Newest issues and pull requests

- [Config] Support dots in node names
- [Security] Make firewall more extensible
- CookieSessionHandler proposal
- Browserkit back should not go back to URLs that redirect
- [WebProfiler] File links (event panel) are not resolved using the router

They talked about us

- API Platform 2.1: when Symfony meets ReactJS (Symfony Live 2017)
- DDD with Symfony: repositories
- Symfony 4: Compose your Applications
- Symfony 4: Monolith vs Micro
- Symfony 4: Best Practices
- A better .htaccess for Silex/Symfony Applications
- Retour sur le Symfony Live Paris 2017 (les-tilleuls.coop)
- Retour sur le Symfony Live 2017 (JoliCode)
- Le Symfony Live Paris 2017 (Eleven Labs)
- Système de badges sur Symfony 3
- Symfony 4: Componer en vez de heredar
- Symfony 4: Microaplicaciones y monolitos
- Symfony 4: Buenas prácticas
- Symfony Flex, как будет выглядеть ваше приложение с Symfony 4
- PHP: Хранение сессий в защищённых куках
- Сети Петри с Symfony а-ля WorkFlow компонент

In Symfony 4, can you make the default configuration directory name configurable? I find /etc to not be explicit enough (I know it originates from Unix's /etc, but not every Symfony developer is a Unix master ;) ). I would like to easily change it to /config or /conf. Could there be a simple method like getConfigDir(), similar to getCacheDir() or getLogDir()?

Adam Prager said on Apr 10, 2017 at 19:19 #1

I'm not sure about the bundle-less approach though. Sure, it will make Symfony "friendlier": it reduces the learning curve for newcomers and lowers the bar for small projects to consider Symfony (less overhead, simpler structure). These are very positive points, and the decision is totally understandable. However, I'm wondering about mid+ size projects and project growth. In my experience, initial requirements rarely reflect the real complexity of a project. Let's say I have a bundle-less, mid-sized Symfony project. The project also has a bunch of CMS features, so it would make sense to create a separate bundle for them. Most of the time these bundles are not completely decoupled, not open sourced -> not reused. They exist purely as an organization pattern. With the AppBundle, this was very easy to do, only a small refactor... basically a namespace change. Will the bundle-less version support this? Or will we have to solve these problems inside our domain (like App\Cms\Entity)? What would happen if I had to extend a 3rd party bundle, like FOSUser? IMO the guidelines should cover scenarios like this. Shouldn't be too hands off...
http://symfony.com/blog/a-week-of-symfony-536-3-9-april-2017
CC-MAIN-2018-34
en
refinedweb
It is pretty easy to create a Spring Batch job that reads its input data from a CSV or an XML file because these file formats are supported out of the box. However, if we want to read the input data of our batch job from a .XLS or .XLSX file that was created with Excel, we have to work a bit harder. This blog post helps us to solve this problem.. Introduction to Our Example Application During this tutorial we will implement several Spring Batch jobs that processes the student information of an online course. This time we need to create a Spring Batch job that can import student information from an Excel file. This file contains a student list that provides the following information for our application: - The name of the student. - The email address of the student. - The name of the purchased package. When we read the student information from an Excel file, we have to transform that information; } } Before we can configure an ItemReader that reads student information from our Excel file, we have to add a few additional dependencies into our build script. Getting the Required Dependencies If we want to read the input data of our Spring Batch job from an Excel document, we have to add the following dependency declarations into our build script: - Spring Batch Excel is a Spring Batch extension that provides ItemReader implementations for Excel. Unfortunately at the moment the only way to get the required jar file is to build it from the source. - Apache POI provides a Java API for Microsoft Office documents. It is an optional dependency of Spring Batch Excel, and we can use it for reading input data from .XLS and .XLSX documents. Additional Reading: After we have added the required dependency declarations into our build script, we can finally configure the ItemReader that can read the student information from our Excel spreadsheet. Reading Information From an Excel File The students.xlsx file contains the student list of our course. This file is found from the classpath and its full path is: data/students.xlsx. The content of this Excel spreadsheet looks as follows: NAME |EMAIL_ADDRESS |PURCHASED_PACKAGE Tony Tester |tony.tester@gmail.com |master Nick Newbie |nick.newbie@gmail.com |starter Ian Intermediate|ian.intermediate@gmail.com |intermediate As we already know, we can provide input data for our Spring batch job by configuring an ItemReader bean. We can configure an ItemReader bean, which reads the student information from the students.xlsx file, by following these steps: - Create an ExcelFileToDatabaseJobConfig class and annotate it with the @Configuration annotation. This class is the configuration class of our batch job, and it contains the beans that describe the flow of our batch job. - Create a method that configures our ItemReader bean and ensure that the method returns an ItemReader<StudentDTO> object. - Implement the created method by following these steps: - Create a new PoiItemReader<StudentDTO> object. - Ensure that the created reader ignores the header of our spreadsheet. - Configure the created reader to read the student information from the data/students.xlsx file that is found from the classpath. - Configure the reader to transform a student information row into a StudentDTO object with the BeanWrapperRowMapper class. This class populates the fields of the created StudentDTO object by using the column names given on header row of our spreadsheet. - Return the created PoiItemReader<StudentDTO> object. 
The source code of the ExcelFileToDatabaseJobConfig class looks as follows: import org.springframework.batch.item.ItemReader; import org.springframework.batch.item.excel.RowMapper; import org.springframework.batch.item.excel.mapping.BeanWrapperRowMapper; import org.springframework.batch.item.excel.poi.PoiItemReader; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.core.env.Environment; import org.springframework.core.io.ClassPathResource; @Configuration public class ExcelFileToDatabaseJobConfig { @Bean ItemReader<StudentDTO> excelStudentReader() { PoiItemReader<StudentDTO> reader = new PoiItemReader<>(); reader.setLinesToSkip(1); reader.setResource(new ClassPathResource("data/students.xlsx")); reader.setRowMapper(excelRowMapper()); return reader; } private RowMapper<StudentDTO> excelRowMapper() { BeanWrapperRowMapper<StudentDTO> rowMapper = new BeanWrapperRowMapper<>(); rowMapper.setTargetType(StudentDTO.class); return rowMapper; } } This approach works as long as our Excel spreadsheet has a header row and the column names of the header row can be resolved into the field names of the StudentDTO class. However, it is entirely possible that we have to read the input data from a spreadsheet that doesn’t have a header row. If this is the case, we have to create a custom RowMapper that transforms the rows of our spreadsheet into StudentDTO objects. We can create a custom RowMapper by following these steps: - Create a StudentExcelRowMapper class. - Implement the RowMapper<T> interface and pass the type of created object (StudentDTO) as a type parameter. - Implement the T mapRow(RowSet rowSet) method of the RowMapper<T> interface by following these steps: - Create a new StudentDTO object. - Populate the field values of the created object. We can read the column values of the processed row by invoking the getColumnValue(int columnIndex) method of the RowSet interface. Also, we must remember that the index of the first column is 0. - Return the created StudentDTO object. The source code of the StudentExcelRowMapper class looks as follows: import org.springframework.batch.item.excel.RowMapper; import org.springframework.batch.item.excel.support.rowset.RowSet; public class StudentExcelRowMapper implements RowMapper<StudentDTO> { @Override public StudentDTO mapRow(RowSet rowSet) throws Exception { StudentDTO student = new StudentDTO(); student.setName(rowSet.getColumnValue(0)); student.setEmailAddress(rowSet.getColumnValue(1)); student.setPurchasedPackage(rowSet.getColumnValue(2)); return student; } } After we have created our custom row mapper, we have to make the following changes to the configuration of our ItemReader bean: - Ensure that the our ItemReader does not ignore the first line of the input data. - Replace the old excelRowMapper() method with a method that returns a new StudentExcelRowMapper object. 
After we have made these changes to the ExcelFileToDatabaseJobConfig class, its source code looks as follows: import org.springframework.batch.item.ItemReader; import org.springframework.batch.item.excel.RowMapper; import org.springframework.batch.item.excel.poi.PoiItemReader; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.core.io.ClassPathResource; @Configuration public class ExcelFileToDatabaseJobConfig { @Bean ItemReader<StudentDTO> excelStudentReader() { PoiItemReader<StudentDTO> reader = new PoiItemReader<>(); reader.setResource(new ClassPathResource("data/students.xlsx")); reader.setRowMapper(excelRowMapper()); return reader; } private RowMapper<StudentDTO> excelRowMapper() { return new StudentExcelRowMapper(); } } Let’s summarize what we learned from this blog post. Summary This blog post has taught us four things: - If we want to read the input data of a Spring Batch job from an Excel spreadsheet, we have to add Spring Batch Excel and Apache POI dependencies into our build script. - If we want to read input data by using Spring Batch Excel and Apache POI, we have to use the PoiItemReader class. - We can map the rows of our spreadsheet into T objects by using the BeanWrapperRowMapper<T> class as long as our Excel spreadsheet has a header row and the column names of the header row can be resolved into the field names of the T class. - If our Excel spreadsheet doesn’t have a header row or the column names of the header row cannot resolved into the field names of the T class, we have to create a custom row mapper component that implements the RowMapper<T> interface. P.S. You can get the example applications of this blog post from Github: Spring example and Spring Boot example. Hi, Could you give an example of passing the resource path dynamically.Instead of hardcoding the file’s path in the reader. Normally this would be dynamic, you could have a website where people upload files and they have to be processed. Thanks Petri, I am facing the similar issue as described by Amos. Need to pass in a file dynamically rather than hard coding it. Help would be appreciated. Thanks! Please provide the sample code for the above example… Thank you. Your example is very good please send code for dynamic resources path to excel upload using spring batch. Hi, I don’t know how you can use a “dynamic file name” and use Spring Batch Excel. If you write your own ItemReader, you can of course implement the file reading logic as you wish, but this means that you have to also write the code that reads the content of the input file. One option is that you could create your own Resourcethat can locate input file dynamically, but I am not sure if this possible and how it should be implemented. Hi, I have an excel file which contains rowspan and colspan. Can you please share an example which we can read span data from excel file. I tried the example to read excel file by using spring batch. I have kept only one xlsx file i.e. Student.xlsx file. For the first time it works fine but the second time when scheduler starts the job it is giving me exception ‘java.lang.IllegalArgumentException: Sheet index (1) is out of range (0..0)’. Mine xlsx file has only one sheet. May I know from where is it incrementing the sheet index? Thanks. Hmm. I think that you have just found a bug from my example. I will take a closer look at it. Hey, have you fixed it? sorry I Ask but it happes to me too Ah. I forgot to report my findings. 
I am sorry about that :( In any way, it seems that this problem is caused by the Spring Batch Excel. The problem is that it doesn’t reset the number of the current sheet after it has processed the input file. This means that when the job is run for the second time, it fails because the sheet cannot be opened (because it doesn’t exist). In other words, you can fix this problem by cloning the Spring Batch Excel extension and making the required change to the AbstractExcelItemReaderclass. After you have made the required change, you have to create new jars and use them instead of the jars that are provided by this example. Do you have an example where the Excel file is being submitted as Multipartfile.? Your example reads the excel file from the classpath and sets the resource on the Item reader. I am hoping for an example where the job is triggered when a user sends a request with the excel file payload as such open an input stream read the excel. If you we are reading the excel file as input stream, how do we set the resource on the reader bean? Unfortunately I don’t have an example that reads the Excel file from an HTTP request. However, you could save the uploaded file to a directory and read the file from the upload directory by using Spring Batch. Actually, I will add this example to my to-do list and I maybe I will implement it after I have released my testing course. Have you had a chance to implement the uploading Excel file with HTTP request? Hi, Unfortunately I haven’t had any time to write that example (I am still recording my testing course). Did you get a chance to implement the uploading Excel file with HTTP request?I googled so many things but not finding any proper solution. Hi, I am sorry that it took me some time to answer to your comment. As you probably guessed, I am still recording my testing course. That being said, the course should be finally done after a few weeks, and I can concentrate on writing more content to my blog. About your problem: I haven’t been able to find a proper solution to it because Spring Batch doesn’t provide a good support for reading data from files that are determined at runtime. I assume that it’s possible to support this use case, but I just haven’t found the solution yet. How about if I want to read item from a particular sheet? how to do it in reader ya? Thank You. I took a quick look at the source of the PoiItemReaderand AbstractExcelItemReaderclasses, and it seems that there is no way to set the opened sheet (or at least I couldn’t find it). In other words, if you want to do this, yo have to make the required changes directly to the source code of Spring Batch Excel. The main classes of PIO have that option. but the Extension of spring batch doesnt. you just set the sheet you want using .getSheet(X) I have an requirement where i need to read dynamic excel sheets and process same and write in db. All of the examples i have seen so far requires a dto to access the excel file.Is there anyway i can read excel sheet dynamically without dto ? do you resolve this problem ?? i have the same problem Hi, You have to implement a custom ItemReaderthat reads the input file by using Apache POI. Unfortunately I cannot give you the exact steps because they depend from the structure of the input file. I have added spring batch excel dependencies to my build script but I am getting error : import org.springframework.batch.item.excel.* cannot be resolved. Are you using Maven or Gradle? 
Also, did your IDE reload the dependencies of your project after you made changes to your build script? if you like to help. Hi This helped me a lot. Unfortunately facing some issues. excelStudentReader works fine for the firsttime. When i make a call again with the same excel, getting below error Caused by: java.lang.IllegalArgumentException: Sheet index (3) is out of range (0..2) I tried changing the below line in PoiItemReader getSheet() return new PoiSheet(this.workbook.getSheetAt(sheet)); to return new PoiSheet(this.workbook.getSheetAt(0)); This time it worked but not as expected. Can you please look into it. Hi, This is a known bug of the Spring Batch Excel library. It seems that the original project is abandoned and someone has created a fork that fixes this problem (check the bug report). I am going to update my Spring Batch examples during next summer, and I am going to solve this problem when I do so.
https://www.petrikainulainen.net/programming/spring-framework/spring-batch-tutorial-reading-information-from-an-excel-file/
CC-MAIN-2018-22
en
refinedweb