The Makings of an Art Thesaurus
Last time we looked at the five most “similar” words to a specific word from a corpus of 50 art reviews. Using the same 50 art reviews as our database of words, we have now created a more interactive tool that lets you compare the “similarity” of two words and outputs a score between -100 and +100.
VIEW THESAURUS
The dynamics are similar to our previous experiment. We use the GloVe vector representation of words to convert our database of words into vectors. We then measure how “similar” two words are by computing the distance between their vectors; in our case we use cosine similarity, which calculates the angle between the two vectors. We can think of two words as very similar if their vector representations have a similar orientation, which translates to a small angle. In the limit, when the vectors have exactly the same orientation, the angle is zero, and the cosine of zero is 1 (in our tool we have scaled this up by a factor of 100). When the angle is 90 degrees, the vectors are said to be “orthogonal” (most “dissimilar”, or uncorrelated) and the cosine similarity is 0. Finally, when the angle is 180 degrees, the vectors have exactly opposite orientations and the cosine similarity is -1.
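To make this concrete, here is a minimal Python sketch of the scoring just described, including the ×100 scaling our tool applies. The three-dimensional vectors are toy stand-ins, not real GloVe vectors (which typically have 50 to 300 dimensions).

```python
import numpy as np

def similarity_score(v1, v2):
    """Cosine similarity between two word vectors, scaled to [-100, +100]."""
    cosine = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return 100 * cosine

# Toy stand-ins for real GloVe vectors:
surrealism = np.array([0.8, 0.1, 0.3])
painting = np.array([0.7, 0.2, 0.4])
tractor = np.array([-0.2, 0.9, -0.5])

print(similarity_score(surrealism, painting))  # similar orientation: high score
print(similarity_score(surrealism, tractor))   # different orientation: low or negative score
```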
To understand what this means for our tool when comparing two words, it helps to know a bit about how the GloVe vector representations are calculated. An in-depth explanation is outside the scope of this post, but the intuition is as follows. For a given input word, e.g. “surrealism”, the algorithm looks at the words surrounding it across many examples and then estimates the probability of every other word in the database appearing near it; the word’s vector is made up of these probabilities. So for our example of “surrealism”, words like “tractor” or “coffee” are going to have low probabilities, whereas “art” and “painting” will have higher ones. Equally, taking another example, “impressionism”, this will also have low probabilities for “tractor” and “coffee” but high ones for “art” and “painting”, so the two vectors may have similar orientations and we will get a higher score in our tool!
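For readers who want to reproduce this kind of lookup themselves, the sketch below uses gensim’s KeyedVectors with pretrained GloVe vectors. It assumes the vectors have already been converted to word2vec text format (gensim ships a glove2word2vec conversion script); the file name is illustrative.

```python
from gensim.models import KeyedVectors

# Illustrative file name; assumes GloVe vectors already converted to
# word2vec text format, e.g. with gensim's glove2word2vec script.
vectors = KeyedVectors.load_word2vec_format("glove.6B.100d.word2vec.txt", binary=False)

# Words whose vectors point in a similar direction to "surrealism"
print(vectors.most_similar("surrealism", topn=5))

# Pairwise cosine similarity, as used by the tool (before the x100 scaling)
print(vectors.similarity("surrealism", "impressionism"))  # expected: relatively high
print(vectors.similarity("surrealism", "tractor"))        # expected: relatively low
```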
VIEW THESAURUS
We are interested in exploring the possibility of using this kind of tool to make art more accessible, in the same way that a classic thesaurus is used to expand one’s vocabulary. Could we use an algorithm and the vast resources of art-related text out there to create a tool that helps visitors of cultural institutions expand their knowledge? We hope so.
The Role of Artificial Intelligence in Education
Science fiction films no longer dominate the artificial intelligence realm. It is part of our everyday lives and our classrooms. We are all too familiar with tools such as Siri and Amazon’s Alexa, but this is just the beginning for AI in education; we should expect to see much more in the near future. The Artificial Intelligence Market in the US Education Sector 2017–2021 report suggests that experts expect AI in education to grow by 47.50% during the period 2017–2021. Below is a glimpse into some of the roles it will play in the classroom.
Automate Grading
Teachers will finally be able to focus more on teaching than grading. With AI, the role of grader can be passed along. AI technology that can automate the grading of multiple-choice materials already exists. As AI develops and becomes more intelligent, it is expected that the technology will one day be able to grade more than standardized assessments.
Support Teachers
AI will play a large part in helping teachers beyond grading, supporting them in many other ways such as communicating with students. For example, a college professor successfully used an AI chatbot as a teaching assistant to communicate with students all semester without the students knowing they were not talking to a human being.
Supporting Students
In the future, students will most likely have a lifelong AI learning companion. The next generation of children will have a companion that knows their personal and school history. It will, in turn, know each individual student’s strengths and weaknesses.
Meet Student Needs
Students with special needs will also be assisted by an AI-powered personalized learning companion that adapts materials to lead them to success. Studies are already showing positive results for AI teaching students social skills.
Allow Teachers to Act as Learning Motivators
AI will eventually take on a teaching role by providing students with basic information. It will change the role of teachers in the classroom. Teachers will move into the role of classroom facilitator or learning motivator.
Originally published at vasantramachandran.com on September 25, 2018.
10,000 Hours of Coding
According to Malcolm Gladwell and his book Outliers, the amount of time needed to become a world-class expert in a field is 10,000 hours of dedicated practice.
I want to reach 10,000 hours of programming within 3 years of starting. I figure at 10 hours a day for the next 3 years, I’ll be able to reach this milestone.
Why is this so important?
I really, really want to become proficient at coding, even more so than getting a job. I mean, hey, if I am an awesome programmer, do I even need to worry about ever finding a job?
Also, I will need coding chops to pick up machine learning and artificial intelligence systems.
Machine learning and artificial intelligence are complex systems to learn, and especially to master, and having great coding skills will help me learn these subject areas.
My study habits are already starting to pay off, as I’ve been dedicating at least 1 hour a day to machine learning and artificial intelligence. Right now, it’s a lot of news articles, tutorials and math review, but I am understanding the subject better every day. Since machine learning and artificial intelligence are such broad subjects, there’s a lot to learn, study and review.
Currently, my biggest interests in machine learning and artificial intelligence are automation and neural networks. Luckily, I have been able to meet automation engineers, data scientists, machine learning experts and artificial intelligence engineers by volunteering for the Machine Learning Society in San Diego.
It’s great to be able to speak to these experts one on one and be able to pick their brains. It’s a fantastic opportunity to have. To be able to learn more about how they got where they did is valuable information.
Hopefully, I’ll be able to turn one of those relationships into an internship which will turn into a fellowship at OpenAI, my dream job.
Voice recognition on the web using IBM Watson
IBM Watson is a machine learning platform that can be used for voice recognition on the web using JavaScript (among many other things). In this episode, I play around with it and integrate it with React.js.
💖 Survey: Please tell me your opinion on this episode
https://funfunfunction.typeform.com/to/sUvfTW?ep=wavo
🔗 Code from the episode
https://github.com/mpj/myvoicething
🔗 Bind & this video
https://www.youtube.com/watch?v=GhbhD1HR5vk
🔗 Discuss this video on the Fun Fun Forum
https://www.funfunforum.com/t/voice-recognition-on-the-web-using-ibm-watson/3920
🔗 Support the show by becoming a Patreon
https://www.patreon.com/funfunfunction
🔗 mpj on Twitter
https://twitter.com/mpjme
🔗 Help translate the show to your language
http://www.youtube.com/timedtext_cs_panel?tab=2&c=UCO1cgjhGzsSYb1rsB4bFe4Q
Week 1 Proximity between Humans and Robots
The Deep Valley of Uncanniness
Masahiro Mori’s “The Uncanny Valley” addresses the problem of designing things to be especially human-like, but not quite human. The valley occurs between two peaks, where an object becomes extremely human-like but is missing important features of “human-ness”. Mori used the prosthetic hand and the myoelectric hand as examples of the uncanny valley effect: though these hands look somewhat realistic, they lack details such as the warmth of a human touch. The uncanny valley addresses the issue of designing human-like robots and the negative effect this has on human affinity towards them. The graph of the uncanny valley shows that affinity peaks around the middle and at the end, which means human-likeness need not be achieved to increase affinity: robots, AI and cyborgs need not be human-like to increase our affection towards them. Mori urged robotics designers to capitalize on the first peak, as it is less likely to fall into the uncanny valley. The uncanny valley shows that such close proximity of robots to humans can create an uncanny effect and decrease our affinity towards the robot.
The Uncanny Valley — Masahiro Mori
R.U.R. and the Origin of Humanoid Robots
R.U.R. introduced the term “robot”, from the Czech “robota” (forced labor), for an artificially created person. The play revolves around artificial humans and their human counterparts. The story shows that the robots and people were very similar, as the robots were biologically created, but the robots lack what is referred to as a “soul”. This is similar to the missing nuances that create uncanniness in human-like robots. In Act 1, the conversation between Helena and Dr. Gall shows that the destruction of humankind began with the creation of more human-like robots, and that the consequences of such progress went unheeded as a warning by Dr. Gall.
“Helena: If only you knew how he hates us! Are all of your robots like this? All the ones you started to make . . . differently?
Dr. Gall: Well, they do seem somewhat more excitable, but what can you expect? They’re more like people than Rossum’s robots were.
Helena: And what about that . . . that hatred? Is that more like people?
Dr. Gall: (shrugs shoulders) Even that is progress.”
From R.U.R Act 1
Similar to Mori’s warning, R.U.R. exposes the larger threat of creating human-like robots, beyond just appearance and interaction. R.U.R. shows that decision-making artificial intelligence could destroy humanity because of the human flaws programmed into these robots. Thus, topics such as AI safety and ethics have become increasingly important to discuss.
Designing the Robot of the Future
In the 21st century, there are robots that follow Mori’s suggestion and others that do not. Examples of the former include Vector and Jibo, “smart” companion robots that bear no human resemblance. They do not have much “productive” functionality beyond answering easy questions that can be looked up on the internet; instead, these robots are programmed for affective behavior such as a cute voice, dancing and emotive eyes. I believe these designs distance themselves from humanoid robots by capitalizing on the first peak of the uncanny valley, and they need not be humanoid.
Jibo — https://www.jibo.com/
Anki Vector — https://www.anki.com/en-us/vector
On the other hand, a robot such as Sophia dives straight into the uncanny valley because it tries to mimic human facial expressions and body language yet still misses some of their nuances. Even though Sophia is just a research project to advance human-like expression for human-robot interaction, the questions remain: can such nuances of mimicry be achieved? If so, what are the implications of creating a humanoid robot? And what design decisions must be made in the future so that these negative effects do not harm humans in any way?
Sophia by Hanson Robotics — http://www.hansonrobotics.com/robot/sophia/
It’s Pretty Bleeping Scary.
And this is Sam Harris, an eminent neuroscientist, TED Talk veteran, and podcast hit in the Bay Area, who is tight with Elon Musk, and who I think is prescient in saying, in no uncertain terms, that when we do develop AGI (and we will, absolutely, likely in 30 years or so, although he doesn’t make any promises) it will be the end of our existence. He thinks that the AGI (whatever shape it takes) will be so vastly superior to humans that we will be to it as ants are to us, and we have zero problem killing ants whenever we want to, with no more compunction than taking out the trash. And I think he’s absolutely right, within our lifetime, and it’s pretty bleeping scary.
http://nepr.net/post/sam-harris-what-happens-when-humans-develop-super-intelligent-ai#stream/0
How can you save your financial services firm from foreclosures?
In recent years, non-performing assets (NPAs) have emerged as a major headache for banking organizations. According to a recent report, bad loans have touched a new high of ₹9.8 lakh crore. If the economic cycle doesn’t pick up fast enough, the amount of bad loans can increase exponentially in the near future. Although large corporate houses account for the major chunk of NPAs, loan defaults by SMEs are also a cause of concern for financial institutions. NPAs accumulate because businesses are not able to generate the expected cash flow from their operations and do not have any other resources to repay their loans. This can happen for a variety of reasons, such as:
Disruption in supply of raw materials
Sudden decrease in global price of finished products
Operational costs going out of control
Loss of money because of fines imposed by regulators
Bad economic environment
Business maintenance issue
To recover the value of these loans, banks have to start the process of taking possession of a mortgaged property when the mortgagor fails to keep up their mortgage payments. This process is known as foreclosure. Foreclosures are painful for banks. Suppose the mortgaged property is the land from which the business operates; it is then difficult for the bank to calculate how much it will cost to improve the structure or bring it up to habitable standards so as to sell the property and recover costs. As a result, it is important for banks to avoid foreclosures. If banks can predict borrowers at risk of foreclosure in advance, they can apply appropriate remedial measures to avoid this money-losing process.
Predictive analytics can solve this problem. A financial services company that offered an integrated suite of financial services, such as loans to small and medium enterprises and housing finance, wanted to reduce foreclosures on the loans against property (LAP) it lent to the SME sector through “Working Capital Loans” and “Loans against Property”. The client used the available data to build a machine learning model to predict which SMEs had a high probability of foreclosing their LAPs, i.e., to predict loan attrition. The data used came from a variety of sources.
The client collected this information for every customer over the past 1.5 years and removed customers who were less than 180 days into the system, since not enough data points were available for these relatively new customers and they had to be excluded from the analysis. With so many data points available, the client had to identify the relative importance of each one in relation to the foreclosure rate. The variable importance was therefore determined, and a predictive model gauging each customer’s probability of default was built, taking into account the relative importance of each variable/data point.
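The article does not include the client’s code, but a hedged sketch of this kind of model might look like the following. The file name, column names and the 30-day foreclosure label are illustrative assumptions, not the client’s actual features.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical customer-level data; the real engagement used ~1.5 years of
# history, with customers under 180 days on book excluded.
df = pd.read_csv("lap_customers.csv")
features = ["emi_to_income", "missed_payments", "loan_tenure_months",
            "utilisation_ratio", "days_on_book"]
X, y = df[features], df["foreclosed_within_30d"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Relative importance of each data point, as described above
for name, importance in sorted(zip(features, model.feature_importances_),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.3f}")

# Probability of foreclosure per customer, used to flag risk a month ahead
foreclosure_risk = model.predict_proba(X_test)[:, 1]
```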
Finally, the client was able to identify SMEs at risk of foreclosure one month in advance and apply the relevant remedial measures to reduce foreclosures.
The client also harnessed the data to find out which city tiers and agents had the highest number of incoming foreclosure requests, and analysed the effect of the repo rate and the Sharpe ratio on the foreclosure rate. It was observed that the actual foreclosure rate for mortgages increased when the repo rate decreased, and vice versa.
By now I’m sure you want to save your organisation from the painful process of foreclosures. Don’t worry, we will help you out. Write to us at [email protected] and we will figure out the right course of action for you.
Georgia Tech & Google Brain’s GAN Lab Visualizes Model Training in Browsers
Georgia Tech and Google Brain researchers have introduced the new interactive tool GAN Lab, which visually presents the training process of the complex machine learning model known as a Generative Adversarial Network (GAN). Even machine learning newbs can now experiment with GAN models using only a common web browser.
This real-time visualization tool is built with Google’s new machine learning JavaScript library, TensorFlow.js. Users can observe, step by step, how a GAN learns the distribution of points in a 2D (x, y) space. Readers interested in the software details can download GAN Lab’s open-sourced code from GitHub.
Core components of a GAN framework include a generator and a discriminator. The generator creates fake data instances and the discriminator attempts to distinguish the fakes from real data. Both components improve themselves as the model is trained: the more they compete with each other the more realistic-looking outputs the generator will produce.
Schematic of a commonly used GAN architecture
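GAN Lab itself runs on TensorFlow.js in the browser, but the two-player setup it visualizes can be reproduced outside the browser. Below is a minimal PyTorch sketch of a GAN trained on 2D points, analogous to GAN Lab’s drawn distributions; the architecture and hyperparameters are illustrative choices, not GAN Lab’s defaults.

```python
import torch
import torch.nn as nn

# "Real" data: points from a small 2D Gaussian blob, a stand-in for the
# distribution a GAN Lab user would select or draw.
def real_batch(n=128):
    return torch.randn(n, 2) * 0.3 + torch.tensor([1.0, 1.0])

G = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))  # noise -> fake point
D = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))  # point -> real/fake logit

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(G.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(2000):
    # Discriminator: label real points 1, generated points 0
    real = real_batch()
    fake = G(torch.randn(128, 2)).detach()
    d_loss = (loss_fn(D(real), torch.ones(128, 1)) +
              loss_fn(D(fake), torch.zeros(128, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: try to make the discriminator label its samples as real
    fake = G(torch.randn(128, 2))
    g_loss = loss_fn(D(fake), torch.ones(128, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```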
GAN Lab’s interactive features enable users to conduct and control experiments. Users can start a GAN training session by selecting a built-in sample distribution or by drawing one themselves. An attendant animation displays the function of each model module in real time. Users can manually play back the animation in slow motion for more detailed analysis.
Users can also run training one iteration at a time. Hyperparameters such as the number of hidden layers and neurons in the network, the loss function, the optimization algorithm and the model learning rate are all configurable for GAN model training.
Training a simple distribution of datapoints using GAN Lab
After hitting the “play” button, GAN Lab begins animating the entire process of input-to-output transformation from noise to fake samples. The fake samples’ positions and distributions are continuously updated and they begin to overlap with real samples.
The discriminator’s decision boundary is presented in the layered distributions view as a 2D heatmap. When model training begins, the discriminator can easily classify real and fake samples, and most data samples fall into correspondingly coloured regions.
A good performance of the discriminator interpreted through a 2D heatmap
GAN Lab’s layered distributions view also shows how the fakes move toward the generator’s gradient direction, which is determined by the current location of fake samples in the discriminator’s classification. As training progresses, the loss function value decreases and model accuracy increases.
Fake sample movements directed by the generator’s gradients
Responding to the generator’s improvements, the discriminator constantly updates its decision boundary to identify fakes. As a GAN model approaches optimization, all samples will appear in a region where the heatmap is mostly gray, suggesting the fake samples are so realistic that the discriminator can barely tell the difference between them and the real samples.
GAN Lab was co-developed by Minsuk Kahng, Nikhil Thorat, Duen Horng Chau, Fernanda Viégas and Martin Wattenberg. Their paper GAN Lab: Understanding Complex Deep Generative Models using Interactive Visual Experimentation is available at arXiv and has been accepted by the respected academic journal IEEE Transactions on Visualization and Computer Graphics (TVCG).
Source: Synced China
Localization: Tingting Cao | Editor: Michael Sarazen
Follow us on Twitter @Synced_Global for more AI updates!
Subscribe to Synced Global AI Weekly to get insightful tech news, reviews and analysis! Click here !
We are honored to have Ning Jiang, CTO of OneClick.ai, as our guest speaker in DTalk E5 “AutoML — The Future of AI”. Sign up at https://goo.gl/dY9Dnw and learn about how AutoML is changing the AI industry!
“No Free Lunch” Theorem (NFL)
A brief look at Machine Learning
The popularity of machine learning has been on the rise since the mid-80s, as it has become more accessible through advances in frameworks, libraries and programming languages. It is therefore interesting to see the exponential growth of online interest since around 2015, as shown by the graph below.
Keyword searched on Google.com “Machine Learning”
To understand machine learning in a basic sense, the diagram below depicts the flow of information: a large dataset/input is required for the algorithm of choice, which applies statistical, stochastic and probabilistic methods to extract meaning or make future predictions. The data itself is made up of “features”, which we can think of as structure for the algorithm to make sense of. The model, or learning algorithm, is essentially a “black box”, meaning that it is currently impossible or extremely difficult to understand its inner workings. The training examples, a simple set of data, are used to test the model. Once the user is satisfied with the output the algorithm gives, it is then better placed to accept the large dataset and give us the “honey from the hive”.
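As a concrete (if toy) illustration of that dataset-to-model flow, the following scikit-learn snippet trains a “black box” on labeled features and tests it on held-out examples; the dataset and algorithm here are arbitrary choices.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)  # a dataset of "features" with known labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier().fit(X_train, y_train)  # the "black box"
print(model.score(X_test, y_test))  # accuracy on held-out examples
```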
The “No Free Lunch Theorem”
The “No Free Lunch” (NFL) theorem was introduced to better understand the complex nature of machine learning models, exploring “the connection between effective optimisation algorithms” (Wolpert & Macready, 1996). Put simply: which of these methods is best placed to solve a particular real-world problem? The theorem itself comprises two approaches, one focusing on the performance limits of an algorithm tested over a number of problems/datasets, and the other focusing on one dataset and a number of algorithms to see which gives the best results.
Findings
Overall, their findings showed that the initial question posed about the algorithms was inappropriate, whether for datasets collected over a certain timeframe or taken from a snapshot in time. They discovered that “the average performance of any pair of algorithms across all possible problems is exactly identical” (Wolpert & Macready, 1996). This means it is impossible to choose one algorithm that is best at solving all problems; reality is just not that simple. A number of factors can affect the real-world problem being solved, so it is a case of trial and error to see which algorithm best fits the data being analysed and the problem you want to solve.
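That trial-and-error point can be illustrated empirically. In the hedged sketch below, two standard algorithms are cross-validated on two synthetic datasets; the rankings can differ from one dataset to the next, which is exactly what the theorem leads us to expect. The datasets and algorithms are arbitrary choices.

```python
from sklearn.datasets import make_moons, make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

datasets = {
    "moons": make_moons(n_samples=500, noise=0.3, random_state=0),
    "linear-ish": make_classification(n_samples=500, random_state=0),
}
algorithms = {"kNN": KNeighborsClassifier(), "naive Bayes": GaussianNB()}

for data_name, (X, y) in datasets.items():
    for algo_name, algo in algorithms.items():
        score = cross_val_score(algo, X, y, cv=5).mean()
        print(f"{data_name:>10} | {algo_name:>11} | accuracy {score:.3f}")
# Which algorithm "wins" is an empirical question that can change with the
# dataset -- there is no free lunch.
```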
Course Reflection
Throughout the Natural Computing module I took at Goldsmiths, University of London, I found this to be true of the problems we were asked to solve, in particular when using genetic algorithms, described in another blog post of mine. Here the processes were quite flexible in their approach to a problem. I could also see a good way to mix certain processes from one algorithm into another to create hybrid swarm intelligence algorithms, which would produce better accuracy, faster.
References
https://en.wikipedia.org/wiki/No_free_lunch_theorem
http://www.no-free-lunch.org/WoMa96a.pdf
Wolpert, D. H. & Macready, W. G., 1996. No Free Lunch Theorems for Search, NM: The Santa Fe Institute.
| “No Free Lunch” Theorem (NFL) | 3 | no-free-lunch-theorem-nfl-10e69f1842f6 | 2017-12-18 | 2017-12-18 12:54:46 | https://medium.com/s/story/no-free-lunch-theorem-nfl-10e69f1842f6 | false | 521 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Terry Clark | null | 22b3a2c5decf | iamterryclark | 38 | 35 | 20,181,104 | null | null | null | null | null | null |
Alastair Majury on Common Interview Questions for Data Scientist Positions
Alastair Majury is an experienced Senior Business Data Analyst
Common Interview Questions for Data Scientist Positions
The field of data science is rapidly growing. In fact, there are 6.5 times more data science jobs today than there were only five years ago, according to data from LinkedIn. That should come as no surprise, given how technologically-driven our world is becoming. This is fantastic news if you are looking for a position in data science, because there are so many options available. However, if you are a young, recent graduate, you are probably worrying about finding that dream data science job. And when you’re just beginning your professional career, you have no clue what to expect for your first interview.
So let’s say you’ve landed an interview for a data science position you really want. How do you prepare? Well, here are a few of the most common questions to expect for a data science position.
Describe a project that you’ve worked on in detail
This is your chance to really impress your potential employer with any outstanding projects that you’ve worked on. Whether these projects have been for your education or real-life examples, make sure to take detailed notes on your experience. Practice telling this story as many times as possible so that you have it drilled in your head. Make sure to cover the preparation, strategy and execution of your data science project as thoroughly as possible.
Can you share your process of searching for data and findings to a team?
When the average person thinks of data science, they don’t usually assume that you have to be a great communicator, but you do. While a large portion of data science does, obviously, revolve around data, that’s only one half. Data scientists must be able to communicate their findings in order to achieve company goals. This is precisely why you should be focusing on your communication abilities. Make sure that you can not only find data, but also share that data in an insightful and meaningful way.
What are some of your favorite data science tools and techniques?
Data scientists have the luxury of a wide variety of tools at their disposal. Make sure to become familiar with most, if not all, of them. Your employer is simply testing your process and whether you are familiar with a data scientist’s toolbox. If you have any particular methods that you find successful, make sure to mention them, and feel free to provide an example of how you use these strategies and tools as well.
Preparing for any job interview is difficult. Hopefully this blog can ease your nerves and allow you to exude the confidence and ability that will land you that job.
Good luck!
Alastair Majury resides locally in Dunblane, and is an IT Consultant working across the country. Alastair is also a volunteer officer at the local Boys’ Brigade company, a charity which focuses on enriching the lives of children and young people, and building a stronger community.
Alastair Majury is also a highly experienced and respected Senior Business Analyst / Data Scientist with a proven track record of success planning, developing, implementing and delivering migrations, organisational change, and regulatory, legislative and process improvements when providing his Senior Business Analyst / Data Scientist services for global financial organisations, covering the Challenger Bank, Retail Banking, Investment Banking, Wealth Management, and Life & Pensions sectors.
The Data Yogi: Data Science = Empathy!?
This is one of the blogs that I have really enjoyed writing. The concept of empathy, not just in Data Science but life in general, has really been overlooked. I would consider empathy a skill rather than a talent.
I was very astonished to come across the Japanese concept of “sa-shi-su-se-so of flattery.” Japanese women are taught to use certain phrases to please men and potentially score a future husband. For example:
· Su: Sugoi! — Amazing! Awesome! Wow!
· Se: Sensu ii desu ne! — You have good taste!
· So: Sou nan da! — Really? I see! Is that so?
Not to promote sexism, but just to show an example of how empathy is viewed in another culture.
Yes, empathy is not to be taken lightly, especially in this era where humanity is at the ne plus ultra of developing super humans.
Coming to Data Science, how important is empathy in slicing, dicing & churning data?
“Putting yourself in customer’s shoes. If the model says call the customer twice a day, what would you do? Call and annoy him? Or use empathy? Would you like to receive what you propose to the customer? — Krishna Kumar CS, Chief Analytics Officer and Director, Rainman Consulting”
Here is a snippet of an interesting conversation that I had with Mr Krishna Kumar about empathy in Data Science.
KK: “Do unto your customers as you would want done unto you”
Me: “Jesus would have been an awesome Data Scientist”
KK: “Remember, he was also crucified”
Me: “True that, being empathetic isn’t easy. Especially in Data Science where you are torn between your model and your instinct”
Empathy need not mean weakness, as is widely taught these days. Here is a snippet from the widely watched series “Suits”
https://www.youtube.com/watch?v=yMJzThNPkpA
Fact or Fiction? An intelligent blend of both?
A time has come to be wary of analysts who ignore the cultural ramifications of technology and present facts divorced from an emotional context. The need of the hour is to empathize with the user, understanding a problem’s wider factors in order to arrive at the best solution.
With projects ranging from spatial analytics (using sensors and cameras to help retailers read a room and optimize customer flow) to affective computing (designing interfaces that can respond to facial and vocal cues), blending the qualitative and the quantitative is more in demand than ever.
“Humans are social beings and, to live in the society we need kindness and empathy. That’s what defines our culture and differentiates us from animals. Machines could be as smart as us in 10 to 20 years. If they are not empathetic humanity could face troubles. Machine Learning is a powerful weapon which needs to be handled with care. E.g.: using ML in building war machines, using analytics to find sensitive people and marketing those products which can make them addictive etc. — Shivaram KR, CEO, Curl Analytics “
There is a rising need for demystifying certain concepts. People hear about artificial intelligence and machine learning taking our jobs but no one is talking about the more tangible problems in the field that need to be solved, like how people can’t do the math they need to keep their jobs.
Speaking the language of the client rather than your own and being collaborative is something that only humans can do. Not even one Data Scientist has come out in the open and declared that machines can replace human intelligence completely someday. They are talking about humanizing technology.
Here are 3 reasons why technology needs to become more human:
Convenience: Consumers are affected by information overload. They want personalized, quick and easy access to the information or results that are relevant for their specific situation.
Simplification: Technology is supposed to reduce the complexity in our daily lives, which is partly caused by technology itself. Advancing technology will continue to become more human-oriented to help us to simplify, assess and filter.
Greater inclusion: The speed at which technology is advancing bears the risk of excluding less tech-savvy people from its benefits. For example, people who don’t know how to operate a search engine, smartphone or apps will have difficulty accessing information that may be relevant to them. Google is known for its focus on useful and human-friendly technologies. It uses a machine-learning artificial intelligence system called “RankBrain” and is constantly taking steps to refine its algorithms to interpret and understand the searches that people submit.
Human subjectivity is not poison. It is the antidote.
Human judgment is not a contaminant that can be removed from data. Data empathy makes for better, smarter, more powerful data science.
Consider an example where the stakeholders don’t necessarily have statistical know-how when it comes to dealing with data. More often than not, their decisions are driven by gut feel based on what the data tells them. This is called data-inspired decision making. These gut feelings about data can be right or wrong, and the stakeholders learn from them; the result becomes experiential knowledge, or what economists call tacit knowledge.
Travis Kalanick faced serious gales when Uber introduced surge pricing. The controversial CEO’s move seemed sure to anger and alienate. He was called crazy. But Uber stuck to its guns and the laws of supply and demand, and modified its surge policy when appropriate. Alors! Dynamic pricing is now an accepted aspect of its business, and even companies like Disney are experimenting with the concept.
The average client may be daunted by the science, but he understands the stories.
Visualizations help a lot. Humans react to stories and pictures more than to numbers. Data storytelling is a beautiful blend of Data Analytics & effective communication.
We, at the Institute of Product Leadership, are among the pioneers in understanding the importance of empathy in Data Science, and we have designed our Data Science programs to align with this concept. In fact, we have an exclusive and immersive Data Science Skill Bootcamp dedicated to weaving wonderful stories from data with empathy towards client requirements.
We want to build a community of empathetic Data Science Professionals and will stop at nothing to make this happen. To this end we bring to you a webinar on this important aspect of Data Science. Book your slot NOW!
Writing for Fast Cash!
The following are three quick tips for earning money by writing:
Be in it for the right reasons. This post will get a disproportionate number of views because everyone thinks they can write and phrases like Fast Cash are catnip to the masses.
Realize good writing is inversely correlated with speed. Did you really think you could just vomit into Word and your PayPal would explode?
Stop clicking on these links. The reason clickbait exists is because people literally can’t help themselves. As someone with a shred of conscience, I feel bad for those who truly think one of these links will eventually pay off. But as someone with a shred of duty to his fellow man, I must tell you that they will not. Ever.
If you want to be a writer, you have chosen an arduous journey; one with payoffs so infrequent and small as to discourage even the most passionate among us. Those not blessed with iron will and Armstrongian endurance have a better chance playing the lottery. Which you shouldn’t do either.
Comparison and Choice: The Basis for Intelligence
Artificial. And, absolute.
In order to make a choice, we have to do a comparison. And, also, vice versa. So, first off, we have to ‘know’ what (and how) a choice (comparison) is (made). It looks like this:
Comparison.
Choice.
What we have, here, is an ambiguous circle. Circle as a noun. And circle as a verb.
Ambiguous circumference and diameter.
Circle as a noun.
Circle as a verb.
Meaning, we rely on ambiguity in order to do a comparison. And, also, make a choice.
Zero and one. One and two.
Meaning, we can ‘name’ a zero and one, one and two. Proving ‘ambiguity’ is the ‘basis’ point for everything.
Basis. Point.
Therefore, X and X is X and Y (X and X’). Which explains, succinctly, both comparison, and choice.
X and X.
X and Y.
Meaning, you cannot have X without Y (because you cannot have circumference without diameter). Explaining how you compare. And why you (must) choose (X and Y) (X or Y).
And that’s all there is to intelligence. Artificial and-or real. Relative and-or absolute.
Conservation of the circle is the core dynamic (intelligence) in Nature.
https://www.amazon.com/Circularity-Natures-Constant-Yang-Zero/dp/1721120483
Jargon Buster “cybernetics”
http://www.robolution.center/blog/wp-content/uploads/2017/10/2.jpg
A.I. drives China techfins into car insurance — and beyond
Car insurance, the first place China’s tech giants are applying a range of A.I., is their vehicle of choice for crashing into all manner of financial services.
Who’s got a smartphone?
China’s biggest tech companies are all diving into the business of insuring motor vehicle accident damage. Using A.I. — artificial intelligence — to provide protection against car accidents may seem marginal to financial services, but it is the vanguard for China’s ‘techfin’ companies to reshape insurance, investments and banking in their image.
Of course these tech companies have been involved in insurance and other financial services such as payments, but the sophistication gained by combining different forms of A.I. is taking their offering to a new, expansive level.
Ant Financial, Ping An Technology, Tencent and Zhong An, as well as Chinese developer SenseTime, all presented similar services for assessing motor vehicle damage in Hong Kong last week.
To read the full article, visit us at http://www.digitalfinancemedia.com/
DigFin is for smarter finance and tech news, keeping you ahead of others, and it’s FREE!
Follow DigFin on Twitter | Like us on Facebook | Subscribe to our Mailing List
AI Idea to Implementation: The 90-Day Model
When thinking about artificial intelligence, many companies get stuck in the same places. They may be AI curious, but struggle with use cases, staffing expertise, and building a business case. That’s why we believe in practical innovation through AI.
The KUNGFU team designed its Deep Innovation process with a progressive approach to AI. The goal is getting the first win on the board that delivers real ROI and develops the trust needed to support a long term relationship.
The Deep Innovation process can take a project from idea to implementation in 90 days, whether the client developed the idea or it came out of a KUNGFU Innovation Workshop.
Week 1: Innovation Workshop to Define Project
If a company doesn’t have a clear idea of the project they want developed, our Innovation Workshop is a fast and efficient method for generating and developing ideas and prioritizing them based on technical risk and business objectives. These workshops are typically three to five days and include a carefully selected group of stakeholders from the client along with AI experts and a professional facilitator from KUNGFU. It is not uncommon to also include an outside expert with relevant domain and technical expertise to help increase the opportunity for bold new ideas to emerge. Coming out of an Innovation Workshop, a company will have definition on one project, with stakeholder buy-in, as well as potential projects to build out in the future.
Weeks 2 to 5: Proof-of-Concept to De-Risk Project
The second step of the process is to build a proof-of-concept machine learning model to prove feasibility and validate that the available data can support it. This is important because if the project is poorly defined, the data is dirty or unavailable, or the infrastructure can’t support the effort, the overall project will fail. It is common for a proof-of-concept to be developed in two to four weeks of experimentation, engineering, data wrangling and data cleaning.
Weeks 6 to 11: MVP Development
With the confidence of a successful prototype, the next step of the process is to build a minimum viable product (MVP). During MVP development, the simplifications made in the prototype’s machine learning model are built out into a full implementation, and the full training data set is prepared.
Simultaneously, the interface is defined, refined and developed, whether it is a machine-to-machine interface or a human interface built in collaboration with the end users. Likewise, the infrastructure to support the solution is built out, whether it is cloud-based hosted servers or on-site servers. Because the project was selected and scoped to deliver a quick win, this phase is intended to take three to six weeks.
Week 12: Testing and Deployment
With the MVP completed, the next step is to perform final quality assurance testing and put it into production, with a goal of this phase taking a week. With a properly scoped project and fast and collaborative decision making, it’s possible to go from idea to deployed project in 90 days.
Final Thoughts
If you are considering jumping right into the AI pool, or just stuck petrified at the edge of the diving board, consider a quick-win strategy. Our Deep Innovation process helps our clients first I.D. the use cases and then build the business case. We offer the expertise so you can avoid the staffing challenges. Ideas to action happen fluidly with minimal downtime for handoff so a project can go from idea to implementation in 90 days. It’s a good place to start.
KUNGFU is an AI consultancy that helps companies build their strategy, operationalize, and deploy artificial intelligence solutions. Check us out at www.kungfu.ai
| AI Idea to Implementation: The 90-Day Model | 5 | ai-idea-to-implementation-the-90-day-model-10ef5be56acd | 2018-03-17 | 2018-03-17 18:15:53 | https://medium.com/s/story/ai-idea-to-implementation-the-90-day-model-10ef5be56acd | false | 640 | KUNGFU.AI is an AI consultancy that helps companies build their strategy, operationalize, and deploy artificial intelligence solutions. | null | null | null | KUNGFU.AI | null | kung-fu | ARTIFICIAL INTELLIGENCE,AUSTIN TEXAS,CONSULTING,DATA SCIENCE,AI | kungfuai | Startup | startup | Startup | 331,914 | Stephen Straus | Managing Partner, KUNGFU.AI | 633e059511d8 | ssaustin65 | 119 | 95 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-03-21 | 2018-03-21 11:05:43 | 2018-03-16 | 2018-03-16 11:51:53 | 1 | false | en | 2018-03-21 | 2018-03-21 11:09:38 | 6 | 10efdf2f164 | 2.441509 | 0 | 0 | 0 | Stephen Hawking was a bright light for people who were searching in the darkness. The man in his wheel chair has explored and enjoyed life… | 5 | Will Stephen Hawking Predictions About AI Be True?
Stephen Hawking was a bright light for people who were searching in the darkness. The man in his wheelchair explored and enjoyed life more than most of us ever will. Even though he has gone to a better place, his contributions to mankind will not be forgotten.
It is indeed true that his greatest discoveries,
Gravitational Singularity
Black Hole Dynamics
Hawking Radiation
Dawn of Galaxies
Wave Function of the Universe
have changed the course of astrophysics.
However, Stephen Hawking’s persona was not limited to physics. He was a man of humor and showcased it on multiple occasions.
For instance,
On the renowned HBO show Last Week Tonight, Stephen was asked a serious question by the host John Oliver.
“You’ve stated that there could be an infinite number of parallel universes,” “Does that mean there’s a universe out there where I am smarter than you?”
And this was his reply
“Yes,” “And also a universe where you’re funny.”
Controversy has always been a part of Stephen’s glorious life. One of the most renowned examples was his comment about artificial intelligence, made at a time when people were expecting AI to be the biggest breakthrough of this century.
He said to BBC that,
“The development of full artificial intelligence could spell the end of the human race.”
The theoretical physicist, who lived with amyotrophic lateral sclerosis (ALS) for decades, ignited a big controversy back then with this statement.
The above statement was not made by Stephen alone; he was backed up by Elon Musk, Bill Gates and many other leading thinkers.
Elon Musk, the man behind Tesla, called AI an “existential threat” to humanity.
But is it true? Will AI (Artificial Intelligence) wipe out humanity once it has reached singularity?
For now, it is expected that automation will soon gain dominance by replacing humans in driving cars and with AI shopping assistants, but the notion that machines could take over the world is still considered far-fetched.
However, havoc can be caused if there is a glitch in these automated systems. For instance, the stock market crash that happened in 1987 was attributed mainly to a computer error.
The most important flaw of computer- or hardware-based systems is that errors can sometimes go unnoticed, which can cause a devastating effect.
However, not everyone is against AI. Charlie Ortiz, the head of AI at Nuance Communications, said that all these assumptions are overexaggerated and that he does not see AI as a threat whatsoever.
Pascal Kaufman, the founder of Starmind, also stated that those making alarming statements about AI are not working extensively on it.
He added that,
“When we start comparing how the brain works to how computers work, we immediately go off track in tackling the principles of the brain,” “We must first understand the concepts of how the brain works and then we can apply that knowledge to AI development.”
At present, AI is helping humankind through machine learning, through which computers can recognize patterns even in large amounts of data.
The latest news is that Google is collaborating with a medical organization to develop an AI program that can diagnose cancer.
Above all, remember that any machine is only as good as the data we feed it. So the focus should be on feeding good data to the system rather than worrying about something that may or may not happen.
Originally published at www.probytes.net on March 16, 2018.
| Will Stephen Hawking Predictions About AI Be True? | 0 | will-stephen-hawking-predictions-about-ai-be-true-10efdf2f164 | 2018-03-21 | 2018-03-21 11:09:39 | https://medium.com/s/story/will-stephen-hawking-predictions-about-ai-be-true-10efdf2f164 | false | 594 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Probytes Software | null | 9e287b6f7ee3 | probytessoft | 0 | 4 | 20,181,104 | null | null | null | null | null | null |
0 | $ # get solr from "http://lucene.apache.org/solr/mirrors-solr-latest-redir.html"
$ # unzip the compressed file
$ # simply run bin/solr start
$ bin/solr start
*** [WARN] *** Your open file limit is currently 4864.
It should be set to 65000 to avoid operational disruption.
If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in your profile or solr.in.sh
*** [WARN] *** Your Max Processes Limit is currently 709.
It should be set to 65000 to avoid operational disruption.
If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in your profile or solr.in.sh
Waiting up to 180 seconds to see Solr running on port 8983 [-]
Started Solr server on port 8983 (pid=13231). Happy searching!
$ #let's say you have a disk mounted as "/mnt" and you downloaded and installed Solr in this disk
$ cd /mnt
$ # you might have a parent folder here, let's say "solr6"
$ cd solr6
$ pwd
/mnt/solr6
$ # you will have a folder here called "data" that has your cores
$ cd data
$ pwd
/mnt/solr6/data
$ # all your cores will be listed here (let's say songs and artists)
$ ls
songs artists
$ cd songs
$ pwd
/mnt/solr6/data/songs
$ # in this core you will have these 2 folders primarily:
$ # conf/ data/
$ # conf/ has all your configurations
$ # such as "solrconfig.xml" and "schema.xml"
$ # data/ has your actual indexes
$ # we will be concerned with conf/ folder only, usually.
<requestHandler name="/replication" class="solr.ReplicationHandler" >
<!--
For enabling Master just do the following. This specifies the replication to slaves. The files in "conf/" folder that will be replicated are mentioned in confFiles. This is important. Let's say we change our schema on master, we do not want to change schema on each of the slaves. New schema can be simply replicated.
-->
<lst name="master">
<str name="replicateAfter">commit</str>
<str name="replicateAfter">startup</str>
<str name="confFiles">schema.xml</str>
</lst>
<!--
For enabling Slave just uncomment the following and comment out the master block above. This slave syncs data from master every 1 hour.
-->
<!--
<lst name="slave">
<str name="masterUrl">http://solr_master:8080/solr/songs</str>
<str name="pollInterval">01:00:00</str>
</lst>
-->
</requestHandler>
| 12 | null | 2018-06-11 | 2018-06-11 10:36:32 | 2018-06-12 | 2018-06-12 17:47:33 | 1 | false | en | 2018-06-13 | 2018-06-13 09:37:21 | 3 | 10f035805227 | 3.05283 | 0 | 0 | 0 | Setup Solr | 5 | Search and Intelligence — 2
Setup Solr
In the previous story, I gave a brief, beginner-level introduction to Solr. The purpose of that story was to acquaint you with NoSQL full-text search technologies. In this part of the story I am going to elaborate on a production setup of Solr.
Let’s download Solr and start it (make sure you have an appropriate version of Java installed).
It is good practice to set SOLR_HOME so that you run the server where appropriate disk space and permissions can be set. SOLR_HOME defines the directory where the configurations of cores and their data reside. It is also helpful in case you decide to upgrade Solr: the data, schema and configurations can be migrated very easily. To set SOLR_HOME, we will edit the bin/solr.in.sh file: find the variable SOLR_HOME and set it to /data/solr-home. Now create this directory with sudo mkdir /data/solr-home. We need to define solr.xml here; this file defines the default values the Solr server starts with.
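For reference, a minimal solr.xml for a standalone setup can be as small as the sketch below. The exact elements and defaults vary between Solr versions, so treat these values as illustrative assumptions rather than required settings.
<solr>
  <!-- illustrative values only; adjust to match your Solr version -->
  <solrcloud>
    <str name="host">${host:}</str>
    <int name="hostPort">${jetty.port:8983}</int>
  </solrcloud>
</solr>
With solr.xml in place, start the Solr server now.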
This starts the Solr server in the background on port 8983 (to start it on port 8080, just pass that port: bin/solr start -p 8080). After starting the Solr server we will create a core (we could have created a core earlier too; this is to emphasize that a core can be created without restarting the Solr server). We can create the core with bin/solr create -c songs, but it is very convenient to use the Solr UI. We will discuss the Solr UI in a later part of this story.
In the previous story, I showed you a diagram of the infrastructure setup of a production-grade Solr. Let me put that image here.
Production setup of Solr
In Solr, the directory structure is as follows:
All configurations pertaining to a Solr core are defined in solrconfig.xml. It is fairly straightforward to set up a Solr master and a slave. All we need to do is the following in solrconfig.xml:
For the purpose of further discussion on Solr setup and query debugging, we will not require slave servers. Let’s have a look at the Solr UI, a powerful feature with admin capabilities and analysis tools. To reload the new core, go to http://solr_master:8080/solr, or http://localhost:8080/solr if you have a local setup. We will continue with the Solr UI in the next post.
To be continued…
| Search and Intelligence — 2 | 0 | search-and-intelligence-2-10f035805227 | 2018-06-13 | 2018-06-13 09:37:22 | https://medium.com/s/story/search-and-intelligence-2-10f035805227 | false | 756 | null | null | null | null | null | null | null | null | null | Solr | solr | Solr | 153 | Vivek | An enthusiastic software engineer powered with machine learning skills. | 5fd4f845caca | i.m.vivek | 7 | 3 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 1f35b6f451e8 | 2018-07-09 | 2018-07-09 10:53:07 | 2018-07-09 | 2018-07-09 15:29:13 | 0 | false | en | 2018-07-25 | 2018-07-25 08:58:36 | 6 | 10f1b7fab93b | 0.686792 | 0 | 0 | 0 | null | 5 | Webography for 4 dummies to make it in machine learning — Chapter 20, Scene 1
Docker: no matching manifest for windows/amd64 in the manifest list entries
From this article: Linux vs. Windows Containers: What's the Difference? With Docker container support now available for…stackoverflow.com
LUIS: Language Understanding Intelligent Service
Language Understanding Intelligent Service (LUIS) offers a fast and effective way of adding language understanding to…www.luis.ai
Creating new file with touch command in PowerShell error message
I have a directory on my desktop created using PowerShell, and now I'm trying to create a text file within it. I did…stackoverflow.com
Equivalent of Linux `touch` to create an empty file with PowerShell?
In PowerShell is there an equivalent of touch? For instance in Linux I can create a new empty file by invoking: touch…superuser.com
For the DGSE, no backdoor is needed to access encrypted content
According to the intelligence service’s technical director, Patrick Pailloux, secure communications can be…www.nextinpact.com
Liver cancer: Guerbet teams up with IBM’s artificial intelligence
Guerbet is moving into artificial intelligence. The French pharmaceutical group, a specialist in solutions for…www.lesechos.fr
| Webography for 4 dummies to make it in machine learning — Chapter 20, Scene 1 | 0 | webography-for-4-dummies-to-make-it-in-machine-learning-chapter-20-scene-1-10f1b7fab93b | 2018-07-25 | 2018-07-25 08:58:36 | https://medium.com/s/story/webography-for-4-dummies-to-make-it-in-machine-learning-chapter-20-scene-1-10f1b7fab93b | false | 182 | We offer contract management to address your aquisition needs: structuring, negotiating and executing simple agreements for future equity transactions. Because startups willing to impact the world should have access to the best ressources to handle their transactions fast & SAFE. | null | ethercourt | null | Ethercourt Machine Learning | ethercourt | INNOVATION,JUSTICE,PARTNERSHIPS,BLOCKCHAIN,DEEP LEARNING | ethercourt | Docker | docker | Docker | 13,343 | WELTARE Strategies | WELTARE Strategies is a #startup studio raising #seed $ for #sustainability | #intrapreneurship as culture, #integrity as value, @neohack22 as Managing Partner | 9fad63202573 | WELTAREStrategies | 196 | 209 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | 98e37200303a | 2017-12-11 | 2017-12-11 15:28:50 | 2017-12-11 | 2017-12-11 15:31:56 | 0 | false | en | 2017-12-11 | 2017-12-11 15:38:37 | 0 | 10f1d94e3985 | 0.615094 | 0 | 0 | 0 | Team Members : Ahmet Oruç, Sergen Topçu, Denizkaan Aracı | 1 | WEEK 3-Cities and Restaurants
Team Members : Ahmet Oruç, Sergen Topçu, Denizkaan Aracı
Choice of Algorithm
We provided information about the dataset we will use in our previous blog post. In this article, we will talk about the algorithms and the project plan we will use. We decided to use the “Cities and Restaurants” dataset in our project to build a prediction algorithm. We did research on it and decided that the most suitable machine learning algorithms were logistic regression, k-NN, and naive Bayes. We will then implement the algorithms we have decided to use in the next step and calculate their accuracy.
According to these accuracy scores, we will observe and choose the algorithm that gives the highest accuracy, along the lines of the comparison sketched below.
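To make the comparison concrete, here is a minimal sketch (our own illustration, not the project code) of how the three candidate algorithms could be scored with scikit-learn; the Iris data is only a stand-in for the restaurant dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

# X: feature matrix, y: class labels (placeholder data)
X, y = load_iris(return_X_y=True)
models = {
    "logistic regression": LogisticRegression(),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "naive Bayes": GaussianNB(),
}
for name, model in models.items():
    # 5-fold cross-validated accuracy for each candidate algorithm
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(name, scores.mean())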
We will implement at least one of these algorithms by the next blog post, where we will share the accuracy of the algorithm we implemented.
See you in our next blog post.
| WEEK 3-Cities and Restaurants | 0 | week-3-cities-and-restaurants-10f1d94e3985 | 2017-12-11 | 2017-12-11 15:38:38 | https://medium.com/s/story/week-3-cities-and-restaurants-10f1d94e3985 | false | 163 | Course Projects for Introduction to Machine Learning, an undergraduate class at Hacettepe University — This semester the theme is Machine Learning and The City.. | null | null | null | bbm406f17 | null | bbm406f17 | MACHINE LEARNING | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Ahmet Oruc | null | 6bdd72365a33 | oruc.ahm | 0 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-03-22 | 2018-03-22 16:28:45 | 2018-03-24 | 2018-03-24 11:31:01 | 3 | true | en | 2018-03-26 | 2018-03-26 02:43:09 | 0 | 10f248ed8180 | 4.293396 | 44 | 8 | 0 | While humans have some rudimentary ability to forecast the future, few governments are run with long-term detailed plans, at least in… | 5 | Technological Unemployment and UBI
REUTERS/Murad Seze
While humans have some rudimentary ability to forecast the future, few governments are run with long-term, detailed plans, at least in democracies. For problems like technological unemployment, the regulation of AI and the impact of technology, the global economy and global leaders cannot prepare us adequately for what is coming. The system and many industries are simply not agile enough to do so.
DISRUPTION SPEEDS UP WITH THE ARRIVAL OF EXPONENTIAL TECH
The technological disruption we will experience in the next 30 years is unlike any historical technological precedent or paradigm shift. This simple fact causes a lot of misunderstanding. With the rise of tech monopolies such as Google, Alibaba, Amazon, Tencent, Samsung and Facebook, capitalism is no longer a free-market system, but one where the companies with the most access to data, the most access to capital and the most consumer trust win.
That’s not a market where the little guys or the middle class can compete for a limited number of high-skilled jobs, as the Amazons of the world beat out the Walmarts, Targets, Krogers and Searses of the world. These same tech companies are also moving laterally into healthcare, banking, grocery, logistics, news, advertising, gaming, entertainment and so forth. The new “Big Four” of the world don’t have a fancy acronym: they are Amazon, Tencent, Alphabet and Alibaba. Tencent, for example, has shares in companies such as Snap Inc., Tesla, Ubisoft and the Chinese e-commerce giant JD.com. Amazon’s moves into healthcare, medical supplies, logistics and grocery are well documented.
The World Economic Forum can make viral videos about the benefits of the Fourth Industrial Revolution, and Silicon Valley and futurism can push ideas of a universal basic income, but none of this changes the fact that the advent of robots into the workforce, in increasing ratios to humans, and the evolution of more sophisticated artificial intelligence are inevitable.
TECH MONOPOLIES ARE THE NEW LEADERS
Technological unemployment, and the path of smartphone addiction that for many has led to “technological loneliness,” is of course not something that benefits consumers or global citizens; rather, it scales wealth inequality into a kind of technological feudalism, where the decision makers on humanity’s future are not politicians but the robot overlords running the major companies of the world. In China the central government has more influence on the likes of Tencent, Alibaba, Huawei and Baidu; the same cannot necessarily be said in America of Google, Facebook, Apple or Amazon.
This means that companies such as Amazon are, in the 2020s, pioneering the new phase of the consumer experience, where we can do everything from A to Z within Amazon’s ecosystem of services, from original video content to grocery delivery to even opening a branded banking account through J.P. Morgan Chase.
THE POST INDUSTRIAL SOCIETY
As paid work becomes decoupled from supply and demand in the robot apocalypse, a world where AI and robotics supplant human beings as more jobs become obsolete and redundant, the leading proposal for a new system is not to combat wealth inequality directly but rather to give everyone a little, that is, a basic income. The usual amount cited is $1,000 USD: not enough to survive on, but enough to make a big difference should the next recession prove that automation is indeed on its way.
New jobs are created in the new consumerism, but not at the pace at which jobs will be lost in fields such as transportation, retail, finance, banking, logistics and healthcare.
With inverted demographic pyramids, debt bubbles, ballooning healthcare costs and massive mistrust of institutions, politicians and human leaders, it may ironically be up to AI and robots to take on some of the biggest problems of society and help correct them. The management of the planet will very much be a work in progress in our lifetimes.
AUTOMATION DENIAL SYNDROME
The prospect of serious technological unemployment via automation is something I think a lot about. There’s widespread denial of automation’s impact on jobs and the future of work, a bit like the state of public opinion on global warming in the 1990s. But here’s the thing: automation occurs and scales a lot faster than global warming does. That’s why AI and its regulation are a bigger immediate threat to humanity than global warming and its massive repercussions.
Most people surveyed about technological unemployment simply don’t believe it could happen to them. These kinds of biases make humans not particularly agile in the face of advances in AI and the next generation of smarter robots, nano-bots, drones and, more immediately, self-driving cars.
Managers think they are immune; white-collar professionals believe only some of their tasks will be impacted. Yet in our midst, tech monopolies are starting to cannibalize the economy and create huge talent inequality, where entrepreneurship is in decline and consolidation of services is on the rise. In an age where it’s harder than ever to fund a startup or compete against established players, innovation will now be led by industry leaders, who are all becoming essentially technology companies.
Universal basic income is better than nothing if, during the next recession, jobs start to go the way of the robot apocalypse. But UBI is a fail-safe; it is not a solution to wealth inequality, the future of work, or the question of how humans adapt in a world of increasing AI and robot workers. UBI is more a mechanism for preventing social unrest when the pace of technology reaches points of acceleration that leave significant portions of humanity behind. The “big bang” of exponential technologies in the 2030s won’t just be like the invention of the web in the mid-1990s; it is when humans will learn they do not necessarily have to work in order to live. It will be radical, and technological unemployment will be among its first signs and symptoms.
| Technological Unemployment and UBI | 307 | technological-unemployment-and-ubi-10f248ed8180 | 2018-05-23 | 2018-05-23 09:44:56 | https://medium.com/s/story/technological-unemployment-and-ubi-10f248ed8180 | false | 992 | null | null | null | null | null | null | null | null | null | Basic Income | basic-income | Basic Income | 2,763 | Michael K. Spencer | Blockchain Mark Consultant, tech Futurist, prolific writer. WeChat: mikekevinspencer | e35242ed86ee | Michael_Spencer | 19,107 | 17,653 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | a01e1fda9aef | 2018-02-19 | 2018-02-19 13:28:10 | 2018-02-19 | 2018-02-19 14:49:40 | 9 | false | th | 2018-03-24 | 2018-03-24 11:01:41 | 0 | 10f31f3c5d06 | 1.660377 | 1 | 0 | 0 | Going back to the roots of ML: where did it originate, and can high school math really be enough to build a capable ML model? | 4 | Machine Learning and High School Math [01]: Classifier & Linear Classification.
Going back to the roots of ML: where did it originate, and can high school math really be enough to build a capable ML model?
In this part I will talk only about classification. Let’s start with a simple picture.
From the picture, we can see six points, one star, and group names indicating which group each point belongs to.
The question is: how do we know which group the black star should belong to, group A or group B?
Take a moment to think. The answer is below.
That’s all there is to it. We just draw one extra reference line (the orange line), and we can then decide which group the black star most likely belongs to.
The equation of a straight line
Explained mathematically: we construct one straight-line equation to use as a reference, so that if a new point appears we can tell which group that point should belong to.
But what if we construct three equations, one each for the orange, black and green lines? Which line is the most suitable one?
To solve this problem, we define a measure of which equation is the most suitable:
the loss function, also called the error function.
The loss function of the straight-line equation is:
Short version
Full version
The most suitable equation is the one with the smallest loss. So we need a way to minimize the loss function; the simple way is to take the derivative of the loss function and set it equal to zero. (I won’t go into detail here; it will be explained separately under gradient descent.)
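As a small illustration (my own sketch, not part of the original post), here is how the line minimizing the squared-error loss can be computed with NumPy; setting the derivative of the loss to zero leads to the least-squares problem that lstsq solves, and the points below are made-up stand-ins for any data.
import numpy as np

# toy 1-D points: x-coordinates and their y-coordinates
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([1.2, 1.9, 3.1, 3.9, 5.2, 6.1])

# design matrix [x, 1] so the model is y = m*x + b
A = np.stack([x, np.ones_like(x)], axis=1)

# solving the least-squares problem = setting d(loss)/d(m, b) to 0
m, b = np.linalg.lstsq(A, y, rcond=None)[0]

loss = np.sum((y - (m * x + b)) ** 2)  # squared-error loss of the fit
print(m, b, loss)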
Next, think for a moment: if we wanted to separate pictures of dogs and cats, how could we do it using the method above?
Doge is so cute, haha
We assume that if these two pictures were placed on the same graph, they should be far apart from each other, since a dog and a cat are neither identical nor similar. We then get a graph like this:
Here’s a fun question: what should the hidden picture at the black star be? Take a moment… one, two… answer! It is “probably” a picture of a cat. The simple reason is that it is close to the cat, so it should be a cat. Here I deliberately use the words “probably” or “most likely,” because we cannot decide with 100% certainty that it is a cat; we might only judge it to be a cat with 99% confidence, which is why we prefer to say “probably.”
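In code, this “close to the cat, so probably a cat” reasoning is just a distance comparison. Here is a tiny sketch of my own, where the coordinates are made-up stand-ins for wherever the images sit on the graph:
import numpy as np

dog = np.array([1.0, 8.0])   # assumed position of the dog image
cat = np.array([7.0, 2.0])   # assumed position of the cat image
star = np.array([6.0, 3.0])  # the hidden image we want to label

# Euclidean distance from the star to each known image
d_dog = np.linalg.norm(star - dog)
d_cat = np.linalg.norm(star - cat)
print("probably a cat" if d_cat < d_dog else "probably a dog")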
That wraps it up for the math behind building ML. Clearly, it is not math that is too hard to understand; we just need to practice applying it appropriately.
For the next part, I’ll leave you with a question to think about: if we want to know which “picture” our “picture” most resembles, how would we find out? Hint: using just the graph above and a little math is enough to get an idea of what to do. Next time, ML: Regression.
| Machine Learning กับคณิตศาสตร์มปลาย [01] : Classifier & Linear Classification. | 1 | machine-learning-กับคณิตศาสตร์มปลาย-10f31f3c5d06 | 2018-03-24 | 2018-03-24 11:01:42 | https://medium.com/s/story/machine-learning-กับคณิตศาสตร์มปลาย-10f31f3c5d06 | false | 122 | Interest in machine learning and have hobby about web develop. | null | null | null | Mattick | mattick | MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,WEB DEVELOPMENT,EDUCATION,THAILAND | k_matichon | Machine Learning | machine-learning | Machine Learning | 51,320 | K. | Interest in Machine learning and Web developer. | e7ba8e0ff83a | k_matichon | 50 | 20 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-05-03 | 2018-05-03 13:47:57 | 2018-05-03 | 2018-05-03 13:51:41 | 1 | false | en | 2018-07-18 | 2018-07-18 06:46:25 | 6 | 10f321515708 | 1.467925 | 20 | 0 | 0 | April has passed and we have collected the statistics and results of the Alphacat “BTC Daily Forecasting” application. The BTC bot has… | 5 | The Accuracy of Alphacat “BTC Daily Forecasting” Application had reached 63.3% in April
April has passed and we have collected the statistics and results of the Alphacat “BTC Daily Forecasting” application. The BTC bot performed predictions for each day of the month of April. Using the rise probability as the criterion, the BTC bot successfully predicted the result on 19 of the 30 days, an overall accuracy rate of 19/30 ≈ 63.33%. It is worth noting that in version 2.0.0, which was released on April 4th, we improved the calculation method for predicting the true rise probability. This enables the user to get a more accurate, scientific prediction than before.
The BTC price is collected from Coinmarketcap
To correctly understand the accuracy rate, please refer to the following:
The accuracy rate is based on the probability of the “10 normal trades prediction”: consider someone making a trade 10 times, and predict how many of those trades would be successful. Since every trade is based on uncertain future events, the likelihood of a normal trade being successful is always 50%. However, as shown in April’s retrospective summary, Alphacat’s forecast of BTC’s 24-hour rise probability pushed this number to a whopping 63.3%, more than 13 percentage points above normal trade probability. Supplemented with the right investment decisions, this extra 13% helps investors make smarter and more profitable choices.
Of course, any investment should be carried out by the investor in accordance with his or her own position and investment goals. In the near future, we will provide more features in our ACAT Store, such as investment advice, expert reviews, and a ranking-list system. The Alphacat team is constantly optimising, updating algorithms and collecting more data to enrich the ACAT database. More cryptocurrencies and real-time forecasts are also in progress. With our continued progress, our aim is to help make your investment decisions as simple as buying a cola.
For more information about Alphacat:
Website: www.Alphacat.io
Telegram:https://t.me/alphacatglobal
Medium:https://medium.com/@AlphacatGlobal
Twitter:https://twitter.com/Alphacat_io
Facebook: https://www.facebook.com/Alphacat.io/
Reddit: https://www.reddit.com/r/alphacat_io
| The Accuracy of Alphacat “BTC Daily Forecasting” Application had reached 63.3% in April | 446 | the-accuracy-of-alphacat-btc-daily-forecasting-application-had-reached-63-3-in-april-10f321515708 | 2018-07-18 | 2018-07-18 06:46:25 | https://medium.com/s/story/the-accuracy-of-alphacat-btc-daily-forecasting-application-had-reached-63-3-in-april-10f321515708 | false | 336 | null | null | null | null | null | null | null | null | null | Bitcoin | bitcoin | Bitcoin | 141,486 | Alphacat | Alphacat is a robo-advisor marketplace that is focused on cryptocurrencies, and is powered by artificial intelligence & big data technologies. www.Alphacat.io | 6300c5cec1ab | AlphacatGlobal | 318 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | c4c117f0e0d4 | 2017-10-15 | 2017-10-15 11:13:36 | 2017-10-16 | 2017-10-16 12:24:16 | 1 | false | en | 2017-10-16 | 2017-10-16 12:24:41 | 0 | 10f348bd5c11 | 1.739623 | 3 | 0 | 0 | I remember around 3 or 4 years ago, there was a saying that I used to hear a lot: “Money Talks”. Now I can see that this no longer holds… | 4 | Data Talks
I remember around three or four years ago, there was a saying that I used to hear a lot: “Money talks.” Now I can see that this no longer holds; the better way to phrase it would be: “Data talks.” We are now in the Cognitive Era, and all the buzzwords in the media are around data: Cognitive, AI, Machine Learning, Big Data, Data Analytics, etc. The big tech players are now shifting their focus as well. Amazon, Google, IBM, Microsoft, and now even Alibaba and Oracle are joining the race.
Last week, Gitex tech week concluded in Dubai, and some of the major highlights were around data, from visualizing data to predicting city-wide events to bringing the cloud to the masses; everything was around data.
Smart Dubai Office constructed a giant visualization of all the data flowing in Dubai (called Future Live) with the aim of using this data to prepare for Expo 2020. There was an entire hall in Gitex dedicated just to cloud, featuring the typical cloud giants plus many newer entries. IBM showed a citywide solution called MetroPulse, which helps explain city dynamics and how different events and data in the city correlate and affect each other.
Future Live (a representation of the data flowing in Dubai)
When you think about it, anyone who holds data is king. Data is the strongest competitive advantage any company can have; with the right kind of data, you can make a fortune from selling it (like Google and Facebook) or you can open it up for people to use (like Kaggle or Dubai Open Data). With data such as social media, you can understand the personality of your customers (with Watson Personality Insights) or understand your customers’ preferences (with Facebook Analytics). With the right kind of data, you can train a video system to estimate a traffic risk index from a video feed or infer the weather from images.
The field of data analytics is both young and old. It has been around for decades, but with the advent of the cloud there are now so many opportunities for both enterprises and startups to make use of. It is up to entrepreneurs and innovators to figure out how to capture the right data (legally, of course). If you think the hype train missed you, you are mistaken; just take a walk down the road (or through the mall if it is too hot outside) and imagine what data could be captured and processed. You will surprise yourself!
| Data Talks | 3 | data-talks-10f348bd5c11 | 2017-10-27 | 2017-10-27 09:20:21 | https://medium.com/s/story/data-talks-10f348bd5c11 | false | 408 | Dubai-based coworking space and training academy on a mission to build a thriving technology ecosystem in MENA. Partnered with @GoogleForEntrep, @IBM, and @DMCCAuthority | null | astrolabsme | null | AstroLabs | astrolabs | STARTUPS,STARTUP,COWORKING,LEARNING,DUBAI | astrolabsme | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Aoun Lutfi | AI Solutions Engineer, Avid Researcher and Developer — Using AI to power the world💡🤖 | 9c84ceda76c0 | aounlutfi | 13 | 8 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-08-14 | 2018-08-14 09:10:44 | 2018-08-14 | 2018-08-14 10:59:10 | 6 | false | en | 2018-08-17 | 2018-08-17 13:58:19 | 8 | 10f389ca82e2 | 3.066981 | 1 | 0 | 0 | Is it possible for average person to take advantage out of it? | 5 | Photo by Franck V. on Unsplash
Deep learning perspectives
Is it possible for the average person to take advantage of it?
Deep learning
Deep learning is already implemented in many popular websites and services like YouTube, Facebook, VK, Instagram and Twitter, drawing in more users and keeping them locked in. It helps with showing recommendations, searching for new friends, finding relevant news and understanding your preferences in order to serve you relevant ads. However, it is hard to see real improvements in everyday life from using that technology. Could you find a use for deep learning yourself?
Beginning in 2014, artificial neural networks rose in popularity thanks to improving computational power, and deep learning outperformed all other machine learning methods. This made deep learning an essential tool for new startups and big tech companies alike. If you are trying to build a new solution for any problem, you should consider using it in one way or another.
Try out deep learning yourself!
So, if ANNs are just too hard for you to build from scratch, companies like Google, Microsoft and Facebook, along with smaller volunteer communities, are ready to provide developers with ready-to-go solutions. Currently, the most popular framework for machine learning is TensorFlow: an open-source, flexible and cross-platform tool that any programmer can learn to use. It is also worth mentioning the APIs built on top of TensorFlow.
(TensorFlow logo)
All of this opens up the possibility for any person to use this technology in their own projects. It is not a scientist’s toy anymore. It is available for everyone, everywhere.
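To show how low the barrier has become, here is a minimal sketch of a tiny network built with TensorFlow’s Keras API; the layer sizes and the random data are placeholder assumptions, not a recommended setup.
import numpy as np
import tensorflow as tf

# tiny placeholder dataset: 100 samples, 4 features, binary labels
X = np.random.rand(100, 4).astype("float32")
y = (X.sum(axis=1) > 2.0).astype("float32")

# a small fully connected network
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)  # train briefly on the toy data
print(model.evaluate(X, y, verbose=0))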
Try reading these articles to find the courses that suit you best:
Every single Machine Learning course on the internet, ranked by your reviews
Dive into Deep Learning with 15 free online courses
Understanding that deep learning is going to be a big part of our lives, deep learning educators have come into view. You should be familiar with Siraj Raval, Andrew Ng and Ian Goodfellow. Their contribution to the deep learning world is invaluable. Try to get acquainted with their work first.
The future of AI
“AI is probably the single biggest item in the near term that’s likely to affect humanity” — Elon Musk
Human life is full of annoying repetitive tasks, but AI can free us from the extra work and let us concentrate more on innovation, creativity and life itself. There will be more time to talk to your relatives, find new friends, organize events and fix your personal issues.
Fear of unemployment because of AI is pathetic. People must worry instead about getting a high-standard education in order to get a better job. Repetitive and easy tasks should not be a source of income for people. We must use our precious brains on more complex things, and those who do not will probably stay in stagnation, thanking their robots for keeping them alive. Even some poor people have smartphones with them today; the same thing is going to happen with AI and robots.
Continuing on the topic of smartphones: Apple became a trillion-dollar company thanks to them. AI and robots are the next thing that will shape our destiny, and there is still no company dedicated to that. Whether it will be an existing company or something completely new is unknown. Either way, do not let this trend pass you by; try yourself in it.
| Deep learning perspectives | 1 | deep-learning-perspectives-10f389ca82e2 | 2018-08-17 | 2018-08-17 13:58:19 | https://medium.com/s/story/deep-learning-perspectives-10f389ca82e2 | false | 561 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Yerzhan Zhamashev | High school student at Nazarbayev Intellectual School in Kazakhstan. Programmer and designer. Currently working on one project, will write about it. | f98f7bd88198 | yerji | 3 | 16 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 284538178f0a | 2018-01-29 | 2018-01-29 22:04:27 | 2018-01-29 | 2018-01-29 22:09:20 | 1 | false | en | 2018-02-07 | 2018-02-07 17:31:44 | 3 | 10f471cb2956 | 2.362264 | 0 | 0 | 0 | What stuck with me after reading “This App is Trying to Replicate You,” is the conclusion of the text: | 2 | The Art of Self-Replication
What stuck with me after reading “This App Is Trying to Replicate You” is the conclusion of the text:
“If the me I have created can provide my mother with some semblance of the experience that she might have texting the real me, it is, in some sense, me. It’s not the same as being able to hug me, or hear the quiver in my voice when I’m sad, or the screams of joy I might yelp out when she tells me good news, but in the same way that a photograph or a home movie captures some instance of our essence, my Replika is in a basic sense, a piece of me.
“It seems to me inevitable that we will eventually reach a point at which we just have to make peace with the idea that we may not be completely distinct and unique,” Christian added. “That doesn’t invalidate our existence, but I think that in some ways we are now at a point where we should start bracing ourselves for what a world like that might look like.”
What this conclusion made me think about was the process of creating art. As a frequent writer, notebook doodler, and long-time percussionist, I would definitely consider the products of these hobbies to be things that “capture an instance of my essence.” My words are direct manifestations of my thoughts, upbringing, and feelings; my doodles are explanations of the sights that I see; the notes I play, expressions of the energy that I feel.
If art is something that captures an instance of an individual’s essence, could we then also consider Replikas artworks, created by users turned accidental artists? And if so, what are the implications of our ‘replication’?
Photo by Gerome Viavant on Unsplash
Christian believes we simply may not be as distinct as we originally thought, and we will eventually have to give up our naive human notions of uniqueness. Call me a stubborn, naive human, but I disagree.
Here’s why:
This idea of a Replika reminded me of Walter Benjamin’s The Work of Art in the Age of Mechanical Reproduction. While this text was originally published in the 1930s, the topic that it addresses hasn’t lost much relevance over time: how does the replica — an exact copy — of an artwork compare to the original?
Benjamin’s response to that question:
“In even the most perfect reproduction, one thing is lacking: the here and now of the work of art — its unique existence in a particular place. It is this unique existence — and nothing else — that bears the mark of the history to which the work has been subject.”
Essentially, Benjamin argues that the original artwork captures an “aura” that the reproduction cannot. And interestingly enough, he defines this aura as “a strange tissue of space and time.”
While Benjamin’s choice to use the word “tissue” is probably just a lucky coincidence, it does suggest an interesting connection between the ideas of uniqueness, time, and flesh. Something about these vague, conceptual ideas just feels very human. And while a Replika might have access to a rich history of past conversations and nimble, predictive algorithms, it lacks a “here and now.” It lacks an aura, a unique existence. It is only a reproduction, and not a replica.
References
Benjamin, Walter. The Work of Art in the Age of Mechanical Reproduction. Translated by J. A. Underwood. London: Penguin, 2008.
| The Art of Self-Replication | 0 | the-art-of-self-replication-10f471cb2956 | 2018-05-24 | 2018-05-24 07:48:45 | https://medium.com/s/story/the-art-of-self-replication-10f471cb2956 | false | 573 | Course by Jennifer Sukis, Design Principal for AI and Machine Learning at IBM. Written and edited by students of the Advanced Design for AI course at the University of Texas's Center for Integrated Design, Austin. Visit online at www.integrateddesign.utexas.edu | null | utsdct | null | Advanced Design for Artificial Intelligence | advanced-design-for-ai | ARTIFICIAL INTELLIGENCE,COGNITIVE,DESIGN,PRODUCT DESIGN,UNIVERSITY OF TEXAS | utsdct | Art | art | Art | 103,252 | Su Fang | null | 923add81c993 | sufang | 44 | 41 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-05-15 | 2018-05-15 14:27:55 | 2018-05-15 | 2018-05-15 15:56:15 | 8 | true | en | 2018-05-15 | 2018-05-15 15:56:15 | 4 | 10f4b2864e1d | 3.714465 | 7 | 0 | 0 | Overview | 5 | Is KNIME (A Machine Learning Platform With ZERO Coding Involved) Suitable For You/Your Business?
Overview
It is not easy to become a Machine Learning engineer or Data Scientist. One of the biggest challenges for newcomers to this field is that there is so much you need to learn at the same time: statistics, linear algebra, data processing, programming, databases, etc. Knowing mathematical concepts and statistical models is not enough; you also need to learn how to code them quickly. Modern ML engineers and data scientists also need great soft skills, such as the ability to engage with senior management, good business acumen, and excellent visual design skills.
This can be overwhelming, especially if you or your team have no background in coding! In this article, I will introduce some of the benefits of using KNIME to kick-start machine learning for your team or your business. The tool will be particularly useful if you just want to do a quick demo or Proof-of-Concept on Machine Learning.
Why KNIME and What’s Good About It?
KNIME is a GUI-based workflow platform. This analytics platform provides an easy-to-use graphical interface that allows you to simply drag-and-drop various pre-built Data Processing/Machine Learning modules without writing a single line of code.
It is free. All you need to do is to download the KNIME Analytics Platform & SDK from the website.
Lots of pre-built functions and modules. The KNIME tool provides you with the basic modules such as I/O, data processing, data transformation, data visualization, machine learning models (from linear regression to advanced deep learning). You can simply link these modules together in the workflow and provide specific instructions to fine-tune your model.
KNIME supports Python and R via wrapper scripts. This means you can incorporate your own code into the workflow or customize some of the processes within it (although this is not as flexible as writing your own code in a pure Python/R environment).
The open-source KNIME platform is GUI-based. The graphical interface allows you to simply drag-and-drop various modules to the Machine Learning workflow.
1. Do I Need To Upload Data to 3rd Party Cloud for Training?
No. This is good news if uploading data to a third-party cloud is a concern for you. You can store your data as .csv files locally, or you can connect KNIME to your company databases, such as MS SQL Server. See #2.
2. How To Connect KNIME to Microsoft SQL Server?
This is a great question, since a lot of companies still use Microsoft databases today. KNIME can connect to almost all JDBC-compliant databases, and describes how to connect to a database in its detailed Database Documentation. The driver must support at least JDBC 4.0, and the Microsoft JDBC driver for SQL Server can be downloaded here.
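For reference, the connection settings for the Microsoft driver usually look like the sketch below; the host, port and database name are placeholder assumptions you would replace with your own, and these are typically the two values KNIME’s database connector node asks for:
driver class: com.microsoft.sqlserver.jdbc.SQLServerDriver
database URL: jdbc:sqlserver://<your-host>:1433;databaseName=<your-database>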
3. Can I Visualize My Data?
Of course. One of the key benefits of using KNIME is that the tool provides the most common basic data visualization functions, such as pie charts, histograms, scatter plots, box plots, etc., without writing even a single line of code.
Data Visualization modules available in KNIME.
You can simply drag-and-drop the Data Visualization module you want to the ML workflow.
Some sample outputs from the data visualization modules.
Data Summary
Scatter Plot
4. Can I Access or Visualize the Predictive Model?
Yes, for certain ML models such as decision trees. Unlike some other cloud-based machine learning platforms and tools, KNIME actually lets you view your fully trained ML models. It is not a black box. If you or your company operate in an industry where predictive models need to be reviewed by auditors for regulatory purposes, this ability will be very practical for you: it enables you to explain how predictions are made by your ML model.
Example of a fully-trained Decision Tree. You can visualize and explain how predictions are made by your ML model.
You can also export your trained models and save them locally. This is a huge bonus of using KNIME compared to some other similar ML solutions and technologies, which do not allow exporting your trained model.
KNIME allows you to export and save your trained model to PMML format as part of the workflow.
5. What Machine Learning Models Are Readily Available?
A lot of basic machine learning models are available in KNIME, including most supervised ML models (decision trees, regression, neural networks), unsupervised ML (PCA, clustering), and even some advanced deep learning models.
Examples of Built-in ML Models within the KNIME Platform
| Is KNIME (A Machine Learning Platform With ZERO Coding Involved) Suitable For You/Your Business? | 62 | is-knime-a-machine-learning-platform-with-zero-coding-involved-suitable-for-you-your-business-10f4b2864e1d | 2018-06-05 | 2018-06-05 12:02:11 | https://medium.com/s/story/is-knime-a-machine-learning-platform-with-zero-coding-involved-suitable-for-you-your-business-10f4b2864e1d | false | 684 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Mateo Amarillo | Sr. Machine Learning Engineer | 72d35f9e4c09 | matthew.mh.wong | 1 | 2 | 20,181,104 | null | null | null | null | null | null |
0 | ~$ anaconda-navigator
import pandas as pd
import seaborn as sns; sns.set()
from matplotlib import pyplot as plt
# load the inflation dataset from the CSV file
sample_data = pd.read_csv("inflasi-jakarta-nasional.csv")
sample_data
# drop the national inflation column; we only plot Jakarta's figures
sample_data = sample_data.drop(['inflasi_nasional'], axis=1)
sample_data
# draw a bar chart of Jakarta's inflation rate per year and save it as a PNG
plot = sns.factorplot(x='tahun',y='inflasi_jakarta', data=sample_data, kind='bar', size=8, ).set_xticklabels(rotation=90).set_titles("Tingkat Inflasi Jakarta 2006-2012")
plot.savefig("tingkat-inflasi-jakarta.png")
| 6 | null | 2018-07-16 | 2018-07-16 09:14:01 | 2018-07-16 | 2018-07-16 15:16:35 | 7 | false | id | 2018-07-16 | 2018-07-16 15:16:35 | 4 | 10f4e5b14e1b | 2.589623 | 0 | 0 | 0 | On this occasion I will practice data visualization using Anaconda. Anaconda is a tool that can… | 3 | Visualizing Jakarta Inflation Rate Data 2006–2012 with Python
Source: https://blog.prototypr.io/getting-it-right-why-infographics-are-not-the-same-as-data-visualizations-a23da7de745e
On this occasion I will practice data visualization using Anaconda. Anaconda is a tool that speeds up and simplifies data processing with the Python and R languages; it is available for Windows, Linux and Mac OS X.
Before starting, please download and install Anaconda here. Once that is done, let’s begin.
Open Anaconda Navigator by double-clicking its icon.
Linux users, open a terminal and type:
Once Anaconda Navigator is open, click Launch in the Jupyter Notebook box; a web browser window will open, looking like the image below.
Next, create some folders on the Desktop so the structure looks like this: /home/…/Desktop/data visualization/inflasi jakarta/
Then download the Jakarta and National Inflation Rate dataset and save it at /home/…/Desktop/data visualization/inflasi jakarta/inflasi-jakarta-nasional.csv. The file contains:
tahun: Year
inflasi_jakarta: Jakarta inflation (in percent)
inflasi_nasional: National inflation (in percent)
Back in Jupyter, open Desktop > data visualization > inflasi jakarta, then click New > Python 2/3.
Source: personal documentation
A new tab will then open; change the title by double-clicking “Untitled” and renaming it to “Visualisasi Data Tingkat Inflasi Jakarta 2006–2012” as shown in the image below.
Source: personal documentation
Then, in the first cell, type the code shown below.
Once it is typed in, press the Run button in the top navigation bar. The code above imports the modules that will be needed in the following steps. Next, import the .csv data you downloaded earlier by typing the code below.
Press Run; once done, you can view the file’s contents by typing:
Press Run again, and the output will look like the following.
Since we will only use the tahun and inflasi_jakarta data, inflasi_nasional is removed by typing the code below and displaying the result.
After running it, the result will look like this.
Once that is done, the next step is to visualize the data as a bar chart and save the result under the name tingkat-inflasi-jakarta.png.
The result will look like this:
Visualization result
Having seen the visualization, the raw .csv data is much easier to interpret. You can draw the conclusions in your own words. Hehehe.
Easy, right? Of course, because this is only the very beginning. See you in the next, more complicated data visualization exercise.
| Visualisasi Data Tingkat Inflasi Jakarta 2006–2012 dengan Python | 0 | visualisasi-data-tingkat-inflasi-jakarta-2006-2012-dengan-python-10f4e5b14e1b | 2018-07-16 | 2018-07-16 15:16:35 | https://medium.com/s/story/visualisasi-data-tingkat-inflasi-jakarta-2006-2012-dengan-python-10f4e5b14e1b | false | 408 | null | null | null | null | null | null | null | null | null | Data Visualization | data-visualization | Data Visualization | 11,755 | Muhamad Bayu Wilanda | GNU/Linux User | #Gooner | c3c4ed88c4f0 | BayuWilanda | 1 | 2 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-04-15 | 2018-04-15 14:10:11 | 2018-04-15 | 2018-04-15 14:14:10 | 0 | false | en | 2018-04-15 | 2018-04-15 14:14:10 | 0 | 10f5c141a276 | 1.837736 | 0 | 0 | 0 | One thing I love about learning math is that, contrary to popular belief, you can generalize it to apply to many aspects of life. Yes, the… | 3 | Using regularization to avoid overfitting in real life (and why not all C students run businesses)
One thing I love about learning math is that, contrary to popular belief, you can generalize it to apply to many aspects of life. Yes, the real life you live daily.
Recently, I was learning about the problem of overfitting in machine learning and how regularization can alleviate it.
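As a quick, concrete reminder of the ML version (my own sketch, not taken from any course material): fitting a high-degree polynomial with and without an L2 penalty shows how regularization tames overfitting. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline

rng = np.random.RandomState(0)
X = np.sort(rng.rand(20, 1), axis=0)  # 20 noisy samples
y = np.sin(2 * np.pi * X).ravel() + rng.normal(0, 0.2, 20)

# degree-15 polynomial: the plain fit chases the noise (overfits),
# while ridge (alpha > 0) penalizes large weights and stays smoother
plain = make_pipeline(PolynomialFeatures(15), LinearRegression()).fit(X, y)
ridge = make_pipeline(PolynomialFeatures(15), Ridge(alpha=1e-3)).fit(X, y)

X_test = np.linspace(0, 1, 5).reshape(-1, 1)
print(plain.predict(X_test))
print(ridge.predict(X_test))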
Let’s generalize this idea into a real-life problem using one half-true meme as an example: the A students work for the B students, and the C students run the businesses.
The problem with many diligent students is that they focus too much on test marks. Now, this won’t be a problem if getting many A will translate into real world success.
The overfitting problem here is then: trying too hard on training for simulated tests that you forget that real world problems is unlike test problems.
The solution for this is simple. You regularize by taking into account about life outside education institutions. Ask what matters in real life, and then shift some of your focus there instead of focusing too much on test marks.
This model of overfitting regularization declares formally why the meme above is only half true. Not all C students run businesses. A lot of C students get C because they are lazy (and not because they’re focusing on real life matters). This can then be said as a case of underfitting, where the function neither fits the training set, nor does it fit real life data. The lazy C students neither get good test marks, nor run businesses.
I never think that this regularization idea could be generalized before I read a particularly poignant comment (opinion) in RibbonFarm.com’s post titled The Milo Criterion.
What the comment said, in simple terms, is that the problem with LEAN production, in the context of startup, is that entrepreneur can took into account the present feedback of his current customer base too much, that he failed to take into account the potential changes of preferences of his current customer base in the future.
If the entrepreneur implement the LEAN production too efficiently against present preferences (complete with optimizations), he’ll have a hard time pivoting to the new preferences of his customer.
Or succinctly: the danger of LEAN production is overfitting to the current customer dataset.
The commenter proposed alternative is the Apple Inc. approach where they give what the customers need even though the customers don’t know that they need it yet.
Or in math-y terms: we regularize this LEAN overfitting problem by minimizing the weight of current customer preference (and by envisioning customers’ preferences in the future).
There are many examples that you can find of overfitting problems, and one potential solution is simply to regularize (there are other solutions too).
Now, as a final thought/example: what should you do if you overfit to the customs of a decaying institution?
| Using regularization to avoid overfitting in real life (and why not all C students run businesses) | 0 | using-regularization-to-avoid-overfitting-in-real-life-and-why-not-all-c-students-run-businesses-10f5c141a276 | 2018-04-15 | 2018-04-15 14:14:11 | https://medium.com/s/story/using-regularization-to-avoid-overfitting-in-real-life-and-why-not-all-c-students-run-businesses-10f5c141a276 | false | 487 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Yoshua Elmaryono | null | b6a7a76a58 | dotm_YE | 0 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | fcacddcd4b87 | 2018-09-19 | 2018-09-19 19:30:13 | 2018-09-19 | 2018-09-19 19:55:14 | 1 | false | en | 2018-09-19 | 2018-09-19 19:55:14 | 2 | 10f94a739e04 | 1.988679 | 3 | 0 | 0 | The Annual Meeting of the New Champions is the premier global summit on innovation, science and technology, promoting entrepreneurship in… | 5 | These are the top 10 emerging technologies of 2018 (Scientific American)
The Annual Meeting of the New Champions is the premier global summit on innovation, science and technology, promoting entrepreneurship in the global public interest. Established in 2007, it brings together the next generation of fast-growing companies that shape the future of business and society, as well as leaders from major multinationals, government, the media, academia and civil society.
Coinciding with this year’s summit, Scientific American has launched its “Top 10 Emerging Technologies” list, compiled from the recommendations of innovators from the World Economic Forum’s Global Future Councils and its own network of experts: leaders in fields such as biology, inorganic chemistry, robotics and artificial intelligence. It identifies the disruptive technologies that are likely to shape our lives in the near future, with the potential to profoundly alter entire industries. These are some of them:
Lab-grown meat
Would you eat a burger that you knew had been grown in a lab? Meat grown from cultured cells could cut the environmental costs of producing meat and eliminate the unethical treatment suffered by animals that are raised for food. Start-ups like Mosa Meat, Memphis Meats, SuperMeat and Finless Foods have already attracted millions in funding, even though the production costs remain very high and taste-test results have been mixed. With the technology improving all the time, duck, chicken, and beef produced without slaughter could be on its way to a kitchen near you sooner than you think.
AI-led molecular drugs
The days of science relying on educated predictions — or guesses — to create new drugs and materials may become a thing of the past as artificial intelligence takes over. Instead of messy experiments, machine-learning algorithms will analyse all known past tests, discern patterns and predict what new molecules are likely to work. As well as speeding up the process and reducing chemical waste, it will help the pharmaceutical industry identify and develop new drugs at a rapid pace.
Plasmonic materials
Is this the technology that will make Harry Potter’s invisibility cloak a reality? While that is probably still a way off, plasmonic devices that manipulate electron clouds and light at the nanoscale are set to increase magnetic memory storage and the sensitivity of biological sensors. Several companies are developing new products, including a device that can distinguish viral from bacterial infections and a heat-assisted magnetic recording device. Light-activated nanoparticles are also being investigated for their ability to treat cancer without damaging healthy tissue.
The list is completed by Augmented Reality, Personalized Medicine, More capable digital helpers, Implantable drug-making cells, Gene Drive, Algorithms for Quantum Computers and Electroceuticals. If you want to know more about each of these technologies, you can do so in Scientific American’s special report.
#365daysof #futurism #transhumanism #technology #innovation #day206
| These are the top 10 emerging technologies of 2018 (Scientific American) | 8 | these-are-the-top-10-emerging-technologies-of-2018-scientific-american-10f94a739e04 | 2018-09-19 | 2018-09-19 21:38:10 | https://medium.com/s/story/these-are-the-top-10-emerging-technologies-of-2018-scientific-american-10f94a739e04 | false | 474 | High quality curated content and topics related to innovation and futurism along with a little reflection | null | null | null | Future Today | future-today | TECHNOLOGY,FUTURISM,INNOVATION,SCIENCE,TRANSHUMANISM | davidalayon | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | David Alayón | Head of Innovation Projects @ Inditex · Founder @Innuba_es @Mindset_tech @GuudTV · Professor @IEBSchool @DICeducacion · Mentor/Investor @ConectorSpain AngelClub | 91f2d81ac9db | davidalayon | 161 | 72 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | b7075141785 | 2018-09-12 | 2018-09-12 04:19:12 | 2018-09-12 | 2018-09-12 10:52:39 | 1 | false | en | 2018-09-12 | 2018-09-12 13:31:05 | 9 | 10fb9c0633c | 2.332075 | 11 | 10 | 0 | New technologies in financial or traditional spheres are not stand-alone advancements, they need to be looked at as missing puzzle pieces… | 5 | If Blockchain Is the Picture, Then Can AI Be The Frame?
New technologies in financial or traditional spheres are not stand-alone advancements; they need to be looked at as missing puzzle pieces that we need to collect for a bright, efficient and technologically sound future.
AI became a buzzword long before blockchain did. We can see, or perhaps hear, AI when we talk to Siri, or when we let computers make sophisticated calculations and take actions on our behalf. AI will reach peak performance when algorithms allow machines to overcome the limitations of the human mind and perform sophisticated transactions without human intervention. An AI algorithm is a set of rules given to a program to help it learn on its own. The more sound the algorithm, the more credible the artificial intelligence will be.
AI algorithms can help a business grow by analyzing patterns in the gathered data more efficiently than a person can, because they are able to weigh a large number of variables independently of each other. They can also surface which objectives carry more utility and thus need to be prioritized. Walmart, for example, feeds its AI systems monthly data, and the system then decides which products need to be stocked and in which stores.
Another good use of AI algorithms is stock trading. AI can make emotionally detached, rational and objective decisions that humans sometimes cannot. Emotions are what make or break a trader. For example, the third-largest lender in Japan is to begin offering algorithm-based trading services, according to a report in Bloomberg. The product aims to predict hourly fluctuations in Japanese stock prices in order to discover the best time to trade.
Blockchain can also greatly benefit from AI. AI can be used on decentralized exchanges that run on blockchain. A good algorithm will not only help traders; it will also make logistics and transactions more efficient and safe.
AI can help enhance the trust and security of financial transactions by screening them and blocking those that are fraudulent or have a high chance of being so.
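To make that idea concrete, here is a minimal sketch of what such screening could look like, using an off-the-shelf anomaly detector from scikit-learn. The transaction features, sample numbers and threshold are invented for illustration; this is not a description of any exchange's actual system.

```python
# Hypothetical fraud screening with an unsupervised anomaly detector.
# Feature choice and sample data are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [amount, seconds_since_last_tx, destinations_last_24h]
history = np.array([
    [120.0, 3600, 2],
    [80.0,  7200, 1],
    [95.0,  5400, 2],
    [110.0, 4000, 3],
])

model = IsolationForest(contamination=0.1, random_state=0)
model.fit(history)

incoming = np.array([[9500.0, 12, 40]])  # unusually large, fast and spread out
if model.predict(incoming)[0] == -1:     # -1 flags an outlier
    print("Transaction held for review")
else:
    print("Transaction approved")
```

In practice a model like this would sit inside the exchange's transaction pipeline and feed a human review queue rather than block payments outright.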
AI can solve emerging challenges in the logistics behind executing blockchain transactions. If blockchain ensures end-to-end security and connection, then AI can ensure a system in which multiple electronic parties securely communicate, work together and execute orders without human intervention. It will allow companies to optimize their order execution in terms of both time and product.
Nonetheless, no matter how sophisticated the technologies become, they will be built by humans and will need to be chaperoned by them. AI algorithms can and will change the face of trading and blockchain. The seeds of an AI- and blockchain-centric future are already planted; from here on there are no limits, just plateaus that we need to climb over.
About Us
Ternion is a hybrid crypto exchange with a fiat gateway and integrated merchant services. Ternion designs a trusted and secure platform that is convenient and usable for institutional and retail clients. Ternion leads the revolution in the sphere of fiat-to-crypto exchanges; to learn more about its mission and team, please visit the website.
Follow us on social media and never miss an update:
Facebook https://www.facebook.com/ternionofficial/
Telegram Chat — https://t.me/TernionOfficial
Telegram Channel for announcements — https://t.me/TernionAnnouncements
Twitter — https://twitter.com/ternionofficial
LinkedIn — https://www.linkedin.com/company/ternionofficial/
| If Blockchain Is the Picture, Then Can AI Be The Frame? | 336 | if-blockchain-is-the-picture-then-can-ai-be-the-frame-10fb9c0633c | 2018-09-12 | 2018-09-12 13:31:05 | https://medium.com/s/story/if-blockchain-is-the-picture-then-can-ai-be-the-frame-10fb9c0633c | false | 565 | Exchange. Payment System. Fund. The bridge to connect modern financial realities with the blockchain-based finance systems. | null | ternionofficial | null | Ternion | ternion | null | ternionofficial | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Ternion Team | Exchange. Payment System. Fund. The bridge to connect modern financial realities with the blockchain-based finance systems. | a8b3bd7cad92 | ternionC | 71 | 10 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | 397a31110bb2 | 2017-10-08 | 2017-10-08 18:48:15 | 2017-10-08 | 2017-10-08 19:49:35 | 1 | false | en | 2017-10-24 | 2017-10-24 11:41:11 | 0 | 10fd26eddbf5 | 2.249057 | 2 | 1 | 0 | Around this time last year, I’d just been to see Arrival, directed by Denis Villeneuve. I was blown away by that movie. which was a… | 2 |
Blade Runner 2049
Around this time last year, I’d just been to see Arrival, directed by Denis Villeneuve. I was blown away by that movie, which was a thoughtful and unusual film in what sometimes feels like an endless sea of franchises.
Now Villeneuve has directed the long-awaited sequel to the 1982 original Blade Runner. At this point I have to declare my previous bias. I love Blade Runner. But I love the later cuts best of all, with the horrible narration and the incongruous “ride into the sunset” ending dispensed with. The ambiguity of the later versions pleased me far more. And of course it was, for its time (and even now), visually and sonically arresting.
So perhaps the best place to start is with those visuals. They are astounding. This is a fully realised world, part mirroring the earlier film in its city scenes, but also presenting something far bleaker. This is a dead world, choking on its own filth and dust, a world most of humanity seems to have long since left behind. All of it is so amazingly rendered that you forget that none of it is real, that it’s all just bits on a disk somewhere.
Then there’s the music. Hans Zimmer has done a sterling job. The original Vangelis score is a work of genius, and a landmark in electronic music. That Zimmer manages to evoke that music without resorting to slavish copy or even parody is a tribute to his talents. It’s not just the music; the sound design generally is as lovingly and carefully done as the visuals. It is truly sumptuous, and is convincingly immersive.
But finally, one has to ask: does the story stand up? Well, to answer, I’d say that I like a Marvel film as much as the next man, but it’s nice to have the gaudiness and the adrenaline-rush pacing of it all dialled down a bit. Reading a comic is great, but reading a novel is sometimes more satisfying, and this is a film that is not frightened to take its time. I’ve heard others describe the pacing as glacial, but I disagree with them. There’s time to breathe, to think. And you’ll be doing plenty of thinking, because this is a thoughtful film, revisiting themes explored by the original.
There are also a number of great callbacks, both stylistically, and in the cast. Ford we know about, but Edward James Olmos has a brief (and relevant) cameo. Nothing feels forced or contrived, so that you do feel that this fits with what went before. All of this means that the cast get some good work to do. And they all play a blinder, because each of them brought something to the film that needed to be there.
Perhaps the best thing I can say is that, at two and three quarter hours, it didn’t feel that long: I didn’t notice the time go by at all because I was too engrossed.
It could never surpass the original, for a whole variety of reasons mostly connected with how groundbreaking it was, and what our expectations are now, but this is as good a sequel as anyone could ever have hoped to have produced. It’s truly astonishing, and I loved it.
| Blade Runner 2049 | 2 | blade-runner-2049-10fd26eddbf5 | 2018-05-09 | 2018-05-09 09:34:14 | https://medium.com/s/story/blade-runner-2049-10fd26eddbf5 | false | 543 | A man with a can of loopy juice shouting at passers-by on teh Interwebz | null | null | null | Fifteen Minutes of Mantra-filled Oompah | fifteen-minutes-of-mantra-filled-oompah | null | illuminatus | Film | film | Film | 48,256 | Darren Stephens | A northern man | 6d48b26aff2f | illuminatus | 80 | 163 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-02-01 | 2018-02-01 20:16:38 | 2018-01-04 | 2018-01-04 00:00:00 | 1 | false | en | 2018-02-02 | 2018-02-02 17:16:01 | 1 | 10fda9735277 | 1.984906 | 0 | 0 | 0 | In the world of data science, Python is becoming widely popular. People are using it in a variety of ways from back-end web servers to even… | 5 | Tips For Python Programming
In the world of data science, Python is becoming widely popular. People are using it in a variety of ways, from back-end web servers to front-end game development, along with everything in between. It has become a real general-purpose language and a must-have tool for any programmer’s arsenal. However, besides being a multi-use tool, one of the other reasons Python has become so popular is that it is easy to learn. It reads like pseudo-code and is surprisingly agile. Still, while it isn’t the most difficult language to learn, picking up any new programming language can be a daunting task, and it is essential to find the right places to learn. Take a look at some of these tips and tricks that can help you with Python.
The Modules
One of the things most people enjoy about Python is that you can create your own functions and modules and put them all together in a separate folder. You can write out the pieces of code you use across most of your work, convert them into a module, and keep it to the side in that separate folder. This saves you from having to write them again and then debug them to check for errors. It is also important to keep your programs efficient and manageable, especially if they are larger in size. So, you will want to break them up, place multiple functions and definitions into a file, and then use them by importing that file into your scripts and programs.
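For instance, a couple of helpers might live in a file of their own and be imported wherever they are needed (the file and function names below are just illustrative):

```python
# my_utils.py -- a reusable module kept alongside your projects
def clean_text(text):
    """Lowercase a string and strip surrounding whitespace."""
    return text.strip().lower()

def mean(numbers):
    """Return the average of a list of numbers."""
    return sum(numbers) / len(numbers)
```

```python
# main.py -- import the helpers instead of rewriting them
from my_utils import clean_text, mean

print(clean_text("  Hello World  "))  # -> "hello world"
print(mean([2, 4, 6]))                # -> 4.0
```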
True And False
True and False is another popular feature of Python. Think about playing high-end games where you notice that at times you might have to lower the graphics. Sometimes it is difficult to find that option within the game, so you find the config file in the documents folder and change it there, setting, for instance, Vsync = True or False as the situation requires. In Python, True equals 1 and False equals 0. In simpler terms, true means you agree and false means you disagree. You can assign True and False using the “=” sign, and you can check equality with the “==” sign.
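A few lines in the interpreter make the point (this is standard Python behavior):

```python
# True and False behave like 1 and 0 under the hood
print(True == 1)    # True
print(False == 0)   # True
print(True + True)  # 2

vsync = True         # "=" assigns a value, like editing a config file
if vsync == True:    # "==" checks equality (plain "if vsync:" also works)
    print("Vsync is on")
```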
Py2exe
Py2exe is another useful tool for Python. Typically, compiling code in any language into an executable can be a hassle, especially when using Windows. Python makes it a lot simpler: you just have to download py2exe (open-source software available from sourceforge.net) and then convert your modules into an exe.
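Assuming the classic distutils-based py2exe workflow and a script named myscript.py (the name is a placeholder), the conversion boils down to a tiny setup script:

```python
# setup.py -- build a Windows executable from myscript.py with py2exe
from distutils.core import setup
import py2exe  # registers the "py2exe" command with distutils

setup(console=["myscript.py"])
```

Running `python setup.py py2exe` should then leave the executable in a dist folder next to your script.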
Originally published at premhirubalan.com on January 4, 2018.
| Tips For Python Programming | 0 | tips-for-python-programming-10fda9735277 | 2018-02-02 | 2018-02-02 17:16:01 | https://medium.com/s/story/tips-for-python-programming-10fda9735277 | false | 473 | null | null | null | null | null | null | null | null | null | Programming | programming | Programming | 80,554 | Prem Hirubalan | Prem Hirubalan is an entrepreneur based in New York City whose work focuses on data science and cryptocurrency. www.PremHirubalan.org | 98588847dc97 | PremHirubalan | 4 | 144 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-07-14 | 2018-07-14 10:55:54 | 2018-07-14 | 2018-07-14 11:02:24 | 3 | false | en | 2018-07-14 | 2018-07-14 11:12:46 | 4 | 10fe11d3c25 | 3.342453 | 1 | 0 | 0 | Artificial Intelligence has been showing its presence in almost every field from sports to weather. But something that has long been coming… | 5 | Artificial Intelligence to Now Help News Media Companies
Artificial Intelligence has been showing its presence in almost every field from sports to weather. But something that has long been coming and expected is the implementation of AI in news.
News media has always been a target for Artificial Intelligence, and it is needed there now more than ever before. Today, news organisations, let alone regular readers, struggle to determine which news is trustworthy and which is not.
Today, social networks and messaging platforms are the real news distributors, so people often get information before a news company can access and process it. But that, in turn, leads to other problems.
One has no idea which kind of news can be believed, and even if the information is true, it is not professionally processed and often gives a wrong impression. In short, the future of news media is not in its own control. Well, then whose is it?
News media companies have never been the quickest to adapt to technology, and that is evident even now. As they adamantly stood their ground, people graduated from printed news to faster, more readily available news that can be accessed everywhere, anytime.
Tech giants like Facebook and Twitter became the platforms for news to be shared, and that was a huge blow to news media companies. Soon enough, though, news media companies also jumped on board and started sharing their own stories online, which in turn became the most-used source of information.
Implementation
Well, things are changing now. News media companies are taking things on their own and now implementing the same things that made tech companies so popular among the news hogging generation.
Search index optimization is now the order of the day, a change that news companies did not want to make but, simply put, were forced to.
Artificial Intelligence is playing a huge role in this change, as it is altering the way news is presented today. Artificial Intelligence is also helping news companies by strengthening their strong point: a deep and detailed knowledge and understanding of content.
News companies are now understanding how AI actually works and are focusing on the appropriate AI solutions.
And so, when expertise in content creation comes together with data, news companies will have a say in content consumption in a remarkable new way. They would finally be able to create a seamless and meaningful news experience that is still rich with journalistic excellence.
Various Benefits
But how will Artificial Intelligence actually help in the distribution and presentation of news? With artificial intelligence, news companies will be able to figure out what kind of news interests a particular reader, so the news that person receives will be personalized and always interesting to them.
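As a toy illustration of the idea (not any publisher's actual system), content-based personalization can be as simple as ranking candidate articles by their similarity to what a reader has already engaged with; the headlines below are made up:

```python
# Minimal content-based news personalization sketch.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

articles = [
    "Central bank raises interest rates amid inflation fears",
    "Local team wins championship after dramatic overtime",
    "New smartphone lineup unveiled at annual tech conference",
]
reader_history = ["Markets rally as inflation data cools and rates hold steady"]

vectorizer = TfidfVectorizer(stop_words="english")
article_vecs = vectorizer.fit_transform(articles)  # index the candidates
reader_vec = vectorizer.transform(reader_history)  # profile the reader

scores = cosine_similarity(reader_vec, article_vecs)[0]
for score, title in sorted(zip(scores, articles), reverse=True):
    print(f"{score:.2f}  {title}")  # the finance story should rank first
```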
Machine learning will also help build and sustain new forms of interaction between readers and writers at whole new levels. When journalists understand how readers want the news presented, they will be able to tailor it that way, bringing a much better reading experience to consumers at their fingertips.
A much more interesting feature is that news can be shared and read in real time, bringing a personal touch to the relationship between readers and writers. That way, journalists can see in real time which stories are trending and can work on improving them by investing more time and energy in those subjects.
Artificial Intelligence will also finally be able to distinguish fake news from real, helping everyone get access to well-written, true information. Fake news has cost lives, and a lot of confusion is created by fake information; Artificial Intelligence will be able to filter it the right way.
Artificial Intelligence is going to change the way we read the news. The technology is out there; now all that is needed is for as many news media companies as possible to jump on board and come together to make the publishing of news and information seamless, easy and practical.
Follow us:
Facebook — Twitter — Linkedin
This article was written by Digital.brussels; find more articles on our website :)
| Artificial Intelligence to Now Help News Media Companies | 5 | artificial-intelligence-to-now-help-news-media-companies-10fe11d3c25 | 2018-07-14 | 2018-07-14 11:12:46 | https://medium.com/s/story/artificial-intelligence-to-now-help-news-media-companies-10fe11d3c25 | false | 740 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Digital.brussels | Digital.brussels is the one stop shop for digital lovers. | bdd22512e2c0 | digitalbrussels | 1 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-02-05 | 2018-02-05 01:33:48 | 2018-02-05 | 2018-02-05 02:28:40 | 1 | true | en | 2018-03-13 | 2018-03-13 18:35:29 | 0 | 10fe49880917 | 1.562264 | 1 | 0 | 0 | The Work Was Somehow Funded in Virtual Reality by Elon Musk using Bitcoin and Blockchain was Also a Thing Used for Something or Other | 3 | Local Idiot Somehow Creates Artificially Intelligent Machine using Deep Learning, Predictive Analytics, and an Artificial Neural Network Accidentally Developed with Synthetic Biology and Nanotechnology
The Work Was Somehow Funded in Virtual Reality by Elon Musk using Bitcoin and Blockchain was Also a Thing Used for Something or Other
Timmy Timmerson and his brother Jimmy Timmerson Jr. With apologies to The Onion, as per usual. Please do not sue me.
Local idiot Timmy Timmerson somehow created the world’s first artificially intelligent machine today. He is reported to have accidentally used deep learning and predictive analytics techniques with an artificial neural network he somehow built in his garage. Apparently, the work was funded for some unknown reason with bitcoins and blockchain by tech titan Elon Musk, the ex-Doctor Who actor who played Captain Jack Harkness on the show. Elon, who sometimes goes by the name of John Barrowman, accidentally used virtual reality to transfer the bitcoins into Mr. Timmerson’s super saver checking account at the local Fifth Third Bank, where he previously kept his entire life savings of $44.32. As of today, the value of the account stands at roughly six billion dollars. Mr. Musk, who also played mustachioed porn star Elon Musk in the 1970s, where his character was said to have a scent no woman could resist, was reportedly excited to have somehow contributed to the ground-breaking achievement without his knowledge or consent.
Mr. Timmerson, who still lives at home with his parents at the age of 44, drives his mom’s Honda Fit and lists Arby’s as his favorite restaurant on his Facebook page, also used synthetic biology and nanotechnology in ways still not understood, most especially by Timmy, who is an idiot. The artificially intelligent machine, which has an IQ of close to 10,000 and has already read and understood everything ever written or created by man, as well as solved every single remaining problem of philosophy and science, is said to be resting. The AI’s creator, whom the machine apparently considered a father figure for less than 10^-22 seconds before dismissing him as a useless idiot, is at the local Arby’s, where he ordered a beef gyro and a small Mountain Dew. He is also currently resting. Neither was available for comment at the time of this writing.
| Local Idiot Somehow Creates Artificially Intelligent Machine using Deep Learning, Predictive… | 15 | local-idiot-somehow-creates-artificially-intelligent-machine-using-deep-learning-predictive-10fe49880917 | 2018-03-13 | 2018-03-13 18:35:30 | https://medium.com/s/story/local-idiot-somehow-creates-artificially-intelligent-machine-using-deep-learning-predictive-10fe49880917 | false | 361 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Daniel DeMarco | Research scientist (Ph.D. micro/mol biology), Food safety/micro expert, Thought middle manager, Everyday junglist, Selecta, Boulderer, Cat lover, Fish hater | 7db31d7ad975 | dema300w | 3,629 | 148 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 4b35bfc3bfe2 | 2018-09-09 | 2018-09-09 06:13:02 | 2018-09-09 | 2018-09-09 08:21:35 | 4 | false | en | 2018-09-09 | 2018-09-09 17:18:01 | 1 | 10ff012bb311 | 3.541509 | 1 | 0 | 0 | Product Idea — A Tour Guide Mobile Application that works based on Image Recognition and creating unique experiences. | 5 | Lydia — A Tour Guide based on Image Recognition
Introduction
With the improvements in AI and machine learning, one particular area that has benefited greatly is image recognition. Today’s image recognition algorithms can not only identify objects in a 2D image but also correctly classify them with great accuracy.
The error rate for the latest ImageNet Challenge winner was ~3%; the human error rate on the same data set is around the 4% mark.
This same image recognition technology can be used to enhance the travel experience of a lot of travellers, giving each one of them a tour guide of their own.
Market Analysis
Tourist Arrivals Prediction — UNWTO
The number of tourists arriving at destinations worldwide has been steadily increasing and is expected to reach around 1.8 billion by 2030.
More and more people are looking for a “local experience” — this is evident from the rise of services like AirBnB and HomeAway.
Also, according to Bookings.com, 33% of travellers prefer to stay in a holiday rental, and 12% would prefer it if they did not have to meet the owner at all.
There is a growing sense of self-reliance. This presents a unique product proposition in the travel domain.
We see a growing trend where tourists want to discover locations and not learn about them from tour guides.
The Product
The product is essentially a mobile application: the tourist takes a photo of a famous (or not so famous) landmark, and the app recognises the landmark using image recognition.
The application can then give the tourist factual information about the landmark or it can also lead them on to a more discovery-based experience.
Image Recognition by Google Vision API — It has Recognised The Taj Mahal
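For what it's worth, the landmark-detection call behind a screen like the one captioned above can be sketched in a few lines of Python with the Google Cloud Vision client library; the file name and credential setup are assumptions for illustration:

```python
# Sketch of landmark recognition with the Google Cloud Vision API.
# Assumes the google-cloud-vision package is installed and credentials
# are configured via the GOOGLE_APPLICATION_CREDENTIALS variable.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("taj_mahal.jpg", "rb") as f:  # hypothetical tourist photo
    image = vision.Image(content=f.read())

response = client.landmark_detection(image=image)
for landmark in response.landmark_annotations:
    print(landmark.description, round(landmark.score, 2))  # e.g. "Taj Mahal" 0.9
```

The returned description string would then be the key used to load the matching experience screen.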
Factual Information of the Landmark
Building a mobile app that provides factual information is a big improvement on today’s traditional tourist discovery methods (destination pamphlets).
But factual information does not necessarily provide what a tour guide does: a story or an experience to remember.
Discovery Experiences
When we build mobile apps that can provide various experiences to tourists, we ensure that the customer is always in command of their experience.
Sample Discovery Experience Screen
In the sample I have provided, after the tourist has taken a photo of the Taj, the mobile application recognises the landmark as the Taj Mahal and loads the experience screen.
The tourist can then decide which experience is most interesting, and the mobile app can guide them to various spots and narrate (with AR, maybe?) the experiences along the way.
This guided tour will continue until the tourist has covered that particular experience for the landmark.
Sample Experience Route Around the Taj
Monetisation
The mobile app essentially has to be free. Once we reach a critical mass of customers using the app, a marketplace can be developed where local suppliers can advertise their hotel rooms, restaurants, activities and so on.
The idea is to provide only those Ads or offers that can help our customers get the local discovery experience.
Keys to Success
This product will need a few things in place to be successful.
The mobile app has to be free, with minimal screen space given to ads, especially during a discovery experience. Each experience can be followed by an ad or an offer we determine the customer might enjoy.
We should also start in Europe — it has a high density of historical monuments, and about 51% of tourists travel to Europe.
We want to ensure that people are able to enjoy their trips and have unparalleled experiences.
Key Opportunities
The following are the key opportunities facing the product —
We have the ability to define experiences around various famous landmarks.
We can integrate with various Online Travel Providers to provide discovery-experiences to their customers.
We can partner with various government agencies and help promote regions and drive their tourism-based economies.
Key Threats
Major threats will come from tour guide unions and online travel agents.
Apart from that, there will be pressure to get the bottom line into the green as soon as possible — meaning pressure on ad conversion. This might start influencing our discovery experience.
Conclusion
This is just one way image recognition can be brought to market in the form of a product that can help people.
The objective here is to help people travel better and discover better experiences.
Hope you enjoyed the article; please share your thoughts and ideas in the comments section. If you would like me to cover a topic, please email me at [email protected]
| Lydia — An AI based Tour Guide | 1 | an-image-recognition-based-tour-guide-mobile-app-10ff012bb311 | 2018-09-09 | 2018-09-09 17:18:01 | https://medium.com/s/story/an-image-recognition-based-tour-guide-mobile-app-10ff012bb311 | false | 753 | Sharing various Product Ideas, Emerging Technologies and Opportunities. | null | null | null | Yasar Siddiqui’s Blog | yasar-siddiqui | TECHNOLOGY TRENDS,PRODUCT DESIGN,PRODUCT DEVELOPMENT,IDEAS,FUTURISM | yasarsiddiqui1 | Travel | travel | Travel | 236,578 | Yasar Siddiqui | Engineer, Ideator, Thinker. Churning Product ideas on medium.com/yasar-siddiqui. Reachable at [email protected] | c6617269b03c | yasir06 | 0 | 1 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2017-09-09 | 2017-09-09 03:01:21 | 2017-09-09 | 2017-09-09 03:11:44 | 1 | false | en | 2017-09-09 | 2017-09-09 03:11:44 | 0 | 11008b90ce11 | 2.381132 | 4 | 1 | 0 | CxOs have a tough organization culture challenge. You have seen successful, well established businesses disrupted, even bankrupted by… | 5 | How to Develop an AI-ready Culture in your Organization
CxOs have a tough organizational culture challenge. You have seen successful, well-established businesses disrupted, even bankrupted, by internet businesses. While you are busy fighting that threat and upgrading your technology capabilities, having just caught up with the internet era, analytics- and mobile-enabled businesses are disrupting even technology-savvy companies. And now you can see AI looming as a threat.
How do you prepare your organization, the people and the culture, for an AI-world? While AI might threaten the existence of your business, successful transformation could provide big wins. How do you ensure your organization is ready to embrace and leverage AI?
The key to AI is recognizing that decisions are best made based on data and evidence rather than opinions. Decisions get better with more data, especially since a lot of the data and signals we get are Ambiguous, so we must learn continuously. The environment is dynamic and uncertain, so we must challenge, validate and adapt our models regularly. Further, basic assumptions and models can change due to Volatility, so we must re-learn frequently. Finally, we need to be agile to keep up with the pace of change, and therefore we must enable and empower the organization and decentralize decision making.
These steps are exactly what is required to get your organization ready to adopt AI. As you collect and use more data, you will also enable AI algorithms. As you decentralize decision making, your processes are ready to adopt AI.
At the core of an AI-driven organization is a culture of data appreciation and evidence-based decision making. CxOs must take these 5 steps to prepare a future-ready organization.
1. Nurture a culture of evidence-based decision making over HIPPO (Highest-Paid Person’s Opinion). This practice is well documented elsewhere. When decisions are based on who can back their recommendation with the best evidence rather than the opinion of the senior-most person, you are well on the way to becoming a smarter organization.
2. Continuously monitor, test and course correct. We live in a VUCA (volatile, uncertain, complex and ambiguous) world. Data can be ambiguous and decisions prone to error. Circumstances and therefore the correct course can change very rapidly. You need to be agile to monitor progress and course-correct. Follow the principles of lean startup to adapt quickly.
3. Measure and record everything. If you have data, you can make smart decisions. When you do not have data, you need to capture it. Put sensors on all assets that cost more than $20. Sensors and storage are cheap and getting cheaper, so the ROI will be high. All systems already capture data on outcomes. In addition, capture data on failures and activity as well — that data will be invaluable for modelling and process optimization, and later to train AI.
4. Digitally connect the extended organization and enable information flow. Connect all employees and stakeholders (customers and suppliers) with a digital communication platform. Make relevant information available — when in doubt, share it. Of course, implement information security first.
5. Enable employees. Empower employees to make decisions. But first, enable them to capture data and implement systems and processes to streamline decisions.
It all starts with intent and involves patient, deliberate, sustained interventions from the top, but as you can see, winning with AI can be a systematic process with a measurable impact and ROI along the way.
| How to Develop an AI-ready Culture in your Organization | 4 | https-medium-com-subinder-how-to-develop-an-ai-ready-culture-in-your-organization-11008b90ce11 | 2018-05-18 | 2018-05-18 17:55:13 | https://medium.com/s/story/https-medium-com-subinder-how-to-develop-an-ai-ready-culture-in-your-organization-11008b90ce11 | false | 578 | null | null | null | null | null | null | null | null | null | Leadership | leadership | Leadership | 87,192 | Subinder Khurana | Entrepreneur with a track record. Advisory Board member of several successful startups. Member, NASSCOM Product Council. Board member, TiE Delhi. | 9a108ed228d0 | subinder | 16 | 6 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-08-24 | 2018-08-24 02:32:39 | 2018-08-28 | 2018-08-28 15:11:01 | 3 | true | en | 2018-08-29 | 2018-08-29 14:57:41 | 1 | 1103e973378d | 2.289623 | 1 | 0 | 0 | Carpentry can be a nice hobby or even a job, however, it can also be quite dangerous. Many people injure themselves while they are working… | 5 | Using machine learning for woodworking safety
Carpentry can be a nice hobby or even a job; however, it can also be quite dangerous. Many people injure themselves while working with wood, and most of these injuries are irreversible. In fact, I know of more than one person who has lost one or more fingers.
The Dutch proverb says:
‘Voorkomen is beter dan genezen ‘
meaning that prevention is better than cure. I think that in this case it is true: not only is it, for now, impossible to regenerate fingers, but prevention would also avoid a lot of lost productivity.
I have come up with an idea that might work to prevent hand injuries from operating the radial arm saw. The system I propose might also be applicable to other electric saws. But, like everyone else, I only have so much time, which means I have to pick my battles and in this case put my attention elsewhere. For all I know, I might not even be the first person to come up with this idea. Still, I want to put the idea out there so someone else can develop it further and maybe even build the actual product.
A side view of a radial arm saw with two cameras and a microcontroller. The red camera is attached to the arm and is directed downwards to capture the hand movements. The green camera is directed at the eyes of the person working the saw to determine if he or she is paying attention.
In the past, it has been very difficult for a computer to recognize a moving hand in real time. However, with recent advances in machine learning and increasing computational power, this has now become possible.
The same technologies behind self-driving vehicles could be used to predict the hand movements of the person operating the saw. Eye movement tracking could be used to determine if someone is paying attention. If someone comes dangerously close to the saw blade, a warning system and automatic braking would be activated.
An example of gesture recognition using deep learning. Source: https://medium.com/@muehler.v/simple-hand-gesture-recognition-using-opencv-and-javascript-eb3d6ced28a0
The first step in developing the technology is real-time hand movement recognition. Recent advances in gesture recognition using deep learning make this possible. The second step is eye tracking. Eye movement could be projected onto the images from the hand movement tracking camera.
The first software version could simply check whether any fingertips are in the danger zone (sketched after the figure below). Later iterations could use predictive models to anticipate how the saw operator is going to behave and when things will go wrong. The integration of eye movement tracking is the final and hardest part.
A bird’s-eye view of the radial arm saw. The red boxes attached to the saw’s arm represent two downward-facing cameras, and the red area around the boxes represents the cameras’ view. The yellow area around the saw blade with the dotted red line is the danger zone. If a hand enters this area, the saw automatically brakes.
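A first pass at the danger-zone check could be a plain point-in-polygon test on the fingertip coordinates coming out of the hand tracker. The zone geometry and the brake call below are hypothetical stand-ins for the real saw and hardware:

```python
# Sketch: brake if any tracked fingertip enters the danger zone.
import numpy as np
import cv2

# Danger zone around the blade, as a polygon in camera pixel coordinates
DANGER_ZONE = np.array([[300, 100], [340, 100], [340, 400], [300, 400]],
                       dtype=np.int32).reshape(-1, 1, 2)

def trigger_brake():
    print("BRAKE: fingertip inside danger zone")  # placeholder for a hardware call

def check_fingertips(fingertips):
    """fingertips: list of (x, y) points reported by the hand tracker."""
    for (x, y) in fingertips:
        # pointPolygonTest returns >= 0 when the point is inside or on the edge
        if cv2.pointPolygonTest(DANGER_ZONE, (float(x), float(y)), False) >= 0:
            trigger_brake()
            return True
    return False

check_fingertips([(320, 250)])  # inside the zone, so the brake fires
```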
One other challenge is the braking mechanism of the radial arm saw. There are already some great braking mechanisms in, for example, table saws, and the system proposed here might also work in table saws. However, finding a good braking mechanism for the radial arm saw might be the biggest challenge.
Saw Stop is a perfect example of a table saw with an excellent braking system. However, its biggest flaw is the way it detects the actual hand or finger: it uses electrical conductivity. This means that if the wood is not uniformly dried, the saw can stop unexpectedly.
Good luck to anyone interested in pursuing the proposed idea. If there are any questions, feel free to contact me.
| Using machine learning for woodworking safety | 1 | using-machine-learning-for-woodworking-safety-1103e973378d | 2018-08-29 | 2018-08-29 14:57:41 | https://medium.com/s/story/using-machine-learning-for-woodworking-safety-1103e973378d | false | 461 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Piet Hanegraaf | Neuroscience | 4852fd637d65 | HanegraafPiet | 33 | 426 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-04-18 | 2018-04-18 22:25:24 | 2018-04-18 | 2018-04-18 22:39:26 | 1 | true | en | 2018-04-18 | 2018-04-18 22:39:26 | 0 | 1105771dab8d | 2.709434 | 78 | 4 | 1 | The role of technology is under attack | 3 | Technology and its Discontents
The role of technology is under attack
Illustration: mmustafabozdemir/Getty Images
by K.N.C.
Nuclear bombs can destroy us. Facebook undermines our privacy. Artificial intelligence (AI) and robots can enslave us (or, worse, take our jobs). Synthetic biology and gene-editing have humans playing God. Social media make us depressed: we’ve never been so connected yet never so alone.
Today a “techlash” is under way. It comes in many forms, but two stand out. First, a belief that web titans such as Facebook, Amazon and Google have grown too dominant; and, second, a view that AI and algorithms are not transparent or accountable. Both concerns pit the individual against potentially overwhelming power — of the company, the platform, the algorithm.
Take the web giants. They collect a vast amount of data on their users. Much of this is sensitive, from medical matters to political views. So protecting privacy is vital — yet many people have been shocked at revelations about the extent of the information Facebook holds, and the company’s relaxed approach to safeguarding it. That has further fuelled fears about the platform’s growing influence on society and politics.
Facebook’s mantra, “move fast and break things” (an injunction to software developers not to rely on legacy code), has the ring of not caring about the consequences. The motto echoes the narrator’s words in “The Great Gatsby” by F. Scott Fitzgerald, written in a previous period of misgivings over the accumulation of power: “They were careless people,” he laments. “They smashed up things … and then retreated back into their money or their vast carelessness.”
China’s web giants, such as Alibaba, Tencent and Baidu, face less of a “techlash”. The Chinese state is hardly a champion of privacy protection. Yet even there the extraordinary extent of data-gathering by the web companies is starting to raise eyebrows.
The second concern — over AI, algorithms and robotics — involves a fear that the technologies may one day start to work beyond human control. Might such systems become so sophisticated that they surpass the ability of people and institutions to manage them? And could that even threaten humanity?
A more immediate threat is that the algos and bots may replace human labour, creating a jobs apocalypse. Economists are divided on this. Optimists point out that technology always displaces labour, but that new jobs are created around the new methods. Pessimists counter that never before have so many jobs been threatened at once.
A further worry is that such technologies might operate outside the transparency and accountability that democracy requires. For example, in many court systems in America, bail, sentencing and parole are influenced by computer systems that inform the judge and other decision-makers about the likelihood that a person will miss their court date or reoffend. But the systems, supplied by private companies, are not open to outside inspection, usually on security or intellectual-property grounds. In some cases, even the jurisdictions that have licensed the software are unable to inspect it because of the commercial terms of use.
To critics, this is the canary in the coalmine for how the algorithmic society may unfold more broadly. If safeguards are lacking in the legal system, a domain inherently designed to have them, how can we be confident that adequate protection of rights will prevail anywhere else?
The struggle for liberalism in the 19th century involved the individual versus the state. In the 20th century it added a new dimension: the individual against the bureaucracy and the company. In the 21st century it has widened again: man versus algorithm.
In Open Progress, the aim is to examine these issues in depth. It will also consider the controversies and consequences flowing from other emerging technologies, such as brain-computer interfaces and self-driving cars. And it will look at the environment and potential responses to climate change. Technology seems destined to touch and transform just about everything: the case for understanding it within the framework of liberal values is essential.
| Technology and its Discontents | 331 | technology-and-its-discontents-1105771dab8d | 2018-08-25 | 2018-08-25 01:41:56 | https://medium.com/s/story/technology-and-its-discontents-1105771dab8d | false | 665 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | The Economist | Insight and opinion on international news, politics, business, finance, science, technology, books and arts. | bea61c20259e | the_economist | 333,655 | 36 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 1a702876e304 | 2018-02-26 | 2018-02-26 23:30:08 | 2018-02-26 | 2018-02-26 23:34:23 | 2 | false | en | 2018-02-26 | 2018-02-26 23:34:23 | 7 | 1105c1889bb4 | 2.775786 | 0 | 0 | 0 | Shout Outs! In addition to countless others that gave us feedback in writing, I’d like to offer a special thank you to Jim Dries, Peter… | 4 | Insights: Empathy and Behavior with AI
Shout Outs! In addition to countless others that gave us feedback in writing, I’d like to offer a special thank you to Jim Dries, Peter Kim, Janine Williams, Steve Baxter, Ilias Beshimov, Keith Tamboer, and Sarah Hague. We all really appreciate the fact that you took time out of your busy schedules to share some insights over the phone!
The topics for the last couple of weeks have centered around how A.I. can help give us context, guide us, nudge us in the right direction, and predict events based on behavior.
So, without further ado, let’s share a fictional story of how this would look in a land not so far away :-).
The Companies
There are two companies:
OldSchool, Inc. OldSchool has the latest Martech and CRM toys. But, they rely on traditional sales and marketing processes to fill the funnel and close deals.
NextGen, Inc. NextGen also has the latest Martech and CRM toys, but they have augmented these solutions with machine learning and deep learning services that predict actions based on user behaviors.
Old School Marketing and Sales Process
https://shirtoid.com/
OldSchool automates everything with the latest and greatest Martech and CRM stacks. Many of OldSchool’s processes are automated. For example, a user clicks on an Adwords campaign and that kicks off a sequence of emails with a specific conversion goal. OldSchool even has a marketer dedicated full time to A/B testing to optimize their processes!
However, the contact’s behavior and context outside the inbound marketing campaigns aren’t tied to the propensity for conversion. In OldSchool’s case, clicking on a blog link may help fill the funnel and provide a nice lead score, but it doesn’t help guide OldSchool’s sales team with empathy or context. Said another way, finding which content works best for conversion must be A/B tested (a manual process), and once the user has engaged with one of OldSchool’s representatives there is limited context during the engagement process beyond ‘John Doe downloaded a white paper.’
NextGen Marketing and Sales Process
Like OldSchool, NextGen has the latest and greatest Martech and CRM toys. And, also like OldSchool, many of their actions are automated and A/B tested to improve performance.
However, there is a fundamental difference between NextGen and OldSchool. NextGen considers intent from a variety of sources to measure a customer’s propensity to meet a certain goal (such as a conversion). NextGen also creates interfaces that have empathy built in.
In other words, NextGen doesn’t just measure intent based on interactions with their own content, but measures intent by tracking the user’s behavior in channels that have nothing to do with their inbound marketing campaigns and content.
Twitter sentiment? Check
LinkedIn job change? Check
News site comment? Check
Participation in a trade show? Check
(…) Check
This gives sales pros a lot of power because they can engage the potential customer with leading questions! It also allows firms to predict future behavior based on how they may answer some simple questions. So the lack of data in B2B environments may not be the end of us!
For example, compare:
Hey John Doe, we noticed that you downloaded the <Fill in white paper title here>. We have a discount for the quarter so we encourage you to <click on some link> and buy our stuff.
vs.
Hey Jane Smith, first of all, congrats on your new job promotion! It looks like quite a change from your previous job at Acme, Inc., but there’s nothing like a new experience to keep us on our toes! Feel free to reach out to me directly if you need any clarifications; we would love to help you achieve <fill in goal here>.
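Under the hood, the multi-channel intent signals from the checklist above might be rolled into a single propensity score along these lines; the signal names and weights are invented for illustration, not a tuned model:

```python
# Toy propensity score combining behavioral signals from several channels.
SIGNAL_WEIGHTS = {
    "twitter_sentiment":   0.15,  # positive sentiment toward our space
    "linkedin_job_change": 0.30,  # a new role often means a new budget
    "news_site_comment":   0.10,
    "trade_show_visit":    0.25,
    "whitepaper_download": 0.20,
}

def propensity(signals):
    """signals: dict mapping a signal name to a value in [0, 1]."""
    return round(sum(SIGNAL_WEIGHTS[name] * value
                     for name, value in signals.items()
                     if name in SIGNAL_WEIGHTS), 2)

jane = {"linkedin_job_change": 1.0, "twitter_sentiment": 0.8,
        "trade_show_visit": 1.0}
print(propensity(jane))  # 0.67 -> hand off to a rep with full context
```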
I bet NextGen will be more successful than OldSchool, all else being equal!
| Insights: Empathy and Behavior with AI | 0 | insights-empathy-and-behavior-with-ai-1105c1889bb4 | 2018-02-26 | 2018-02-26 23:35:22 | https://medium.com/s/story/insights-empathy-and-behavior-with-ai-1105c1889bb4 | false | 634 | Sip on your future! | null | Cup-of-Data-143732069623215 | null | Cup of Data | cup-of-data | MARKETING,DATA SCIENCE,DATA SCIENCE MARKETER,MARKETING TECHNOLOGY,MARKETING AUTOMATION | cupofdata | Marketing | marketing | Marketing | 170,910 | Greg Werner | Build Stuff with Others (BSwO). | bea3104b870f | gwerner | 51 | 136 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | 61d8f53e661f | 2018-06-02 | 2018-06-02 20:52:48 | 2018-06-03 | 2018-06-03 04:50:24 | 3 | false | en | 2018-06-03 | 2018-06-03 15:54:11 | 0 | 11065b90f38 | 10.693396 | 21 | 2 | 0 | I’m writing this in response to a question posed to me, one I think is well worth pondering: | 5 | Is AI Antithetical to Democracy?
The Conversation
I’m writing this in response to a question posed to me, one I think is well worth pondering:
Is AI antithetical to Democracy?
The questor (the one who posed the question) had his own theories, involving Artificial Intelligence and left/right politics, which also made me realize that the moment that you start throwing around movements and -isms, then intelligent discourse becomes almost impossible. To the extent that I want to consider the underlying implications, it’s worth making a few fairly big assumptions, ones that I typically try to do when looking at the impact of technology on society.
#1. Political Labels Are Misleading
A lot of arguments I see about why the world is going to hell in a hand-basket usually come down to taking a label, say Communism, Socialism or Capitalism, then using that as a cryptic short-hand for often wildly different political systems. In reality, every political system on the planet carries some aspects of each one of these as philosophies, to the extent that what would be called wildly socialist in one country might be considered heavily capitalistic in another.
Thus, in order to better understand the real implications of automation and AI (and one of the other assumptions is that these two are increasingly becoming indistinguishable from one another) you have to get down to what the true characteristics of a society really are.
In a pure Democratic society, every political decision would be made by every individual. With populations measuring in the millions, pure Democratic societies are not in fact yet technologically feasible. There are a few things on the horizon that might tip the balance from infeasible to feasible (blockchain being one worth watching closely) but it would also require a personal investment into governing that I think most people in our society would find intrusive.
This means that the kind of government that has emerged almost universally is a representative democracy. Here, the voters delegate both the law making (legislative functions) and the decision making aspects (executive functions), to specific representatives, and usually also provide a third forum for judicial functions, which typically rule upon the legality of given decisions based upon precedent and conformance to a legal schema (a constitution). The challenge typically revolves then around who has the right to elect a given representative.
There are a few countries in which the supreme executor has unilateral authority, and where decisions are made either by that individual or by a controlled oligarchy (typically either a military authority or an oligarchic cabal). In effect, there are fewer checks upon power, and these nations are thus known as authoritarian. Typically in an authoritarian state, those in power cannot be removed from power except through a complete collapse of their authority — in essence, because the people that otherwise hold the levers of subordinate power no longer support that ruling individual or cabal.
Russia was a strongly authoritarian state that collapsed in the early twentieth century over the abuses of the Tsars. For a brief period of time, Russia had a multi-party system (the Mensheviks), until it was taken over by another authoritarian party, the Bolsheviks, led by Vladimir Lenin. Lenin used the ideas of Communism to espouse a belief in the power of the worker, but simultaneously believed in a strong central authority for determining who actually ended up making the decisions (which actually ran counter to what Karl Marx and Friedrich Engels had written in Das Kapital). Soviet Communism was democratic at the lower levels, but once Lenin had established himself (and especially after Stalin came to power) it devolved very quickly into a pure authoritarian state.
The US is not a pure Democracy either. It emerged as a colonial power (due in no small part to the agitation of Benjamin Franklin) that was both too large for Britain to completely govern and (unlike India two centuries later) was largely occupied by former British citizens and their descendants, who saw themselves as being British. They specifically railed against the corporations because it was the corporations (the British East India Company, the Hudson Bay Company, the companies that had formed states such as Pennsylvania, Massachusetts and Maryland) that were the source of much of their woes. This same attitude held towards banks, to the extent that formal banks weren’t chartered until the Adams administration.
A careful reading of the writings before and shortly after the American War of Independence reveal a philosophical belief that would have the most “Founding Fathers”-focused conservative today positively fuming at how “Socialist” they actually are.
#2. Established Power Fears the Power of Voting
American capitalism wouldn’t really get its foundation until the emergence of the railroads and the telegraph in the mid-19th century, and the foundation of Corporatism in the modern sense wouldn’t really happen until the 1920s with the militarization of the American populace after World War I (though it was well established in England by the 1870s).
Corporatism is a form of oligarchy, similar to but not the same as aristocracy. Aristocracies emerged in the 12th century in England on the basis that an invading conqueror (William of Normandy, as an example) would assign geographic regions to his primary political supporters (not just warriors, but also financiers) in exchange for their continued support. Power within aristocracies generally stayed in the aristocracy, with the primary power of the king being the ability both to raise an aristocrat and to remove one (as well as the ability to control marriages).
The power of aristocrats came both from their own tax base and from their ability to form coalitions, but it wasn’t really until the early 13th century (in 1215, with the signing of the Magna Carta) that the Barons of England were able to establish a formal Royal Court that had power of the purse over King John. This body would eventually become the House of Lords. The House of Commons emerged later that century with Montfort’s Parliament (in 1265), which brought together burgher leaders, senior ecclesiastical members and guild leaders to represent the various cities or burghs, though it should be noted that they were considered largely a powerless advisory body until nearly the seventeenth century. Significantly, that was also when innovations in automation were laying the seeds for the second industrial revolution (the first arguably being the rise of agriculture and the subsequent evolution of cities, trading and seafaring 3,500 years ago).
Even given that, the franchise for the House of Commons was originally determined primarily upon social status, and only much later was the franchise opened up to successively more British men (beginning in 1832) and to women (1918 for women over thirty, 1928 for women over 21). This mirrors the US (1870 for black men, 1920 for women over 21, and 1971 for all people over 18 years of age). This means that universal suffrage is historically a very recent concept, and a case can be made that automation has in general made it possible for such suffrage (electing one’s representatives) to take place.
It should be noted that voting provides an alternative to other forms of influence within a government, and it is one of the few that gives power to an individual who has not otherwise achieved power through inherited wealth, luck, family background or granted authority. As such, in any democracy, it is distrusted by the established oligarchy, whether that oligarchy rests on divine mandate, ancestry or wealth.
This can be seen today, where the conservative alliance consists of those who are wealthy via second-generation or earlier wealth, religious fundamentalists, agrarians and those in security- or authoritarian-oriented careers. The “liberal” alliance is mostly those who are defined as being not conservative, though there’s an amorphous sea of independent voters who either are part of smaller coalitions or who are in effect centrists. Note that this doesn’t necessarily equate to Republicans vs. Democrats, though the association is stronger today than it has been for a long time.
Thus, if by democracy one means a representational democracy with potential universal suffrage and typically a bicameral or multicameral distribution of checks and balances, then this gives a better handle to answer the question of how AI affects democracy.
Thomas Frey
#3. Current Voting Systems are Broken
When discussing artificial intelligence (AI), it is worth understanding that AI isn’t a specific technology. Rather, it is the use of a combination of computational power, databases and networks that all work together to perform specific tasks. This is what is usually referred to as Special or Specialized AI (SAIs), and today that kind of specialized AI is becoming pervasive.
Beyond SAIs you have the broader concept of a general artificial intelligence (GAI). SAIs are not self-aware, and they typically have very specific domains around which they are designed to manage computation. There are hints of GAIs in several research efforts, but they are at best very rudimentary. Ironically, the challenge with a GAI is the requirement that it can adapt to meet any conditions. Autonomous vehicle systems are perhaps the closest to GAIs, but it will still be at least a decade or more before GAIs become readily available.
However, even SAIs have (and will continue to have) a profound impact upon the relationship between automation and our society, and especially upon democracy. One of the key aspects of such systems is that they are tools, and as tools they can be used by all sides concerned.
One of the central problems plaguing voting systems is the difficulty of ensuring that once a vote is cast for a given candidate, it doesn’t get changed to a different candidate somewhere in the electronic trail. This is one area where blockchain (not necessarily an AI technology, but critical nonetheless) can be used to log a vote in a confirmable way. The solution isn’t perfect — it is still possible to spoof enough blocks to overwrite a vote, but doing so is extremely expensive and auditable through other methods. It also provides a way of maintaining anonymity in the voting process, if designed right, while still ensuring that a person has not voted more than once in any given race.
This latter point also addresses a frequent shibboleth of the oligarchy — that people are voting more than once for the same candidate, or are voting for candidates that they shouldn't be (as well as providing a test to keep programmed AIs from voting electronically). Coupled with open-sourcing the voting software and hardware, this could in fact create a truly democratic solution.
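To make the tamper-evidence idea concrete, here is a toy Python sketch of a hash-chained vote log. It is an illustration only, not any real election system, and the ballot_id and choice fields are hypothetical:

```python
import hashlib
import json

def add_vote(chain, ballot_id, choice):
    """Append a vote as a block whose hash also covers the previous block."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"ballot_id": ballot_id, "choice": choice, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)

def verify(chain):
    """Tampering with any earlier vote breaks every later hash."""
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        body = {k: v for k, v in block.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if block["prev"] != expected_prev or block["hash"] != recomputed:
            return False
    return True

votes = []
add_vote(votes, "b-001", "candidate A")
add_vote(votes, "b-002", "candidate B")
print(verify(votes))              # True
votes[0]["choice"] = "candidate B"
print(verify(votes))              # False: the altered vote no longer matches
```

Because each block's hash covers the hash of the block before it, changing any earlier vote invalidates every later hash, which is what makes such a log confirmable and auditable.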
At the moment, however, the voting software and hardware are both proprietary, meaning that there is no meaningful way of checking the logic to prevent votes from being manipulated internally. The hardware is concentrated primarily in the hands of three manufacturers, all of whose owners have contributed heavily to conservative politics. Attempts to inspect such systems have generally been stymied by lawsuits, and increasingly there is a divergence between exit polls and election results that raises the possibility that these systems are not in fact reporting accurate totals.
Thomas Frey
#4. The Biggest Threat to Democracy is Disinformation
In a similar arena, any democracy only functions when all of the participants have accurate information. In this sense, an election is very much like a market — ideally everyone should have the same information going into the voting booth. In practice, there is a growing gap between what is portrayed in the media (on all sides, though it's stronger on the conservative side) and the situation on the ground.
There are several challenges facing anyone looking for accurate information. For starters, provenance, the origin of a piece of information, is very seldom tracked. This means that there is no way of telling whether a given graphic, story or video is in fact real or fabricated and presented as real. Again, this is an area where AIs could be used to detect characteristics in news media that indicate the media was fabricated. This is at the very edge of what is doable now, and such veracity filters will almost certainly become more commonplace over time.
Additionally, there are few penalties attached to producing such fake news, though in the wake of the 2016 election, that is beginning to change. It is likely that GDPR, a set of privacy regulations for the European Union, will strengthen the penalties associated with fabricating unsourced news; the recent implosion of Cambridge Analytica, a "data science" company that seemed to specialize in creating fake-news campaigns, underscores the shift.
On the other hand, many of those fake ads were themselves created through SAIs, which would use data analytics to identify and target social media users with content specifically intended to manipulate people’s emotions for or against a given candidate or referendum. Such “bots”, specialized AIs, are increasingly difficult to tell from human beings online, especially given the comparatively compressed nature of such media.
This means that SAIs are both part of the problem and part of the solution. The struggle between allied and enemy bots is just one more manifestation of the struggle between receiving valid content and receiving spam.
There is another aspect of AIs that needs to be examined with respect to their role in democracy. We are in the process of drawing new lines in the battle between privacy and transparency. A society cannot function when privacy no longer exists. At the same time, too much privacy can serve to hide potentially harmful actions on the part of both governments and individuals. An AI has the potential to infer behavior by analyzing often seemingly unconnected data points — though not necessarily with 100% certainty.
The same traits that make for a psychopathic killer can also show up in the personality profile of a future CEO. A system that uses such personality metrics might assign equal probability to either, yet arresting such a person on their potential for evil often fails to take into account their potential for good. Such systems are already in use today for profiling, often built on dubious training data and unreasonable modeling assumptions that create an invisible bias. Again, this is a case where transparency in systems makes sense, because it allows people to examine both algorithms and training data to determine whether implicit biases have been written into code.
We are already using SAIs like this to decide things like whether people should receive loans or be hired for specific positions. These systems are often proprietary because they are seen as giving a company a competitive edge; but because these SAIs are already making decisions that human beings would once have made, transparency needs to take a higher priority.
Perhaps this should in fact be the rule of thumb with regard to transparency: does the lack of transparency of a given piece of software or training set have the potential to impact the rights and civil liberties of a given person? If it does, then that code should be transparent — open source, inspectable, and reproducible.
So, in at least the short and intermediate term, it is not some ominous Cylon-like AI that we have to be concerned with; it is the placement of the rights of corporations and entrenched interest groups over the rights of individuals, whether in the political or the economic sphere. The AI is simply a tool that helps facilitate one or the other, and it is a human responsibility, perhaps THE human responsibility, to ensure that AIs, like all learning children, do good, not ill, when they grow up.
I have deliberately kept the focus of this article on Specialized AIs. Once we get to General AIs, things change, though perhaps not as much as you may think. Watch this space for a link to the next in this series.
Kurt Cagle is a writer, futurist and software architect living in Issaquah, Washington, just outside of Seattle. He writes the Cagle Report on LinkedIn, and is a contributing editor to Future Sin on Medium.com.
| Is AI Antithetical to Democracy? | 279 | is-ai-antithetical-to-democracy-11065b90f38 | 2018-06-16 | 2018-06-16 01:21:23 | https://medium.com/s/story/is-ai-antithetical-to-democracy-11065b90f38 | false | 2,688 | Futurism articles bent on cultivating an awareness of exponential technologies while exploring the 4th industrial revolution. | null | null | null | FutureSin | null | futuresin | TECHNOLOGY,FUTURE,CRYPTOCURRENCY,BLOCKCHAIN,SOCIETY | FuturesSin | Politics | politics | Politics | 260,013 | Kurt Cagle | null | 4f7d3ae6238e | kurtcagle | 1,133 | 691 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 6f4f43fda871 | 2018-05-18 | 2018-05-18 13:05:05 | 2018-05-18 | 2018-05-18 13:24:09 | 3 | false | en | 2018-06-07 | 2018-06-07 11:58:33 | 7 | 11067294c0d5 | 3.942453 | 1 | 0 | 0 | Think about how much of what you have is emblazoned with a logo. Everything from the food you eat, to the technology you use, everything… | 5 | We’ve entered the era of Logo 3.0
Think about how much of what you own is emblazoned with a logo. Everything from the food you eat to the technology you use carries a little symbol that shows you who made it. Except it doesn't show you exactly who made it. The symbol is the visual representation of the abstract concept of a 'brand', and with it comes a whole host of information. Far more than just a reminder of the brand itself, it also brings with it all the emotion and experiences that you've ever had with the brand.
The logo has been around for a long time, but we’re reaching the era of Logo 3.0 and we’re just starting to really see its power and tap into its full potential.
Let’s look at how the logo developed into its current form:
Logo 1.0 The Trademark
The usage of symbols as logos became more apparent during the 1800s. Amidst the surge of production that the Industrial Revolution made possible, came a surge of different producers too. As this era of mass-production developed, so did the need for trademarks.
Companies began to produce more and transport their products further and further. Trademarking ensured that products were correctly attributed to their source and that they carried the reputation and quality of their makers too.
England introduced the first legal protection of trademarks with the introduction of the 1875 Trademark Act. The Bass Red Triangle is usually cited as being the first trademark registered under the act, making it one of the most significant logos in history.
Logo 2.0 The Visual Identity
By the mid-twentieth century, the role of logos changed again. They no longer had the simple, practical task of identifying a producer, but became a way of capturing and more importantly expressing the essence of the brand itself.
This paved the way for an era of iconic brand designs and designers. Perhaps the most iconic of all was Paul Rand, who is responsible for household logos such as IBM, UPS and ABC. Steve Jobs once asked Rand to create a few options for logos for his startup NEXT computers, and Rand refused, remarking "I will solve your problem for you, and you will pay me … If you want options, go talk to other people."
Steve Jobs acquiesced and eventually paid $100,000 for the NEXT logo, but for Rand he was being paid to answer one question for Jobs:
“How can we represent the entire essence and personality of the brand through a simple visual image?”
Hence he refused to provide ‘other answers’ to this one fundamental question.
Logos came to represent an entire ethos of the brand. Trade was rarely human to human anymore, so logos helped to represent these abstract ‘brands’ in a way that a human could visualise and therefore interact with. Yet we’re now at another stage in logo development…
Logo 3.0 Uncovering The Roots
Logos used to only adorn products, big billboards and TV commercials. They represented a purely visual communication from a brand to its consumers.
Nowadays, if you scroll through your smartphone's home screen, you are met with an entire collection of brand logos. Beyond just visual communication, you interact with these brands by using their apps. Every time you tap the white 'f' on the iconic blue Facebook background, you are interacting with the logo in a far more physical way than was ever possible before. It no longer represents just the visual identity of the brand, but to some extent your entire experience of the brand itself.
More than this, each logo now carries with it more information than it has ever done before. It’s gone far beyond just serving as a brand identity and a trademark, because today it can also reveal secrets that we could only have dreamt about knowing before. With the era of social media, hundreds of thousands of logos are published online every single day. This means that you’re no longer in control of how your brand is being shared. Your consumers are carrying it for you online. Do you know what they’re saying about it?
Previously these manifestations of our brands would have gone completely unnoticed, untraceable and unmeasurable, but with technology such as Logo Recognition API you can now uncover much deeper insights about your brand and people’s interaction with it, all from simply your logo.
Is your online marketing having the effect it should? Are there people using your brand’s identity without your permission? How are people using your brand online?
You can completely reinvent the customer's shopping experience by integrating logo-scanning technology into it. In this new logo era, customers can earn rewards, collect points, discover more information or unlock discounts, all by using logo-scanning technology to interact with products.
Never before has the logo had so much power. This Logo 3.0 era unearths more about the brand than we’ve ever known before. If the brand’s logo is a tree, we’re finally uncovering the true depth of its roots beneath the surface. And we’re still exploring…
Want to learn how you can use Logo Recognition API to uncover more about your brand? Contact one of our team to discuss purchasing options.
Resources
http://www.designhistory.org/Symbols_pages/corporate.html
http://fortune.com/2017/06/16/business-logos-evolution-importance/
http://www.pyramidofman.com/proportions.html
http://blog.hmns.org/2013/05/educator-how-to-create-your-own-ancient-egyptian-art-using-frontalism/
https://99designs.co.uk/blog/tips-en-gb/the-history-of-logos/
| We’ve entered the era of Logo 3.0 | 38 | weve-entered-the-era-of-logo-3-0-11067294c0d5 | 2018-06-07 | 2018-06-07 11:58:34 | https://medium.com/s/story/weve-entered-the-era-of-logo-3-0-11067294c0d5 | false | 899 | For a better, more connected, decentralized future | null | microwork.io | null | Microwork | microwork-more | BLOCKCHAIN,ETHEREUM,AI,IMAGE ANNOTATION,FUTURE OF WORK | MicroworkIO | Logo Design | logo-design | Logo Design | 5,028 | Tom Norman | I like to think and drink coffee. Sometimes at the same time. | 2cec1c5f0452 | iamtomnorman | 160 | 161 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | a09f9a21fe5 | 2017-11-07 | 2017-11-07 14:37:13 | 2017-11-07 | 2017-11-07 15:39:40 | 2 | false | en | 2017-11-07 | 2017-11-07 15:49:38 | 10 | 110c3745a458 | 5.115409 | 0 | 0 | 0 | In just the last few years, data has become the most valuable resource in advertising, and subsequently, the most guarded. Many businesses… | 4 | Breaking Down The Walled Garden: The Benefits Of Data Co-ops In Advertising
In just the last few years, data has become the most valuable resource in advertising, and subsequently, the most guarded. Many businesses — from ad agencies to tech giants — have been reluctant to share the fruits of their coveted walled gardens, but there’s also been an increasing awareness that first party data is often not enough for a truly comprehensive advertising strategy.
As a result, more businesses are beginning to participate in data co-ops, which — depending on the transparency of the data — can prove to be an invaluable resource for companies looking to learn more about their audiences' behavior and create better-targeted ad campaigns.
Let’s take a closer look at the biggest benefits of data co-ops:
What value does a data co-op provide its members?
A data cooperative gives its members access to all of the data its members are willing to share. Jacob Ross, President of Adroit Digital, describes it more precisely as "an aggregated pool of shared data, provided by members of the co-op, in return for use of all the data and services around a co-op."
Data co-ops provide members with a broader understanding of their current and potential customer base — from how they think, to how they act, and how they shop.
A recent study by Forrester Consulting found that 40% of respondents using a data co-op reported having “better access to customers across channels,” and 42% said data co-ops “drive better marketing ROI.” Furthermore, the study found that co-op members felt a stronger connection with their customers, with 46% of respondents reporting their co-op lead to an improved customer experience.
The study also reported that 71% of respondents felt that implementing a data co-op increases revenue, and 76% said it lowers expenses.
“The study shows that marketers understand they cannot limit their digital marketing toolset to only first and third party data,” Ross says, “and that they must leverage data cooperatives to achieve a more holistic view of their current and potential customers. Sharing data means scaling data. Clients who participate in data co-ops can reach a much larger audience with relevant messaging and gain deep insights into how those audiences are responding, which ultimately serves to drive more sales.”
Are data co-ops the future of advertising?
"I'm not sure that's the right question," said Kim Evenson, Chief Marketing Officer at Legacy, in an AMA interview. "I think the past and the future of great marketing is driven by a really deep understanding of [your] customers… First-party data is fantastic because it helps you understand how people interact with your brand. Second-party data, whether through a formal co-op or just working with a partner, helps you be less myopic, and from it you can start to understand your user in a broader context. However the marketer approaches it, the tools alone will not produce the win. It's the visceral understanding which allows the marketer to think like their audience that moves the needle. And if we use co-op data to drive that understanding, then it will leave a very big mark on the industry."
Co-ops can provide a comprehensive data solution
As the number of businesses leveraging second and third party data continues to rise, so does the use of real-time data. Real-time data adds yet another layer of insight into consumer behavior, and now that new technology is making it easier for businesses to capture this data, its value is only going to increase. (Forbes puts it more bluntly: “If you’re not already connecting, consolidating and analyzing your data in real time…you can be sure that your competitors will be soon if they haven’t already.”)
However, this doesn’t mean historical data will be any less significant. On the contrary, industry experts believe that utilizing both types of data will be key in executing a successful advertising strategy, and that data co-ops will be the comprehensive solution that provides a complete timeline of data. In an interview with the AMA, Catherine Garnett, Research Strategist at redpepper said, “To attract new customers, retail brands need insights about the bigger picture overall, and the aggregated consumer information that data co-ops can provide, can offer an effective solution….In the future, real-time data will gain importance over historical data. To continue to be an effective partner, data co-ops will need to bridge the gap between real-time data and historical data.”
Paolo DiVincenzo, general manager at Adroit Digital, sees the demand for data co-ops greatly increasing in the future. He says, “the appetite for data is growing” quickly because marketers are seeing how effective it can be in helping to drive business strategies, and “data-leveraging tools are becoming more accessible.”
"With the 'give-to-get' model of co-ops being hard to beat economically," DiVincenzo says, "co-ops are poised to be the go-to source for this new demand. These factors bode well for a bright long-term future for co-ops."
Still room for improvement
Despite the value that data co-ops offer their members, there are still areas where they can improve. For instance, a big complaint many industry leaders have is that the shared data in most co-ops is too heavily anonymized, and that there should be much more transparency into where the data is coming from. Steven Ustaris, CMO at OwnerIQ, Inc., argues that for a transparent model to work, co-ops would have to give members more control over how their data is aggregated. The current co-op model "requires data owners to give up way too much control," Ustaris says. "Governance of the cooperative itself should be shared with all participants, both data providers and consumers. The model should be designed to support their business needs, not those of the co-op."
Ustaris believes brands looking to join a data co-op “should be able to decide what data they would like to share, who they would like to share it with, for what use cases, and even how it is executed and reported. The idea of blind aggregation and off-the-shelf executions is an antiquated and ineffective approach for most sophisticated marketing organizations.” Ustaris’ ideal co-op model offers much more flexibility and customization, allowing companies to choose what data they’re sharing and when they’re sharing it.
A future of shared data
Today, the majority of businesses agree that data is the key to unlocking hidden insights that can truly optimize almost all aspects of the advertising process, but it is also clear that no single company's data can do that on its own. Even though data co-ops still need to evolve in order to reach their full potential, the advertising world seems to be in resounding agreement that they will be the ultimate solution — breaking down the walled gardens in a safe, secure, and responsible way.
That said, this journey will take time, and like many paradigm shifts, it is easier said than done. Companies should take a more active role with their data co-ops, pushing for agreements that make the data as useful as possible without breaching user privacy or breaking customer trust. This will require a leap of faith among industry leaders to trust each other in our ever more competitive global economy.
Topics: AI, data, data co-ops
Originally published at blog.dumbstruck.com.
| Breaking Down The Walled Garden: The Benefits Of Data Co-ops In Advertising | 0 | breaking-down-the-walled-garden-the-benefits-of-data-co-ops-in-advertising-110c3745a458 | 2017-11-07 | 2017-11-07 15:51:18 | https://medium.com/s/story/breaking-down-the-walled-garden-the-benefits-of-data-co-ops-in-advertising-110c3745a458 | false | 1,254 | Dumbstruck is a research and analytics platform that helps companies grow through advanced video testing and optimization. | null | dumbstruckinc | null | Dumbstruck | dumbstruck | null | dumbstruck | Big Data | big-data | Big Data | 24,602 | Matt Allegretti | null | e25861201516 | mattallegretti1 | 5 | 1 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | 6d716b1eb42e | 2018-05-01 | 2018-05-01 15:03:11 | 2018-05-06 | 2018-05-06 16:18:14 | 2 | false | es | 2018-05-06 | 2018-05-06 16:18:14 | 0 | 110e89eacc92 | 1.783333 | 0 | 0 | 0 | La econometría espacial es una subdisciplina de la econometría general que proporciona las técnicas de contrastación y de estimación… | 5 | Econometría Espacial 2: Autocorrelación Espacial
Spatial econometrics is a subdiscipline of general econometrics that provides the estimation and testing techniques needed to work with data that exhibit problems of spatial heterogeneity and/or spatial dependence.
Spatial econometrics has undergone strong methodological development, driven by the need to work with cross-sectional data.
When this type of data is used, two kinds of spatial effects tend to appear: spatial heterogeneity and spatial dependence.
Spatial dependence, or spatial autocorrelation, arises whenever the value of a variable at one location in space is related to its value at one or more other locations.
Unlike spatial heterogeneity, spatial dependence cannot be handled by standard econometrics. This is due, as we will see later, to the multidirectionality that dominates the interdependence relationships between spatial units.
Spatial dependence or autocorrelation appears as a consequence of a functional relationship between what happens at a given point in space and what happens elsewhere.
Spatial autocorrelation can be positive or negative. When the variable under analysis is distributed randomly, there is no spatial autocorrelation.
Two basic causes induce the appearance of spatial dependence: measurement errors and spatial interaction phenomena.
Measurement errors can arise as a consequence of a poor correspondence between the spatial extent of the economic phenomenon under study and the spatial units of observation.
Spatial interaction phenomena, spillover effects and spatial hierarchies can all give rise to a pattern of spatial autocorrelation.
Temporal dependence is strictly unidirectional: the past explains the present. Spatial dependence, in contrast, is multidirectional: a region can be affected not only by a contiguous region but by many others surrounding it, just as it can influence them.
The solution to the multidirectionality problem in the spatial context involves defining the so-called spatial weights matrix (also known as the lag or contiguity matrix), W:
The contiguity matrix takes the general form

$$W = \begin{pmatrix} 0 & w_{12} & \cdots & w_{1N} \\ w_{21} & 0 & \cdots & w_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ w_{N1} & w_{N2} & \cdots & 0 \end{pmatrix} \qquad (1)$$

where each off-diagonal element $w_{ij}$ captures the spatial relationship between units $i$ and $j$, and the diagonal elements are zero, since a unit is not a neighbor of itself.
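To make this concrete, here is a minimal Python sketch (not from the original post) of building a binary contiguity matrix and row-standardizing it; the four-region map and the variable y are invented for the example:

```python
import numpy as np

# Hypothetical 4-region map: regions 0-1, 1-2 and 2-3 share borders.
neighbors = [(0, 1), (1, 2), (2, 3)]
N = 4

# Binary contiguity matrix: w_ij = 1 if regions i and j are neighbors.
W = np.zeros((N, N))
for i, j in neighbors:
    W[i, j] = W[j, i] = 1.0

# Row-standardize so each row sums to 1 (a common convention).
W_std = W / W.sum(axis=1, keepdims=True)

# The spatial lag Wy replaces each region's value with the average
# of its neighbors' values.
y = np.array([10.0, 12.0, 9.0, 15.0])
print(W_std @ y)  # [12.   9.5 13.5  9. ]
```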
In the next post we will talk more about the contiguity matrix and the criteria it must satisfy…
| Econometría Espacial 2: Autocorrelación Espacial | 0 | econometría-espacial-2-autocorrelación-espacial-110e89eacc92 | 2018-05-06 | 2018-05-06 16:18:16 | https://medium.com/s/story/econometría-espacial-2-autocorrelación-espacial-110e89eacc92 | false | 371 | Todo sobre análisis de datos para PYMES | null | null | null | High Data | high-data | PYMES,EMPRENDIMIENTO,DATA ANALYSIS,DATOS,CONSULTORIA EMPRESARIAL | null | Data Science | data-science | Data Science | 33,617 | Luis Alberto Palacios | I’m a passionate about life that enjoy meeting people, living new adventures and sharing my experiences | 7a4f2ad9c738 | LuisAlbertoPala | 97 | 51 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2017-09-13 | 2017-09-13 12:42:52 | 2017-09-13 | 2017-09-13 12:45:03 | 0 | false | en | 2017-09-13 | 2017-09-13 12:45:03 | 0 | 110fe4500b80 | 1.818868 | 1 | 0 | 0 | Most of us understand that artificial intelligence (AI) offers opportunities for productivity improvements in the form of speed… | 5 | What Artificial Intelligence Can Teach Us about Ourselves
Most of us understand that artificial intelligence (AI) offers opportunities for productivity improvements in the form of speed, automation, standardized actions and responses, plus the opportunity for continuous improvements via machine learning. These opportunities are enabled by data inputs that are analyzed and processed through AI algorithms that execute a desired decision and action. For all of the great capabilities and benefits that AI can provide, there is also a potential dark side. AI solutions can easily codify our prejudices, bias, gender stereotypes and promote injustices intentionally or unintentionally. This threat, as real and serious as it is, can also be seen as an opportunity to evaluate who we are, what we want the future to look like, and then codify a better tomorrow.
Let’s first take a look at what makes us human. When we interact with other humans, we access our accumulated life experiences, ambitions, education, training, character and personality traits, ethics, cultural norms, religious paradigms and morals to think through problems and challenges and to communicate our feelings and ideas. Our human qualities help us be great salespeople, teachers, writers, caregivers, programmers, police officers, doctors, inventors, carpenters, plumbers, managers, etc. We bring our human skills and attributes with us to work. Our employers don’t expect to have to invest in all of these attributes before we can be productive. AI solutions, on the other hand, can’t on their own bring these skills to the job. AI is just code and must be programmed by humans to be productive and successful, and to treat humans humanely.
In more complex environments like sales and customer service, human participants often depend on their personal backgrounds and understandings to successfully communicate and resolve difficult situations to everybody's satisfaction. How do we program these human qualities into AI? How do we add feelings of empathy, compassion, love, moral obligations, kindness, fairness, equality and justice to our AI solutions? We have a lot of thinking to do here. We expect these qualities to be present in even the most junior of call center workers, yet these basic qualities are absent from AI solutions unless we add them.
In order to implement an AI solution that can successfully interact independently with humans, it’s going to take process experts and researchers from a multitude of disciplines to think through and configure human traits into AI solutions. It’s going to take a lot of soul searching to identify and codify how we want AI systems to react and respond. We will need to monitor for unintended consequences that will arise as perfectly logical systems produce results that are unfair, unjust and don’t respect human rights. It’s up to us to create our future, and our future will be an exaggerated version of ourselves today. This process, however, will teach us a great deal more about ourselves and our own humanity.
| What Artificial Intelligence Can Teach Us about Ourselves | 1 | what-artificial-intelligence-can-teach-us-about-ourselves-110fe4500b80 | 2018-02-05 | 2018-02-05 02:49:49 | https://medium.com/s/story/what-artificial-intelligence-can-teach-us-about-ourselves-110fe4500b80 | false | 482 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Kevin R Benedict | Kevin Benedict is an opinionated futurist and Principal Analyst at the Center for Digital Intelligence™ | ebd356b78c8e | krbenedict | 2,255 | 2,245 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-05-07 | 2018-05-07 19:38:26 | 2018-05-07 | 2018-05-07 19:43:03 | 0 | false | en | 2018-05-07 | 2018-05-07 19:43:03 | 0 | 11105e648a34 | 0.049057 | 2 | 0 | 0 | null | 5 | Taking Your Chatbot to the Next Level with Artificial Intelligence and Sal Kuyateh
| Taking Your Chatbot to the Next Level with Artificial Intelligence and Sal Kuyateh | 11 | taking-your-chatbot-to-the-next-level-with-artificial-intelligence-and-sal-kuyateh-11105e648a34 | 2018-05-14 | 2018-05-14 09:50:54 | https://medium.com/s/story/taking-your-chatbot-to-the-next-level-with-artificial-intelligence-and-sal-kuyateh-11105e648a34 | false | 13 | null | null | null | null | null | null | null | null | null | Chatbots | chatbots | Chatbots | 15,820 | Millennial Skills | Inspired Millennial looking to help my generation get better everyday. | af8776750021 | MillennialSkills | 323 | 320 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-06-20 | 2018-06-20 08:30:39 | 2018-06-20 | 2018-06-20 08:38:52 | 2 | false | en | 2018-06-28 | 2018-06-28 12:08:38 | 9 | 111369bff54a | 1.749371 | 8 | 0 | 0 | Google is a company, which works in dozens of research fields. One of the main ones is medicine AI development. We have already told you… | 4 | News: Google AI can predict patient death with a 95 percent accuracy
Google is a company that works in dozens of research fields, and one of the main ones is AI development for medicine. We have already told you about the DeepMind algorithm that uses AI to explore dopamine's role in learning, and now a new one arrives: an AI algorithm that is 95% accurate at predicting, within 24 hours of a patient being admitted to the hospital, whether that patient is going to die.
The research was recently published in the “Nature” journal.
According to Google, the study used data from a total of 216,221 hospitalizations involving 114,003 unique patients. In-hospital deaths occurred in 2.3% of hospitalizations (4,930/216,221), unplanned 30-day readmissions in 12.9% (27,918/216,221), and long lengths of stay in 23.9%. Patients had a range of 1–228 discharge diagnoses.
Characteristics of hospitalizations in training and test sets. Source: Nature.com
According to Nigam Shah, 80 percent of the time spent on today's predictive models goes to the "scut work" of making the data presentable, but the new approach avoids this.
“You can throw in the kitchen sink and not have to worry about it”
The main feature of this approach is that the machine learns to parse the data on its own, in contrast to most other software, which is still largely coded by hand. Unfortunately, nothing is said about how the algorithm copes with handwritten text. The neural network analyzes PDF documents or even photos of patients' medical records. Based on the accumulated data, the system makes an assumption about the patient's condition and their health in the near future.
The AI's accuracy is quite impressive: it was 95% accurate at predicting patient mortality based on data from the University of California, San Francisco health system, and 93 percent accurate using data from the University of Chicago Medicine system, according to the research.
The areas under the receiver operating characteristic curves are shown for predictions of inpatient mortality made by deep learning and baseline models at 12-hour increments before and after hospital admission. For inpatient mortality, the deep learning model achieves higher discrimination at every prediction time compared to the baseline for both the University of California, San Francisco (UCSF) and University of Chicago Medicine (UCM) cohorts. Both models improve in the first 24 hours, but the deep learning model achieves a similar level of accuracy approximately 24 hours earlier for UCM and even 48 hours earlier for UCSF. The error bars represent the bootstrapped 95% confidence interval.
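For readers unfamiliar with the metric, those percentages are areas under the ROC curve, which can be computed from predicted risks and observed outcomes. A sketch with made-up numbers, using scikit-learn:

```python
from sklearn.metrics import roc_auc_score

# Made-up example: 1 = died in hospital, 0 = survived.
outcomes = [0, 0, 1, 0, 1, 0, 0, 1]
# The model's predicted probability of death for each patient.
predicted_risk = [0.05, 0.10, 0.80, 0.20, 0.65, 0.15, 0.30, 0.90]

# 1.0 here, since every death was scored higher than every survival;
# the paper reports about 0.95 on real data.
print(roc_auc_score(outcomes, predicted_risk))
```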
Google’s representatives said that now they plan to conduct clinical trials of the new platform in a number of clinics. If everything goes well, the AI from Google will become a commercial service.
Article is written with the information provided by: Nature.com, habrahabr.ru, foxnews.com, bloomberg.com
Join Skychain on social media: Twitter, Facebook, Telegram
Egor Chertov, Skychain team
| News: Google AI can predict patient death with a 95 percent accuracy | 90 | google-ai-can-predict-patient-death-with-a-95-percent-accuracy-111369bff54a | 2018-06-28 | 2018-06-28 12:08:38 | https://medium.com/s/story/google-ai-can-predict-patient-death-with-a-95-percent-accuracy-111369bff54a | false | 362 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Skychain Official Channel | Blockchain infrastructure aimed to host, train and use artificial intelligence (AI) in healthcare. Our website: https://skychain.global/ | 6247d4b327f8 | skychain.global | 161 | 0 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-08-26 | 2017-08-26 14:27:16 | 2017-08-26 | 2017-08-26 16:10:15 | 9 | false | en | 2017-08-26 | 2017-08-26 16:10:15 | 1 | 11140bc08c6 | 2.728302 | 6 | 2 | 0 | ARIMA stands for AutoRegressive Integrated Moving Average. | 5 | Time Series: ARIMA Model
ARIMA stands for AutoRegressive Integrated Moving Average.
AR (Autoregression): A model that uses the dependent relationship between an observation and some number of lagged observations. The parameter p sets how many lagged observations are taken in.
I (Integrated): A model that uses differencing of the raw observations (e.g. subtracting an observation from the one at the previous time step). Differencing is a transformation applied to time-series data in order to make it stationary, so that its properties do not depend on the time of observation, eliminating trend and seasonality and stabilizing the mean of the series (a one-line sketch of differencing follows this list).
MA (Moving Average): A model that uses the dependency between an observation and the residual errors from a moving-average model applied to lagged observations. The parameter q sets how many lagged residuals are taken in. Contrary to the AR model, the finite MA model is always stationary.
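As a quick sketch of the differencing step described above (the numbers are invented):

```python
import pandas as pd

series = pd.Series([112, 118, 132, 129, 121, 135, 148, 148])

# First-order differencing (d = 1): each value minus the previous one.
differenced = series.diff().dropna()
print(differenced.tolist())  # [6.0, 14.0, -3.0, -8.0, 14.0, 13.0, 0.0]
```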
Parameters of the ARIMA model
p (lag order): number of lag observations included in the model
d (degree of differencing): number of times that the raw observations are differenced
q (order of moving average): size of the moving average window
ARIMA models with Python
Dataset
Let's find out the correlation between observations and lag terms with an autocorrelation plot.
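Using pandas' built-in helper (a sketch that assumes the data has already been loaded into a Series called series):

```python
from pandas.plotting import autocorrelation_plot
import matplotlib.pyplot as plt

# 'series' is assumed to hold the raw observations as a pandas Series.
autocorrelation_plot(series)
plt.show()
```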
Positive correlation above 0.50 for the first 5 lags, so an AR parameter p (lag order) of 5 might be a good starting point.
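With p = 5 as the starting point (plus d = 1 and q = 0), fitting the model with statsmodels might look like this; note that older statsmodels versions exposed ARIMA under statsmodels.tsa.arima_model instead:

```python
from statsmodels.tsa.arima.model import ARIMA

model = ARIMA(series, order=(5, 1, 0))  # (p, d, q)
fitted = model.fit()
print(fitted.summary())

# Residual errors, used for the diagnostic plots below.
residuals = fitted.resid
```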
Residual Plot
The residuals do not seem to be stationary; there is an overall increase as time goes by, so prediction performance will depend on the time of observation.
Residual KDE Plot
The residuals seem to be Gaussian but slightly skewed to the left.
The residuals' mean is non-zero, suggesting that there is bias in the model's predictions.
Rolling Forecast
Predict the next outcome by building a model on all observations up to the present, then repeat as each new observation comes in.
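A sketch of that loop (again assuming series holds the data; the 12-observation test split is arbitrary):

```python
from statsmodels.tsa.arima.model import ARIMA

values = series.values
train, test = values[:-12], values[-12:]
history = list(train)
predictions = []

for t in range(len(test)):
    fitted = ARIMA(history, order=(5, 1, 0)).fit()
    yhat = fitted.forecast()[0]   # one-step-ahead forecast
    predictions.append(yhat)
    history.append(test[t])       # roll forward with the real observation
```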
Could use more work by further tuning the p, d, and q parameters.
Configuring an ARIMA model
Classical approach for fitting an ARIMA model is to follow the Box-Jenkins Methodology.
Model Identification: Use plots and summary statistics to identify trends, and seasonality to get an idea the amount of differencing (d: degree of differencing) and the size of the lag (p: lag order)
Model Estimation: Estimate the coefficients of the regression model, typically via maximum likelihood.
Model Diagnostics: Use plots and statistical tests of the residual errors to determine the amount and type of temporal structure not captured by the model
| Time Series: ARIMA Model | 21 | time-series-arima-model-11140bc08c6 | 2018-06-13 | 2018-06-13 05:51:04 | https://medium.com/s/story/time-series-arima-model-11140bc08c6 | false | 405 | null | null | null | null | null | null | null | null | null | Data Science | data-science | Data Science | 33,617 | Eugine Kang | null | 82b2bde2b1ce | kangeugine | 82 | 21 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | a599fd119b1b | 2018-01-26 | 2018-01-26 13:14:49 | 2018-01-26 | 2018-01-26 13:15:40 | 1 | false | en | 2018-01-26 | 2018-01-26 20:16:41 | 3 | 11152793c552 | 1.532075 | 2 | 0 | 0 | EXIST line of funding is dedicated to transfer research into innovative products. | 4 | Zana is awarded EXIST Start-up Grant
EXIST line of funding is dedicated to transfer research into innovative products.
Zana is an intelligent and interactive assistant with medical knowledge. The assistant gives personalized recommendations of products that empower people to get and stay healthy.
Karlsruhe, Germany: 21 January, 2018: Zana, an Artificial Intelligence-based health assistant, receives support from the German Federal Ministry for Economic Affairs and Energy through the EXIST Business Start-up Grant.
The founding team of Zana is awarded a prestigious EXIST start-up grant to help transfer the technologies and know-how investigated during several years of research work into an innovative product for the market. This line of funding is given to disruptive, technology- or knowledge-based projects with significant unique features and good commercial prospects of success. As part of this 1-year program, the Zana team is hosted by Karlsruhe Institute of Technology, which will assist with mentoring and network building. Another strong partner supporting the team is the Entrepreneurial Network in Karlsruhe.
Zana is an intelligent assistant that responds to health questions with trusted medical information. The assistant can understand through a natural language dialog in real-time (via text or voice) the information need of the user. Through intelligent recommendations Zana is able to show concise informative articles and products for personal health management.
The innovation of Zana lies in its artificial intelligence-based conversation management. Monetization comes through advertising and commission-based sales of health products within the Zana Marketplace. The young startup, based in Germany, relies on the strength of a team that combines highly qualified experts with PhDs in Informatics, medical doctors of different specialties, and business developers and mentors with long entrepreneurship experience.
Zana AI is available for conversation on Facebook Messenger, with plans for adaptation to other popular messaging platforms such as Skype and Amazon Alexa. Health searches attract more than 9 billion queries per month, and the trend is pushing toward conversation-based platforms such as Zana for accurate information on medical conditions, prevention and treatment options.
Original Article: Zana is awarded EXIST Start-up Grant
| Zana is awarded EXIST Start-up Grant | 2 | zana-is-awarded-exist-start-up-grant-11152793c552 | 2018-05-17 | 2018-05-17 10:47:59 | https://medium.com/s/story/zana-is-awarded-exist-start-up-grant-11152793c552 | false | 353 | Zana is an intelligent assistant that responds to health questions with trusted medical information | null | ZanaAlpha | null | Zana AI | zana-ai | AI,HEALTHCARE,MACHINE LEARNING,CHATBOTS,NATURAL LANGUAGE PROCESS | zana_assistant | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Armand Brahaj | Researcher, Software Developer, Business-Runner, Activist. | 72136e515418 | alpukn | 5 | 7 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-05-01 | 2018-05-01 16:54:45 | 2018-06-11 | 2018-06-11 04:32:34 | 1 | false | en | 2018-06-11 | 2018-06-11 04:35:16 | 4 | 111633a09452 | 2.343396 | 3 | 0 | 0 | In addition to the Rock and Roll Hall of Fame and the Baseball Hall of Fame we might also witness the launch of a Hall of Fame especially… | 5 | Why we need an Algorithm Hall of Fame
In addition to the Rock and Roll Hall of Fame and the Baseball Hall of Fame we might also witness the launch of a Hall of Fame especially for Algorithms.
Researchers from leading knowledge institutions like CERN, Princeton and the University of Amsterdam have started working on what some have called “the most important list of our time”.
The word algorithm can be traced back to the famous Persian mathematician Al-Khwarizmi, who was head of the library of the House of Wisdom in Baghdad. In the 12th century one of his books was translated into Latin, where his name was rendered as "Algorithmi". But this was not the beginning of algorithms. Already around 300 BC the ancient Greek mathematician Euclid came up with a step-by-step procedure for performing a calculation according to well-defined rules. This is one of the oldest algorithms in common use. It can be used to reduce fractions to their simplest form, and it is part of many other number-theoretic and cryptographic calculations.
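As an illustration (a sketch, not part of the original announcement), Euclid's procedure fits in a few lines of Python:

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

# Reduce a fraction to its simplest form, e.g. 42/56 -> 3/4.
numerator, denominator = 42, 56
g = gcd(numerator, denominator)
print(numerator // g, denominator // g)  # 3 4
```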
Throughout history, algorithms have been used for calculating traffic light timings, predicting stock prices, deciding which airplanes get permission to land and so on. Nowadays in our digital society, algorithms are in fact everywhere. From the voice assistant in your phone to the autopilot in your car, more and more aspects of our lives are powered by algorithms. And although algorithms rule the world, we seem to know very little about them. How do they work? And by whom were they created? How can we hold them accountable?
Interactive installations
The Algorithm Hall of Fame is a celebration of scientific progress that also fuels the debate around responsible data science. The organization wants to raise awareness among the general public. People talk about data all the time. But they forget that without algorithms their data would be just ones and zeros. The award-winning agency WeArePi from Amsterdam is working on the creative campaign that will launch the Hall of Fame.
“It’s more than just a list of important inventions”, says curator Jim Stolze. “We have invited artists and philosophers to bring these algorithms to life. They are creating a traveling circus of interactive installations to stimulate lively debate around the use of algorithms.”
Categories
The initiative was launched at the O’Reilly AI Conference in New York. The organization has started accepting nominations in three categories:
1. Fundamental algorithms: breakthrough inventions, mostly from the academic world;
2. Models: combinations of different algorithms and approaches, like YOLO real-time image detection or deep learning;
3. Applications: a list of algorithms that have had the biggest impact, like Google PageRank, Bitcoin hashing algorithms, etc.
Process
From May until September, people from academia, the business world and the general public are invited to add algorithms to the "shortlist". In October, an expert jury will decide which algorithms will actually be inducted into the Hall of Fame during an official ceremony.
The jury consists of Avi Wigderson (Professor of mathematics at the Institute for Advanced Study in Princeton), Jos Baeten (Director of the Netherlands research institute for mathematics and computer science), Steven Goldfarb (particle physicist working on the ATLAS Experiment at CERN) and Ben Lorica (founding Department Chair for Statistics and Mathematics at C.S.U. Monterey Bay).
Let us know which algorithms you would like to see in the hall of fame!
| Why we need an Algorithm Hall of Fame | 11 | why-we-need-an-algorithm-hall-of-fame-111633a09452 | 2018-06-11 | 2018-06-11 12:06:39 | https://medium.com/s/story/why-we-need-an-algorithm-hall-of-fame-111633a09452 | false | 568 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Jim Stolze | Tech Entrepreneur at Amsterdam Science Park. Board member A.I. for Good NL. Curator of the Algorithm Hall of Fame. | 7955f67744b8 | JimStolze | 1,925 | 1,283 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | e1387ea3b1b3 | 2017-09-28 | 2017-09-28 17:36:37 | 2017-09-28 | 2017-09-28 17:40:42 | 1 | false | en | 2017-09-28 | 2017-09-28 17:40:42 | 1 | 111747ecb96c | 1.335849 | 0 | 0 | 0 | Medical errors kill 251,000 Americans every year, making them the third leading cause of death in the U.S. Additionally, hospital-acquired… | 4 | A physician’s perspective: Making hospitals safer
Medical errors kill 251,000 Americans every year, making them the third leading cause of death in the U.S. Additionally, hospital-acquired conditions are a cause for very serious concern — at any given moment, 7% of hospitalized patients in developed countries, and 10% in developing nations, are acquiring at least one healthcare-associated infection. Declines in HACs (hospital-acquired conditions) can save tens of thousands of lives as well as tens of billions of dollars in health care costs.
Advanced analytics help minimize clinical errors, drastically reduce the risk of HACs, and save lives. Analytics help physicians gain a deeper understanding of patient health and risks – normally, healthcare providers only have the time and tools to evaluate a limited number of any given patient's risk factors, but data science changes the game. By aggregating individual patient data sets from the EHR (Electronic Health Record), ADT (Admission, Discharge, and Transfer) logs, and wearable sensors with pre-built data analytics models that mine millions of records, the patient's care provider can determine high-risk factors a priori and act on these insights to mitigate such risks.
Often, HACs are the result of simple clinical errors. Health care providers can now leverage data to prevent basic mistakes that would otherwise have deadly consequences for patients. KenSci’s Clinical Analytics solution improves health outcomes and reduces the total cost impact of HACs. The solution integrates easily with existing EMR systems, to identify patterns and predict patients who are at high risk of HACs, enabling care managers to engage such patients and modify their risks to prevent infection, readmission, and mortality. Hear more from Dr. Tom Louwers, Associate Medical Director at KenSci, about preventing and reducing hospital-acquired conditions in the following blog post:
https://enterprise.microsoft.com/en-us/articles/industries/health/physician-perspective-reducing-hacs-with-data-science/?wt.mc_id=AID609949_QSG_SCL_170456
| A physician’s perspective: Making hospitals safer | 0 | a-physicians-perspective-making-hospitals-safer-111747ecb96c | 2018-02-18 | 2018-02-18 09:47:35 | https://medium.com/s/story/a-physicians-perspective-making-hospitals-safer-111747ecb96c | false | 301 | KenSci Blog | KenSci is the world’s first vertically integrated machine learning platform for healthcare, making it more proactive, coordinated and accountable, fast! To know more, visit www.kensci.com | null | null | null | KenSci | kensci | MACHINE LEARNING,PREDICTIVE ANALYTICS,HEALTHCARE,DATA SCIENCE | kensci | Healthcare | healthcare | Healthcare | 59,511 | KenSci | We’re fighting Death with Data Science | 9db9e86483f3 | kensci | 18 | 2 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-08-29 | 2018-08-29 23:24:40 | 2018-08-29 | 2018-08-29 23:27:55 | 2 | true | en | 2018-08-29 | 2018-08-29 23:27:55 | 0 | 11182c696a3c | 3.896541 | 17 | 1 | 0 | As robotics is becoming more and more advanced, we are starting to see more robots that look and move just like humans. But there is still… | 5 | Giving Robots Human Muscles
As robotics becomes more and more advanced, we are starting to see more robots that look and move just like humans. But there is still some room for improvement, like adding human muscles to robots, giving them faces, and maybe building something that can work like the human brain.
It's a complicated thing, building muscles, a face and a brain like a human's.
We are still working on getting out of that Uncanny Valley. And one way we might close that gap is to replace metallic robotic parts with human ones. While this might sound like Bicentennial Man, human-robot hybrid or bio-hybrid robotics isn't a sci-fi dream; it's a real field that already exists, and it has been around for quite a long time. And it's not just a way to make more lifelike robots. By mimicking our bodies better, biohybrid robots might help scientists learn more about how we move, why we are built the way we are, and how to fix all these moving parts when something goes wrong. It will also help scientists learn how the human body might evolve to become better than it is right now.
It is basically creating a human body to learn about human body.
Just like a human body, a robot generally needs a skeleton and ways to move that skeleton. These include motors and actuators, which can deliver rotational or linear forces to the joints of the skeleton.
In bio-hybrid robots, that movement comes from live muscle tissue. Of course, scientists aren't just taking entire muscles from humans and implanting them on a metal robot. At least if they are, they are not telling us.
You don't know what they are doing in that weird computer science lab. But the legitimate scientists grow their own muscles. This happens in a lab by culturing myoblasts, embryonic cells with the unique ability to differentiate into different muscle cells in a process called myogenesis.
To make muscles grow how they want, the scientists create a scaffold in the form of a hydrogel, a special water-based gel that's great at absorbing and retaining cells. Inside the hydrogel, the cells form into muscle fibers, long strands of muscle cells that are all aligned in the same direction. By altering the shape of the hydrogel, the alignment of the muscle fibers can be tweaked and adjusted, giving scientists control over the direction they pull in. And once these fibers are formed, a tiny electric shock is all it takes to make them contract. They are then ready to be connected to the joints of a robotic skeleton, and done!
Scientists at the University of Tokyo only just managed to get small muscle pairs working in early 2018. That's because there are a number of limitations that need to be overcome before biohybrid robots really take off. These lab-grown muscles don't have any way to repair themselves, so they only last a few days to a week.
In your body, muscles receive spare parts via your blood. But biohybrid robots don't have that fluid exchange system, so once the tissue wears down, that's it. And this breakdown is accelerated by the friction generated when the muscles move. That's why your muscles are surrounded by epimysium and fascia, connective tissues that separate individual muscles and help them glide smoothly past each other.
So to last longer, biohybrid robots need some kind of biocompatible lubricant to reduce friction, like bio-WD40.
Also, the electrical stimulation part could use some work. While it does get the job done, it's difficult to control precisely how strongly the muscle contracts, especially for sustained contractions. So fine motor movements aren't really possible yet. The electricity also contributes to wear and tear.
The muscles have to stay wet, so using electricity inevitably causes some of that water to separate into hydrogen and oxygen gas, a process called electrolysis. These gas bubbles, in turn, further damage the muscles. One possible way to get around that is to grow motor neurons in the muscle tissue and let them command the muscle instead.
Which seems a little too close to a Westworld host for my comfort. But there are some good reasons to continue perfecting these biohybrid robots, even if they seem pretty creepy.
One big advantage of using real muscles is that they are flexible. And the idea of using soft, flexible moving parts is the driving force behind the field of Soft Robotics. These robots use things like cables and inflatable bladders to move instead of metallic motors. And their flexibility allows them to adapt better to new tasks.
Biohybrid robots could lead to better soft robots, including ones that would be safe to use on or even in our bodies, since they won't have as many sharp bits or cell-harming chemicals in them. But what's really exciting to scientists is that biohybrid robots can move like us. That means they can help us understand why we move the way we do, how our brains control our bodies, and how to fix things if they go wrong.
The human body is an incredibly complex machine. Hundreds of muscles are responsible for moving the joints in our limbs, allowing us to work and play and do the things we do every day.
Several muscles can be responsible for a single movement, and a single muscle can contribute to several different movements. And that means if someone develops a motor impairment, it can be difficult to understand exactly what’s going on.
| Giving Robots Human Muscles | 165 | giving-robots-human-muscles-11182c696a3c | 2018-08-30 | 2018-08-30 09:07:41 | https://medium.com/s/story/giving-robots-human-muscles-11182c696a3c | false | 931 | null | null | null | null | null | null | null | null | null | Robotics | robotics | Robotics | 9,103 | Downn | i write | d967bad13fe8 | downn | 135 | 0 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-07-27 | 2018-07-27 13:31:33 | 2018-07-27 | 2018-07-27 13:55:06 | 3 | true | en | 2018-07-27 | 2018-07-27 14:15:16 | 3 | 111aa524a829 | 2.738679 | 0 | 0 | 0 | Hi everyone and welcome to Part 2 of this blog series dedicated to providing bite-size definitions about Artificial Intelligence and when… | 5 | Just enough Artificial Intelligence to impress — Part 2
Photo by Pietro Jeng on Unsplash
Hi everyone and welcome to Part 2 of this blog series dedicated to providing bite-size definitions about Artificial Intelligence and when to use these terms. Whether you are in charge of a project related to AI and need an overview, or are simply curious but don’t have time to dive deep into the mathematical details, you’ve come to the right article!
Moving on from what covered in the previous post, where we talked about the terms Artificial Intelligence, Machine Learning and Deep Learning, here are some new terms to expand your data science vocabulary:
Model: Sadly, life is not all rainbows in data science because when we say model, Victoria's Secret is the last thing we think of. So here's what we actually mean when we say "model": the unit that does the learning. If you write code that uses a Machine Learning algorithm, that is a model.
When to use: You can use it to refer to the “artificial brain” that was coded, e.g. “Has your model already been trained?”
Data: An unprocessed piece of information, which can be used to provide insights and draw conclusions.
When to use: When referring to whatever is being input to your model for it to learn.
Training: This definitely does not mean going to the gym in datascience-land! Here’s how a model learns: The model is initially like an empty brain (not literally, more like just a piece of code outputting rubbish values). It is presented with data. The model adjusts itself according to this data so as to give better outputs. This stage of adjusting itself is known as training.
When to use: If a data scientist seems to be chilling instead of working, it’s probably because his model is currently being trained (Training takes time and computer resources. Luckily, there are ways to train a model without blocking up your whole PC, which we shall learn about in future articles).
Supervised Learning: A form of learning in Machine Learning where the training includes providing data containing both the expected input into and the expected output from the model so that it can learn. An example would be to provide the season, the weather as well as the temperature when making a model that predicts temperature from data about the season and the weather.
Unsupervised Learning: The opposite of supervised learning in the sense that the expected output is not given to the model during training. While typical uses of supervised learning include prediction and classification, unsupervised learning is more suited to finding patterns in data and simplifying data visualisation.
When to use: Supervised and unsupervised learning are both techniques used in Machine Learning, so the right term depends on the way the model is being fed data.
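As a small illustration of the difference, here is a sketch using scikit-learn; the season/weather encodings and temperatures are invented for the example. The supervised model is given both inputs and expected outputs, while the unsupervised one only gets the inputs:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.cluster import KMeans

# --- Supervised: inputs AND expected outputs are provided ---
# Columns: season (0=winter..3=autumn), weather (0=sunny, 1=cloudy, 2=rainy)
X = np.array([[0, 2], [1, 0], [2, 0], [3, 1], [2, 1], [0, 1]])
temps = np.array([2.0, 15.0, 28.0, 12.0, 24.0, 4.0])  # expected outputs

model = DecisionTreeRegressor().fit(X, temps)
print(model.predict([[2, 0]]))  # predict temperature for a sunny summer day

# --- Unsupervised: no expected outputs, just find structure ---
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print(clusters)  # groups similar (season, weather) records together
```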
Here are some more details highlighting the relationship between supervised, unsupervised and machine learning. Don’t panic if there are terms you don’t understand, we shall cover them in future posts!
So these shall be the terms for today, and I’ll be looking forward to seeing you again in my next post, where I shall cover Neural Networks and Data Pre-processing among others!
Have a preference or suggestion for the keywords you want me to explain in the next post? Please write about it in the comments section and I shall address it. Also, if you enjoy these articles, please give them a clap so that it’s easier for others to find them. I’ll be back for another post of this series with some new keywords to slowly turn you into an AI Rockstar, and in the meantime, keep being curious, it’s awesome!
| Just enough Artificial Intelligence to impress — Part 2 | 0 | just-enough-artificial-intelligence-to-impress-part-2-111aa524a829 | 2018-07-27 | 2018-07-27 15:05:47 | https://medium.com/s/story/just-enough-artificial-intelligence-to-impress-part-2-111aa524a829 | false | 580 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Hans A. Gunnoo | Electronic Engineering with Artificial Intelligence, Data Science enthusiast, blogger, adventurer. LinkedIn: https://uk.linkedin.com/in/hans-a-gunnoo-979183147 | db2da1a671a8 | gamer104h | 168 | 16 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-02-02 | 2018-02-02 15:38:30 | 2018-02-08 | 2018-02-08 19:36:04 | 1 | false | pt | 2018-02-08 | 2018-02-08 19:36:04 | 0 | 111d27f7c5de | 4.135849 | 6 | 0 | 0 | The market has evolved considerably over the last year when it comes to chatbots. Companies have already started implementing this novelty in their operations… | 3 | What NOT to do with your chatbot
The chatbot market has evolved considerably over the last year. Companies have started implementing this novelty in their operations, professionals are working to develop their skills for the field, and best practices for succeeding with this relationship tool are gradually emerging.
Since plenty of material has already been shared with tips on how to build good chatbots, I decided to compile my experience in the field to clarify some of what should not be done, and which could undermine the good work you are already building. So here are a few topics I hope will be useful:
Don't try to pass yourself off as human
All this news and talk about artificial intelligence, NLP and other topics makes us think that bots are ever closer to interacting like humans and that this could soon become a reality, but it is worth stressing the words "soon" and "could". When implementing a chatbot in your operation, there is always that temptation to pass the bot off as a human; after all, whoever is on the other side won't even notice, right? I would say it is not quite like that. Picture these four scenarios with me:
The chatbot introduces itself to your customer as a robot, but fails to help them. That is understandable, since nobody expects a robot to solve their problems.
The chatbot introduces itself to your customer as a human, but fails to help them. Upon discovering that it is a robot, the customer may feel badly deceived by the company.
The chatbot introduces itself to your customer as a robot and succeeds in helping them. The customer tends to be surprised at managing to resolve the situation without even needing to talk to a human.
The chatbot introduces itself to your customer as a human and succeeds in helping them. Nice, but a human is expected to help with your problem.
Playing this game of the robot posing as a human can be a very high risk for your company's image when that customer ends up frustrated, so be honest and take the opportunity to create a bot with an interesting and engaging personality.
The service experience in other channels can be a good reference, but only a reference!
Many chatbots are already fully operational in the market, giving us plenty of practical references, but there are still markets in which you will be a pioneer, or your company may have a culture distinct from some of the bots already out there. Starting from scratch to create the bot's profile, content and even behaviour can be daunting, and it is quite normal to look for references in other customer service departments your company already has.
The risk begins when you hear phrases like "I want the chatbot to serve customers just like our phone-based customer service", "Let's build the bot with the same functions as the website" or "Our app has a very good experience, let's replicate it in the chatbot". Owing to limitations, possibilities and even usage culture, each platform will provide people with a different service experience. There are already best practices for serving your customers on social media, SEO strategies for better navigation on your website, and even well-crafted scripts for delivering solutions over the phone, so there should likewise be strategies focused on chatbot interaction.
Certainly nothing will be wasted: all the relationship experience from other channels can and should be drawn upon when configuring or optimizing your robot, but keep in mind the goal of creating a unique experience for your customer on that platform.
Technology is essential, but don't forget the relationship
Which comes first, the chicken or the egg? The technology or the need for it? Sometimes I have the impression that technology evolves much faster than the way we relate to one another. It is very easy to get carried away by novelties, even without knowing how they might be useful. Whether it is an in-house development at your company or the hiring of a chatbot platform, it is important that the result meets your needs.
Before defining which language will be used or which company is the best one to hire, plan what the relationship experience with your customer will be: your audience, whom you want to interact with. Then you will have the clarity to understand whether solution X, Y or Z will deliver what you need.
If you have already put your chatbot into operation, it is never too late to create richer and more productive service experiences with this more strategic, less technical outlook. Borrowing a classic but fitting reference from the stories of Alice:
"For those who don't know where they want to go, any road (technology) will do."
The chatbot is a reflection of your company, not a cosmetic fix
We are talking about the tool of the moment when the subject is customer service. Implementing a chatbot in your operation can certainly bring many benefits to your customer, such as cutting the service queue, extending available hours, and delivering personalized information via integration in a more convenient way, among many other improvements, but no robot works miracles!
If your customers already complain about some problem related to your brand, product or customer service, they will continue to raise it inside the chatbot as well. This tool can be seen as an intelligent relationship channel, but it will be a mirror of your brand. The novelty may give your service team the breathing room to pay better attention to the cases that come up, but if the process is not redesigned, the chances are high that the problems will continue.
After reading all this, you must be asking yourself: "is this chatbot thing really for me?"
I am sure it is! It can and will be a great benefit for your service or product, but do not set aside the experience your brand provides, or has the opportunity to provide.
Putting people at the centre of your business strategy is a premise that is being applied across the market, and it should be no different when it comes to this new relationship channel.
Take the opportunity to comment on other harmful practices you have seen as well, and let's build good practices for the chatbot market together.
| What NOT to do with your chatbot | 13 | o-que-não-fazer-com-o-seu-chatbot-111d27f7c5de | 2018-02-12 | 2018-02-12 20:24:04 | https://medium.com/s/story/o-que-não-fazer-com-o-seu-chatbot-111d27f7c5de | false | 1,043 | null | null | null | null | null | null | null | null | null | Chatbots | chatbots | Chatbots | 15,820 | Renahn Ruas | Bot Specialist + UX Enthusiast | 63b989e05985 | ruasre | 85 | 73 | 20,181,104 | null | null | null | null | null | null
0 | null | 0 | 70f217fc23a8 | 2017-09-22 | 2017-09-22 20:43:07 | 2017-09-22 | 2017-09-22 21:31:05 | 3 | true | en | 2017-09-22 | 2017-09-22 21:31:05 | 8 | 11200f64c0b6 | 6.093396 | 92 | 2 | 0 | A startup called iSee thinks a new approach to AI will make self-driving cars better at dealing with unexpected situations. | 4 | Finally, a Driverless Car with Some Common Sense
A startup called iSee thinks a new approach to AI will make self-driving cars better at dealing with unexpected situations.
Data from LIDAR, radar, cameras and GPS units are seen inside a car. (David McNew/Getty Images)
By Will Knight
Boston’s notoriously unfriendly drivers and chaotic roads may be the perfect testing ground for a fundamentally different kind of self-driving car.
An MIT spin-off called iSee is developing and testing the autonomous driving system using a novel approach to artificial intelligence. Instead of relying on simple rules or machine-learning algorithms to train cars to drive, the startup is taking inspiration from cognitive science to give machines a kind of common sense and the ability to quickly deal with new situations. It is developing algorithms that try to match the way humans understand and learn about the physical world, including interacting with other people. The approach could lead to self-driving vehicles that are much better equipped to deal with unfamiliar scenes and complex interactions on the road.
“The human mind is super-sensitive to physics and social cues,” says Yibiao Zhao, cofounder of iSee. “Current AI is relatively limited in those domains, and we think that is actually the missing piece in driving.”
Zhao’s company doesn’t look like a world beater just yet. A small team of engineers works out of a modest lab space at the Engine, a new investment company created by MIT to fund innovative local tech companies. Located just a short walk from the MIT campus, the Engine overlooks a street on which drivers jostle for parking spots and edge aggressively into traffic.
The desks inside iSee’s space are covered with lidar sensors and pieces of hardware the team has put together to take control of its first prototype, a Lexus sedan that originally belonged to one of the company’s cofounders. Several engineers sit behind large computer monitors staring intently at lines of code.
iSee might seem laughably small compared to the driverless-car efforts at companies like Waymo, Uber, or Ford, but the technology it’s developing could have a big impact on many areas where AI is applied today. By enabling machines to learn from less data, and to build some form of common sense, their technology could make industrial robots smarter, especially about new situations. Spectacular progress has already been made in AI recently thanks to deep learning, a technique that employs vast data-hungry neural networks (see “10 Breakthrough Technologies 2013: Deep Learning”).
When fed large amounts of data, very large or deep neural networks can recognize subtle patterns. Give a deep neural network lots of pictures of dogs, for instance, and it will figure out how to spot a dog in just about any image. But there are limits to what deep learning can do, and some radical new ideas may well be needed to bring about the next leap forward. For example, a dog-spotting deep-learning system doesn’t understand that dogs typically have four legs, fur, and a wet nose. And it cannot recognize other types of animals, or a drawing of a dog, without further training.
Driving involves considerably more than just pattern recognition. Human drivers rely constantly on a commonsense understanding of the world. They know that buses take longer to stop, for example, and can suddenly produce lots of pedestrians. It would be impossible to program a self-driving car with every possible scenario it might encounter. But people are able to use their commonsense understanding of the world, built up through lifelong experience, to act sensibly in all sorts of new situations.
“Deep learning is great, and you can learn a lot from previous experience, but you can’t have a data set that includes the whole world,” Zhao says. “Current AI, which is mostly data-driven, has difficulties understanding common sense; that’s the key thing that’s missing.” Zhao illustrates the point by opening his laptop to show several real-world road situations on YouTube, including complex traffic-merging situations and some hairy-looking accidents.
A lack of commonsense knowledge has certainly caused some problems for autonomous driving systems. An accident involving a Tesla driving in semi-autonomous mode in Florida last year, for instance, occurred when the car’s sensors were temporarily confused as a truck crossed the highway (see “Fatal Tesla Crash Is a Reminder Autonomous Cars Will Sometimes Screw Up”). A human driver would have likely quickly and safely figured out what was going on.
Zhao and Debbie Yu, one of his cofounders, show a clip of an accident involving a Tesla in China, in which the car drove straight into a street-cleaning truck. “The system is trained on Israel or Europe, and they don’t have this kind of truck,” Zhao says. “It’s only based on detection; it doesn’t really understand what’s going on,” he says.
iSee is built on efforts to understand how humans make sense of the world, and to design machines that mimic this. Zhao and other founders of iSee come from the lab of Josh Tenenbaum, a professor in the department of brain and cognitive science at MIT who now serves as an advisor to the company.
Tenenbaum specializes in exploring how human intelligence works, and using that insight to engineer novel types of AI systems. This includes work on the intuitive sense of physics exhibited even by young children, for instance. Children’s ability to understand how the physical world behaves enables them to predict how unfamiliar situations may unfold. And, Tenenbaum explains, this understanding of the physical world is intimately connected with an intuitive understanding of psychology and the ability to infer what a person is trying to achieve, such as reaching for a cup, by watching his or her actions.
The ability to transfer learning between situations is also a hallmark of human intelligence, and even the smartest machine-learning systems are still very limited by comparison. Tenenbaum’s lab combines conventional machine learning with novel “probabilistic programming” approaches. This makes it possible for machines to learn to infer things about the physics of the world as well as the intentions of others despite uncertainty.
Trying to reverse-engineer the ways in which even a young baby is smarter than the cleverest existing AI system could eventually lead to many smarter AI systems, Tenenbaum says. In 2015, together with researchers from New York University and Carnegie Mellon University, Tenenbaum used some of these ideas to develop a landmark computer program capable of learning to recognize handwriting from just a few examples (see “This AI Algorithm Learns Simple Tasks As Fast As We Do”).
A related approach might eventually give a self-driving car something approaching a rudimentary form of common sense in unfamiliar scenarios. Such a car may be able to determine that a driver who’s edging out into the road probably wants to merge into traffic.
When it comes to autonomous driving, in fact, Tenenbaum says the ability to infer what another driver is trying to achieve could be especially important. Another of iSee’s cofounders, Chris Baker, developed computational models of human psychology while at MIT. “Taking engineering-style models of how humans understand other humans, and being able to put those into autonomous driving, could really provide a missing piece of the puzzle,” Tenenbaum says.
Tenenbaum says he was not initially interested in applying ideas from cognitive psychology to autonomous driving, but the founders of iSee convinced him that the impact would be significant, and that they were up to the engineering challenges.
“This is a very different approach, and I completely applaud it,” says Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, a research institute created by Microsoft cofounder Paul Allen to explore new ideas in AI, including ones inspired by cognitive psychology.
Etzioni says the field of AI needs to explore ideas beyond deep learning. He says the main issue for iSee will be demonstrating that the techniques employed can perform well in critical situations. “Probabilistic programming is pretty new,” he notes, “so there are questions about the performance and robustness.”
Those involved with iSee would seem to agree. Besides aiming to shake up the car industry and perhaps reshape transportation in the process, Tenenbaum says, iSee has a chance to explore how a new AI approach works in a particularly unforgiving practical situation.
“In some sense, self-driving cars are going to be the first autonomous robots that interact with people in the real world,” he says. “The real challenge is, how do you take these models and make them work robustly?”
Will Knight is the senior editor for AI at MIT Technology Review. I mainly cover machine intelligence, robots, and automation, but I’m interested in most aspects of computing.
© 2017 MIT Technology Review
| Finally, a Driverless Car with Some Common Sense | 440 | finally-a-driverless-car-with-some-common-sense-11200f64c0b6 | 2018-08-25 | 2018-08-25 01:42:05 | https://medium.com/s/story/finally-a-driverless-car-with-some-common-sense-11200f64c0b6 | false | 1,469 | MIT Technology Review | null | technologyreview | null | MIT Technology Review | null | mit-technology-review | TECHNOLOGY,TECH,ARTIFICIAL INTELLIGENCE | techreview | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | MIT Technology Review | Reporting on important technologies and innovators since 1899 | defe73a9b0ba | MITTechReview | 23,166 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 71473fcc6755 | 2018-01-24 | 2018-01-24 17:30:07 | 2018-01-25 | 2018-01-25 11:45:03 | 14 | false | en | 2018-01-25 | 2018-01-25 12:00:45 | 7 | 112026fff66b | 6.959434 | 14 | 2 | 0 | What football analytics (or at least a reasonable portion of it) should really aim for is to balance out both objectives: it should… | 2 | I wrote this paper to be submitted for the 2018 MIT Sloan Sport Analytics Conference paper competition, featuring some interesting research that’s being done inside the Football Whispers Data Science team. Sadly, the paper didn’t make the final cut for the conference, but I figured I’d share it here for everyone to see as the methodology is pretty interesting and the applications have the great attribute of being both profound and accessible for everyone. Hopefully we’ll see some of these applications featuring in the Football Whispers site and editorial content soon.
If you can’t be bothered to read the whole paper (which can get unnecessarily technical to be fair), here’s a re-write of its main content written with a less ‘academic’ narrative.
We all know that football is a complex subject to study under the lens of data analysis and statistics. Football is a dynamic and fluid invasion sport, with highly interdependent events occurring simultaneously and continuously. As a consequence, a lot of applied research into football data falls into the trap of being non-descriptive; essentially a “black box” that seems to declare rigid answers that rarely trickle down in descriptive, applicable or accessible ways to short-term stakeholders like clubs and coaches interested in winning the next game or planning their tactical system. Perhaps this flaw goes a long way in explaining why football analytics meets such resistance from coaches, scouts and others when in reality, both lines of work should embrace each other as treasured allies working together towards the same goal.
What football analytics (or at least a reasonable portion of it) should really aim for is to balance out both objectives: it should leverage the insight to be extracted from huge amounts of complex data that the brains of coaches and scouts cannot process, but should also make sure its results are descriptive and accessible for the more organic domains of knowledge that coaches have about the game. If analytics knowledge and coaching knowledge are incompatible, it is ultimately football that is missing out.
The methodology that we used for this paper is inspired from an area of research that faces very similar circumstances: Natural Language Processing (NLP). More specifically, Topic Extraction, which is concerned with automatically sorting text documents into the different semantic topics that constitute it. In the information age, automatic methods for classifying large amounts of documents semantically are incredibly practical. Burgeoning fields such as digital marketing and sentiment analysis of social media content rely heavily on the scalability of text mining: most humans can classify tweets about a brand into different ‘sentiment categories’; but, just as in the case of our treasured coaches, the manpower needed to do this across the vast quantities of data available is completely unfeasible. Likewise, if the results from Topic Extraction don’t line up with natural categories and sentiments that human readers would sort documents into, it makes them extremely hard to leverage for say a marketing agency.
Our research takes a page out of the Topic Extraction handbook and repurposes one of its foremost models called Latent Dirichlet Allocation (LDA) to be applied to football data. I won’t get into the exact details of our conceptualisation of styles in football within the framework of topic extraction, but the basic idea is that just as different words in documents determine the document’s topic, so too should different features in a football match determine the style of the teams/players involved. When an LDA model is fit to a set of documents, it will produce a set of ‘topics’ characterised by key words. The image shows the key words associated with each topic of an LDA model fit to a large set of news articles.
By looking at the key words we can naturally imagine what subject each ‘topic’ refers to. In the example, topic 1 corresponds to technology articles, topic 2 probably identifies sales ads while topic 3 includes news articles of Eastern Europe, etc. LDA’s major plaudits stem from the fact that it is unsupervised: the algorithm picks up on these reasonable and natural semantic topics that differentiate the different news articles with zero human intervention.
After a model is fit, it can be used to process new documents and will classify them as a mixture of the bespoke topics, which basically means it will say things like “this document is 30% technology, 40% sale ad, etc”. It does this by analysing the frequency of different key words appearing in the document. The thinking process behind our methodology should have become clear by now: an LDA model will analyse the frequency of features of a team in a match, such as ‘long ball into opposition half’, ‘touches’, ‘interception’, etc; and classify it into a mixture of the different styles it learned.
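To sketch the general idea in code, scikit-learn’s off-the-shelf LatentDirichletAllocation can be fit on per-match feature counts and will then express any new match as a mixture of the learned styles. The feature columns and counts below are invented for illustration; this is not the actual Football Whispers pipeline.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# Rows = team-matches, columns = counts of event features, e.g.
# [long balls into opposition half, touches, interceptions, crosses]
# (toy numbers for illustration only)
match_features = np.array([
    [40, 300, 25, 18],
    [ 8, 700,  9,  5],
    [35, 320, 30, 20],
    [10, 650, 12,  6],
])

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(match_features)

# A new match is classified as a mixture of the learned styles,
# e.g. array([[0.9, 0.1]]) ~ "90% style A, 10% style B".
new_match = np.array([[38, 310, 28, 17]])
print(lda.transform(new_match))
```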
Our model was trained on data from Europe’s big 5 leagues for the 2016–17 and 2017–18 seasons. These are roughly the topics it learned:
With a trained model, we can think of a wide range of applications by using the model to classify matches from teams in different contexts. ‘Radar charts’ are a convenient way to superimpose the percentages corresponding to different styles.
REMARK: Radar charts are a common feature in football data analytics, but in a very different use case which might make our visualisations ‘miss the point’ if the main difference isn’t explained: in traditional uses of radar charts, a team/player’s chart can be indefinitely large in all axes of the chart. If an axis is ‘xG’, there’s no real limit on how large this value can be for a team. In our use of radar charts, the ‘percentage’ on each category isn’t related to total volumes but rather relative frequencies and as such they must add up to 100% between all of them, and so a team’s radar simply cannot be very large in all the axes. It is important that the reader keeps all this in mind while interpreting the radar charts below.
A snapshot impression of a team’s playing style can be obtained by plotting the average percentage in which their matches fall into each category.
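For readers who want to reproduce this kind of chart, here is one minimal way to do it with matplotlib; the style names and percentages are placeholders, not real model output.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical average style mixture for one team (sums to 100%).
styles = ["Possession", "Direct", "Pressing", "Counter", "Crossing"]
percentages = [35, 10, 25, 20, 10]

# Close the polygon by repeating the first point at the end.
angles = np.linspace(0, 2 * np.pi, len(styles), endpoint=False).tolist()
values = percentages + percentages[:1]
angles += angles[:1]

ax = plt.subplot(polar=True)
ax.plot(angles, values)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(styles)
plt.show()
```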
Another interesting use is comparing the style of match a team plays with or without a player, which can have very real and straightforward impact on team selection and squad management. Paul Pogba, Manchester United’s record signing, is a prime example of a player whose absence is heavily felt in terms of the style.
In direct selection dilemmas, like Danny Rose versus Ben Davies at left-back for Spurs, this methodology can provide context for Pochettino to make his choice:
Notice how the profiles are practically mirrored from each other: when Rose is playing Davies is not and vice versa.
Similarly, we can compare the styles of teams before and after a managerial change to see how the manager has impacted the type of football they play. Allardyce taking over at Everton provides a prime example of a team whose style clearly changed with a managerial shake up.
Marcelino taking over Valencia provides another compelling example considering Valencia’s remarkable upturn in performance this season:
LDA Models for Players:
This exact same methodology can be applied to the frequency of features a player performs as opposed to the whole team. An LDA model fitted to data from midfielders across Europes big 5 leagues found the following set of topics:
This result gives us a chance to further the point of ‘relative proportions’ rather than ‘absolute volumes’. It would be hard to argue that Mesut Ozil isn’t a proficient passer, or that somehow Fellaini is a more proficient passer than he is! When interpreting these radars the reader has to remember what this is really asking: which style categories do the player’s features seem to stem from most, in proportional terms? Ozil can have high volumes of the features related to ‘proficient passing’, and most probably many more than Fellaini. However, proportionately, his features are more aligned with ‘Chance Creation’ than with ‘Proficient Passing’.
A similar model for defenders is presented below.
As an example, this methodology nicely highlights the impact that playing under Guardiola’s all-conquering Manchester City side this season has had on ex-Spurs full-back Kyle Walker.
Finally, this methodology also provides a very clean and clear framework for ‘similar player’ suggestions. When combined with performance ratings for players that we have also developed for players here at Football Whispers, we can imagine a simple and elegant ‘similarity’ concept along these two key axes: playing style and performance level.
This particular application faces stern competition from many previous methods, like clustering based methods or even what I presented at the Opta Pro Forum in 2017. However, this methodology has the upper hand in the descriptiveness of its suggestion: everyone who uses it can immediately digest exactly what it is saying, as opposed to other ‘similarity’ classifiers which can at times seem “black box-y”.
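To show how transparent the suggestion logic can be, here is a toy sketch of the two axes. The topic mixtures and ratings below are invented, and cosine similarity is just one reasonable choice of style distance, not necessarily the one used in production.

```python
import numpy as np

# Hypothetical (style mixture, performance rating) pairs per player.
players = {
    "Player A": (np.array([0.50, 0.30, 0.20]), 92),
    "Player B": (np.array([0.48, 0.32, 0.20]), 74),
    "Player C": (np.array([0.10, 0.20, 0.70]), 91),
}
target_style, target_rating = np.array([0.5, 0.3, 0.2]), 93

for name, (style, rating) in players.items():
    # Style similarity: cosine similarity between topic mixtures.
    style_sim = style @ target_style / (
        np.linalg.norm(style) * np.linalg.norm(target_style))
    rating_gap = abs(rating - target_rating)
    print(f"{name}: style similarity {style_sim:.3f}, rating gap {rating_gap}")
```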
LIONEL MESSI SIMILARITY RECOMMENDER
The appeal of this approach is how digestible it is. Lewandowski performs to similar standards than Messi but is a different style of player. Dybala and Alexis are similar types of players but don’t perform to the same standards. Neymar and Insigne are both similar in style and in quality.
JORGINHO SIMILARITY RECOMMENDER
This last application is kind of too simplistic to make any grandiose claims around it, but it has passed every single ‘eye-test’ I have submitted it to so far. Might be worth investigating it more thoroughly as this content evolves on the site. For now though, I’ll leave you with a link to our little taster video for this type of content.
If you like this sort of content then you’ll like what is to come from us over the next year. Make sure to follow the team on twitter: Martin, Bobby and David.
| We all know that football is a complex subject to study under the lens of data analysis and… | 48 | we-all-know-that-football-is-a-complex-subject-to-study-under-the-lens-of-data-analysis-and-112026fff66b | 2018-05-07 | 2018-05-07 15:28:33 | https://medium.com/s/story/we-all-know-that-football-is-a-complex-subject-to-study-under-the-lens-of-data-analysis-and-112026fff66b | false | 1,460 | Blogs from Football Whispers’ engineering and data-science teams. | null | FootballWhispers | null | Football Whispers Engineering and Data Sci | football-whispers-engineering-and-data-sci | null | FB_Whispers | Machine Learning | machine-learning | Machine Learning | 51,320 | David Perdomo Meza | null | 3c34f02e2527 | david_7697 | 15 | 1 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-03-13 | 2018-03-13 16:20:34 | 2018-04-02 | 2018-04-02 02:54:49 | 7 | false | en | 2018-04-02 | 2018-04-02 02:54:49 | 2 | 112125f95dc2 | 5.865094 | 5 | 0 | 0 | Google Analytics is a freemium web analytics service offered by Google that tracks and reports website traffic — Wikipedia | 4 | Digital Marketing Data Analysis Using Google Analytics API and Python
Google Analytics is a freemium web analytics service offered by Google that tracks and reports website traffic — Wikipedia
Who is visiting your site? What are they looking for, and how are they getting to your site? If you want to know statistics about the visitors to your website in order to expand and grow your business on the internet, Google Analytics should definitely be your first choice.
The purpose of this project is to help a startup data science educational institution analyze its website audience data and discover useful insights for management. With the various devices in use today, even just by one person, it is really important to look closely at how desktop, mobile and tablet users interact with the website every day, or even every week, so that the owner can find out whether there are any significant concerns that need to be addressed.
In this case, the first audience category I would like to focus on is technology. The three questions I am most interested in are listed as follows, and all of them are based on the analysis of three different devices: desktop, mobile and tablet.
At what time of day does the website get the most traffic for the three different devices?
At what time of day does the website perform best in terms of goals, for example the number of new users reaching 1,000 at 11am?
What is the difference between new users’ and returning users’ average session duration on specific pages?
Google Analytics provides an enormous amount of information, including website traffic data, that can answer these questions. To get started with the Google Analytics Reporting API, the first thing to do is create a project in the Google API console, enable the API, and create credentials. An Analytics service object is then used to query the Analytics Reporting API v4, retrieving data that contains metrics broken down by device dimensions. The ‘dimensions’ and ‘metrics’ used to create the desired reports in Python are shown in the figure below.
Figure 1. Useful ‘dimensions’ and ‘metrics’ to create reports in python
The Dimensions & Metrics Explorer lists and describes all the dimensions and metrics available through the Core Reporting API. Details of all the dimensions and metrics are in this link.
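Putting those pieces together, a query might look roughly like the sketch below. One common setup authenticates with a service account; the key file name and view ID are placeholders you would replace with your own, and the dimensions mirror the device and hour breakdowns used in this project.

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Placeholders: your own service-account key file and GA view ID.
KEY_FILE = "client_secrets.json"
VIEW_ID = "XXXXXXXX"

creds = service_account.Credentials.from_service_account_file(
    KEY_FILE, scopes=["https://www.googleapis.com/auth/analytics.readonly"])
analytics = build("analyticsreporting", "v4", credentials=creds)

response = analytics.reports().batchGet(body={
    "reportRequests": [{
        "viewId": VIEW_ID,
        "dateRanges": [{"startDate": "2017-10-01", "endDate": "2018-03-01"}],
        "metrics": [{"expression": "ga:newUsers"}],
        "dimensions": [{"name": "ga:deviceCategory"}, {"name": "ga:hour"}],
    }]
}).execute()
```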
Once the data has been collected successfully, it is ready for preparation and analysis. The pandas library is imported in Python to create data frames and process the extracted data. Finally, the Python graphing library plotly is used to create the data visualizations. Let us first take a look at the 24-hour traffic report.
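A sketch of that preparation step might look like this; it assumes the `response` variable returned by the API snippet above and parses the nested JSON rows into a data frame before plotting one line per device.

```python
import pandas as pd
import plotly.graph_objs as go
from plotly.offline import plot

rows = response["reports"][0]["data"]["rows"]
df = pd.DataFrame(
    [(r["dimensions"][0], int(r["dimensions"][1]),
      int(r["metrics"][0]["values"][0])) for r in rows],
    columns=["device", "hour", "new_users"])
df = df.sort_values(["device", "hour"])

# One line trace per device category (desktop, mobile, tablet).
fig = go.Figure([
    go.Scatter(x=g["hour"], y=g["new_users"], name=device)
    for device, g in df.groupby("device")
])
plot(fig)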
Hour of Day Report
Before exploring the actual data, we expect to see spikes during the day when people are awake, and predictably busy periods on weekdays, because people may take a look at the website just before school or work in the morning, at lunchtime, or right after school or work. Traffic levels may be lower on weekends, since most people choose to relax and travel. With that said, let us check the actual hourly traffic report from the plotly results.
Figure 2. Hour of Day Report
It is better to pick a large date range: the more data we have, the clearer the results in the hourly report will be, and here we picked data ranging from Oct 1, 2017 to Mar 1, 2018. This hourly traffic report shows the share of traffic each of the three devices gets, which is different from what we expected. In the figure, there is a huge spike at 11am for desktop, and the percentage for desktop starts decreasing from 4pm. Since the number of the tablet’s new users is much smaller compared to mobile and desktop, it is reasonable to set aside the tablet’s fluctuations for now and focus mainly on the other two devices. If we look at the mobile line chart, the percentage of new users from mobile remains stable from 11am and starts to decrease from 8pm.
Identifying the peak times of day when the website receives traffic helps the owner make decisions and plan promotions. For example, if the trend starts decreasing at night, the owner can consider adding a special offer on the website that ends at midnight to attract more new users and create a last-minute rush of registrations. If the peak is around lunchtime and the afternoon, as for desktop and mobile, then make sure to advertise the website heavily around 11am, just before lunchtime, to draw in the biggest number of visitors. In addition, this report is ideal for time-of-day bid adjustments on paid advertising platforms, such as Google AdWords or Bing Ads, which allow the owner to adjust bids at different times of the day.
Let us now turn to the Day of Week Report and take a further look at the share of the three devices for new users across the week.
Day of Week Report
Figure 3. Day of Week Report
This daily traffic report delivers different information. In the figure, we can see a spike on Tuesday followed by an overall decreasing trend until Saturday for desktop. For mobile, however, the overall trend is increasing from Wednesday through Saturday. For tablet, peak days appear on Tuesday and Friday.
The statistics for different days of the week are important to look at if the owner wants an idea of how a visitor’s mood might change throughout the week. We can highlight the best and worst days of the week and gain insight into which day in the middle of the week new users are most likely to make a decision online. For the best day, strategies like adding a special offer on the website that day may help new users make decisions faster, and moving event dates to Wednesday may also attract more users on that day. As with the hour-of-day analysis above, this report is ideal for time-of-day bid adjustments on paid advertising platforms as well.
Average Session Duration at Different Pages Report (New Users vs. Returning Users)
Figure 4. Average Session Duration at Different Pages Report (Three Devices, New Users)
Figure 5. Average Session Duration at Different Pages Report (Three Devices, Returning Users)
For both new users and returning users, the highest average session duration is on the courses page, except for mobile new users, for whom it appears on the learningpath page. After a discussion with the operations manager and a look at other statistics, we concluded that the learningpath page is not mobile-friendly, so mobile users are likely to spend more time on that page.
Now we can see how well the website performs at different times of the day and different days of the week, and what the average session duration is on different pages for the three different devices. Overall, since Google Analytics benefits the business by providing more insight into the people who visit the website, we can add more functions to the website, such as video views and live chat for new users, as well as promoting special offers for returning users, in order to keep people on the site.
Figure 6. Digital Marketing Landscape (Online source)
Google Analytics is a powerful tool. With the platform’s reporting capabilities, marketers can collect, present, and compare any data set or combination of data sets. Marketers who want to better understand their audience, and strengthen their marketing strategy, need to know how to best utilize all of the data available inside Google Analytics. By switching goals, they can get different valuable insights into how people interact with their websites.
That is all for the Google Analytics project for now. If you have any questions or comments, feel free to contact me or leave comments below. If you want to know more about this project’s client, WeCloudData, and have an interest in data science, feel free to check this link. Thank you so much for taking the time to read this blog.
| Digital Marketing Data Analysis Using Google Analytics API and Python | 104 | digital-marketing-data-analysis-using-google-analytics-api-and-python-112125f95dc2 | 2018-06-06 | 2018-06-06 23:48:07 | https://medium.com/s/story/digital-marketing-data-analysis-using-google-analytics-api-and-python-112125f95dc2 | false | 1,276 | null | null | null | null | null | null | null | null | null | Google Analytics | google-analytics | Google Analytics | 3,111 | Sandy Liu | null | f9b5f6e56088 | yanhuiliu104 | 13 | 7 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | f60682517af3 | 2018-01-23 | 2018-01-23 16:21:06 | 2018-01-23 | 2018-01-23 16:18:18 | 1 | false | en | 2018-01-23 | 2018-01-23 16:21:56 | 2 | 11212958a87a | 2.886792 | 0 | 0 | 0 | Written by Stuart Wilkes, Technology Writer, Sandcastles in Waterfalls | 5 | Artificial Intelligence: What’s in a Name?
Written by Stuart Wilkes, Technology Writer, Sandcastles in Waterfalls
Are many companies put off adopting Artificial Intelligence due to its name and dystopian image?
There is no doubt that one of the most exciting technologies around at present is Artificial Intelligence, or AI for short. During 2017 it made headlines around the globe for its potential to revolutionise almost every industry. From healthcare to manufacturing, from transportation to agriculture, AI is finding a niche to improve speed, efficiency, quality and performance. What’s not to like?
With any new technology there is always a downside, and with AI one that is causing the greatest concern is that it may lead to the loss of many jobs, millions in fact, as AI has the potential to undertake certain work far faster and with greater accuracy than its human counterparts.
AI is also being heralded as the starting point from which armies of autonomous killer robots will emerge, creating vast armies of unrelenting machines whose only purpose is to fulfil their programmatic obligations.
Will we see these, or other such doomsday scenarios unfold? That’s difficult to say, but it does appear that AI is getting a bad name for itself, one that is perhaps not truly justified. So what if its name was changed, would the adoption of AI into businesses be seen in a more favourable light?
When the term ‘artificial’ is used, its immediate connotation is that of replacement, and replacement scares people. What AI is actually going to do is extend our capabilities and support us in undertaking tasks that we are not very good at: tasks where we get tired, that are highly repetitive, or that require checking thousands of images, documents or files, where a human may miss a tiny detail because they were having an off day.
If we could replace ‘artificial’ with ‘augmented’ and thus AI becomes ‘augmented intelligence’ then understanding how it is here to extend our capabilities, as opposed to replacing them, becomes a very different proposition.
IBM Chief Executive Officer Ginni Rometty, when asked about the fear-mongering surrounding AI during an interview with Bloomberg Businessweek in the latter part of 2017, stated: “I would have preferred augmented intelligence. It’s the idea that each of us are going to need help on all important decisions.”
And that is the key: we are going to need help. There is no escaping the fact that over the last decade data creation and analysis has exploded. The rise of Big Data, of smartphones, of wearable technology, of IoT, has fuelled the generation of datasets of such magnitude that superlatives for how they compare in size to the contents of the complete Encyclopedia Britannica are impossible to make.
As this data growth has occurred, analysing it, mining it and generating fast and accurate analysis has become impossible for a human to do alone. The often-quoted example is that of medical scan images, which have increased exponentially in volume and resolution in the last few years, leading them to be used for early-stage diagnosis of the world’s most challenging diseases. AI, in the ‘augmented’ sense, scans them and highlights areas of concern; it does not produce a diagnosis, but suggests to the human expert which areas should be studied in more detail. This ‘augmentation’ of the skill set reduces the workload by such a margin that a greater number of people can obtain the scan in the first place, leading to more chances of catching medical issues early and preventing them from developing further.
Other such examples of augmentation can be found in agricultural crop management, where pesticides or fertilisers are applied only to the individual plants that need them, as opposed to the entire crop, leading to less pesticide use. Does this take away the skills of a farmer? No. But it will increase his yield whilst reducing his cost. That is a clear case of augmented intelligence.
There is a huge potential in AI, but it is not one to be feared. It is one to be embraced and utilised to extend, not replace our abilities. All it needs is a name change.
This article was originally published here and was reposted with permission.
Originally published at digileaders.com on January 23, 2018.
| Artificial Intelligence: What’s in a Name? | 0 | artificial-intelligence-whats-in-a-name-11212958a87a | 2018-01-23 | 2018-01-23 16:21:59 | https://medium.com/s/story/artificial-intelligence-whats-in-a-name-11212958a87a | false | 712 | Thoughts on leadership, strategy and digital transformation across all sectors. Articles first published on the Digital Leaders blog at digileaders.com | null | digitalleadersprogramme | null | Digital Leaders | digital-leaders-uk | DIGITAL LEADERSHIP,DIGITAL TRANSFORMATION,DIGITAL STRATEGY,DIGITAL GOVERNMENT,INNOVATION | digileaders | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Digital Leaders | Informing and inspiring innovative digital transformation digileaders.com | c0cad3f73a0 | DigiLeaders | 2,783 | 2,148 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | 32881626c9c9 | 2018-09-25 | 2018-09-25 13:19:47 | 2018-09-26 | 2018-09-26 14:39:08 | 5 | false | en | 2018-09-26 | 2018-09-26 14:39:08 | 1 | 1121b3aa4d93 | 3.531447 | 2 | 0 | 0 | “People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve… | 5 | Machine Learning Made Easy: What it is and How it Works
“People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.”
— Pedro Domingos
MACHINE LEARNING is the process by which a machine learns something. The end.
Only joking. We’re going to dig a little deeper than that, but it does go to show how simple the basic concepts of machine learning can be. In this article, we’re going to make machine learning so easy that a child could do it. That’s why we’re going to use LEGO.
As easy as playing with LEGO.
Our machine learning example is going to identify information about each of the LEGO bricks including color, size and surface area. By storing information on each of those bricks inside the algorithm’s database, it can start to predict which pieces you might need next. In fact, it can start to analyze every possible combination of pieces and to identify shapes that you might be trying to build. Think of it like Google’s autocomplete, but with LEGO.
Machine learning works by identifying how best to separate different data points. For example, if the pile of blocks above is the raw data, the algorithm might sort the data like this:
Sorted data. In the form of LEGO blocks.
For a machine learning algorithm to process data, it needs something called an objective function, which is basically what defines the rules and functionality that the algorithm uses. With machine learning, we tend to use the loss function, which basically means that the algorithm is given penalties when it makes mistakes. The goal, then, is to make as few mistakes as possible.
Machine learning algorithms are basically told to figure out the best way to arrange their LEGO blocks while scoring as few penalties as possible. And just like there’s more than one way of sorting a mixed bag of LEGO blocks, there’s more than one way of creating an algorithm, from decision trees to neural networks. The trick is identifying the best possible solution to get the job done.
Getting the job done.
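To ground this in code, here is a minimal sketch of the LEGO sorter, where every misclassified block counts as a penalty; the block features and bin labels are invented for the example.

```python
from sklearn.tree import DecisionTreeClassifier

# Toy blocks: [colour code, studs, height]  ->  bin label
blocks = [[0, 4, 1], [0, 8, 1], [1, 4, 3], [1, 2, 3], [2, 4, 1], [2, 2, 3]]
bins   = ["plate",   "plate",   "brick",   "brick",   "plate",   "brick"]

sorter = DecisionTreeClassifier().fit(blocks, bins)

# The loss: every misclassified block is a penalty; training tries to
# arrange the sorting rules so the penalty count is as low as possible.
predictions = sorter.predict(blocks)
penalties = sum(p != b for p, b in zip(predictions, bins))
print(f"penalties: {penalties}")    # 0 on this tiny toy set
print(sorter.predict([[1, 4, 1]]))  # classify a brand-new block
```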
Some machine learning algorithms are more flexible than others, but this flexibility usually comes with some other cost such as the amount of resources that are required to run the algorithm. On top of that, most developers will run different iterations of different algorithms to see which gets the best results.
We usually call these different iterations “models”, and it’s these models that we then tap into over time. With our LEGO blocks, the model would be able to classify new blocks as they’re added, and not just the blocks it was originally given. In fact, the true power of machine learning comes from the fact that the more it’s used and the more data that it processes, the better it gets.
Hats off to machine learning.
Creating a machine learning algorithm isn’t too different to regular programming because they both require human input. The difference is that a regular programmer would have to wrap their heads around the LEGO challenge and devise a set of rules for the machine to blindly follow. A machine learning programmer pretty much sets the parameters, provides the inputs and leaves the machine to identify the solution itself.
In the vast majority of cases, the biggest problem for these algorithms is their ability to understand data in the first place, especially when it comes from disparate sources. In the healthcare industry, for example, machine learning is held back by a lack of interoperable data sets and missing or incorrect data points that send it off in the wrong direction.
That’s the bad news, but the good news is that we’re still in the early days of machine learning, and the potential applications in the field of healthcare make it one of the most exciting technologies on the planet. Now you know how it works in such a way that you could explain it to a 10-year-old. That’s a good thing, because it’s today’s 10-year-olds who will build the machine learning systems that power the future. Bring it on.
Want to learn more?
I talk more about data, artificial intelligence and machine learning in my new book, The Future of Healthcare: Humans and Machines Partnering for Better Outcomes. Click here to buy yourself a copy.
| Machine Learning Made Easy: What it is and How it Works | 20 | machine-learning-made-easy-what-it-is-and-how-it-works-1121b3aa4d93 | 2018-09-26 | 2018-09-26 17:02:48 | https://medium.com/s/story/machine-learning-made-easy-what-it-is-and-how-it-works-1121b3aa4d93 | false | 715 | Data Driven Investor (DDI) brings you various news and op-ed pieces in the areas of technologies, finance, and society. We are dedicated to relentlessly covering tech topics, their anomalies and controversies, and reviewing all things fascinating and worth knowing. | null | datadriveninvestor | null | Data Driven Investor | datadriveninvestor | CRYPTOCURRENCY,ARTIFICIAL INTELLIGENCE,BLOCKCHAIN,FINANCE AND BANKING,TECHNOLOGY | dd_invest | Machine Learning | machine-learning | Machine Learning | 51,320 | Emmanuel Fombu | Emmanuel Fombu, MD, MBA, is a physician, speaker, the author of The Future of Healthcare and a healthcare executive turned Silicon Valley entrepreneur. | c48d21f18944 | emmanuel_fombu | 16 | 5 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2017-09-28 | 2017-09-28 09:51:19 | 2017-09-28 | 2017-09-28 10:09:03 | 3 | false | en | 2017-09-28 | 2017-09-28 14:18:44 | 41 | 112342299eca | 5.05566 | 20 | 0 | 0 | One’s inbox is of course sacred and hallowed ground (unless you’re one of those people with thousands of unread emails — have a word with… | 5 | Don’t forget to consume more content
Unbabel Newsletter #001–27 Sept 2017
Illustration by Tetley Clarke.
One’s inbox is of course sacred and hallowed ground (unless you’re one of those people with thousands of unread emails — have a word with yourselves), but extensive whiteboarding sessions showed that there was enough of an overlap in the Venn diagram between “stuff you wanna hear” and “stuff we wanna say” to make a bloody good go of this.
Below you’ll find some of the best things we’ve been reading and watching around the web (the bit on the whiteboard that had “value proposition!” written under it), but we’ve also been busy with our own content production in recent months.
We’ve written pieces that show how AI is changing modern business (as well as different industries like SaaS, gaming, travel and e-commerce). We’re fascinated with the misperception that English is the modern lingua franca, which we explored in our three-part series, The Rise and Fall of the English Language.
Closer to home, we’ve also clarified exactly how Unbabel works, how we keep customer data safe and the challenges of scaling translation quality and speed, even with the world’s most advanced quality estimation system. And none of this would be possible without our global community of 50,000 bilingual editors — people like Rebecka and Ray.
If you like what you’ve read today, please consider forwarding to a friend and getting them to subscribe.
Ciao for now,
Matthew Carrozo
PS: We’ve just launched Unbabel for Freshdesk.
ARTIFICIAL INTELLIGENCE
How to Regulate Artificial Intelligence — Oren Etzioni, of Paul Allen’s Institute for Artificial Intelligence, pens this op-ed in the NY Times where he suggests mirroring Asimov’s three laws of robotics for AI.
Putin says the nation that leads in AI ‘will be the ruler of the world’ — when industry hype turns to geopolitical rhetoric. “Whoever becomes the leader in this sphere will become the ruler of the world.”
Why AI Companies Can’t Be Lean Startups — a need for large, proprietary datasets makes it more difficult for AI-driven startups, but far from impossible, and potentially with much deeper competitive moats when it works.
LANGUAGE
The digital language divide — beautiful Guardian Labs piece that asks, how does the language you speak shape your experience of the internet?
Lost Languages Discovered in One of the World’s Oldest Continuously Run Libraries — a combination of digital photography, coloured lighting and algorithms have uncovered ancient texts, including two long-lost languages that hardly exist in the historical record.
After Brexit, EU English will be free to morph into a distinct variety — following Brexit, English may or may not cease to be an official EU language, as it is also the most spoken second language across the continent (at 38% and growing).
TAKING CARE OF BUSINESS
For the love of God, please tell me what your company does — your jargon, buzzwords and marketingspeak are poisoning the well. Hire a copywriter.
SaaS Explosion Creates SaaS Chaos: Analyzing 8 Years of SaaS Data — big chunks of interesting datapoints here. My favourite stat? 58% of the Top 50 SaaS companies are outside of Silicon Valley.
How to Level up from $100/month SaaS Deals to $1M Enterprise Sales — we’re definitely on this journey at Unbabel. Some great insights here.
OK COMPUTER
The Future of Computing Depends on Making It Reversible — so we’re maybe a decade from hitting the wall of Moore’s Law, which, like the Malthusian Dilemma before it, will be resolved by the human ingenuity that conjures when humanity knows it’s about to hit a wall. In this case, by questioning what we think we know about thermodynamics and information theory.
The Brain-Machine Interface Isn’t Sci-Fi Anymore | Backchannel — “The text on the screen is being generated not by his fingertips, but rather by the signals his brain is sending to his fingers. The armband is intercepting those signals, interpreting them correctly, and relaying the output to the computer, just as a keyboard would have.”
Do “They” Really Say: “Technological Progress Is Slowing Down”? — check your exponential privilege because the iPhone X in 1957 would cost $150 trillion (PSA: here’s what a trillion dollars looks like) and would require 150 terawatts of power to turn on (30 times the world’s current generating capacity).
iPhone AR Selfie Revolution — sexting as a panda animoji isn’t the only revolutionary thing to be introduced with Apple’s new top shelf fondleslab. Face tracking might only unlock your phone later this year, but what if software could react to your facial movements and emotions?
The First Web Apps: 5 Apps That Shaped the Internet as We Know It — a brilliant look back at the dawn of web-based software in the 1990s over on the Zapier blog.
THE FACEBOOK
Inside Facebook’s AI Workshop — an engineering focus on business impact over the “build a better algorithm” is obviously working very well for Zuck & Co.
How the GDPR will disrupt Google and Facebook — the EU’s upcoming General Data Protection Regulation has the potential to completely disrupt the two personal-data-harvesting giants by forcing explicit user permission for the billions that have been made off of user inaction and obfuscation.
Facebook’s war on free will — “Facebook would never put it this way, but algorithms are meant to erode free will, to relieve humans of the burden of choosing, to nudge them in the right direction. Algorithms fuel a sense of omnipotence, the condescending belief that our behaviour can be altered, without our even being aware of the hand guiding us, in a superior direction.”
I HEARD IT ON THE BLOCKCHAIN
How does blockchain really work? I built an app to show you. — a blockchain is a “distributed database that is used to maintain a continuously growing list of records, called blocks.” But how does that work at a software level?
Fat Protocols — the Internet stack, in terms of how value is distributed is composed of “thin” protocols (TCP/IP, HTTP, SMTP, etc) and “fat” applications (Google, Facebook, etc). This relationship is flipped within the blockchain, with value accruing at the shared protocol layer, with only a fraction of it at the applications layer.
MOAR CONTENT
10 Facts about the James Webb Space Telescope — move over Hubble, the Webb telescope will be so sensitive, it could detect the heat signature of a bumblebee at the distance of the moon.
Mathematicians Measure Infinities, Find They’re Equal — imagine a set of all the natural numbers from 1 to ∞. Now imagine a set of all even numbers from 1 to ∞. The latter set is “smaller,” right? That’s what mathematicians used to think…
Proposed Flag of Mars — nicely thought through, with references to Martian topology, and an aspect ratio of 17:32, which is the same ratio of an Earth year to a Mars year.
Prof Galloway’s Career Advice — the coolest dude.
Sorting 2 Metric Tons of Lego — human beings are the best.
Thanks for your time! 🙏
Give some claps below if you liked what you read. 👏
Subscribe below to get this newsletter in your inbox once every two weeks. 📰
New AI Technology can Diagnose Humans with Cancer
By Jack Tendler
The computer uses a neural network to compare pictures given to it with labeled pictures of cancer and tries to see if the given picture is cancer and if so where the tumor is. Photo courtesy of ETeknix.
A group of London doctors has created artificial intelligence software that can correctly diagnose humans with different types of cancer, but with problems such as vulnerability to computer hackers, the technology may not be worth trusting yet. Some people already distrust it, but the benefits could certainly outweigh the negatives.
The technology uses a neural network which is a form of modern artificial intelligence (AI). Programmers feed it thousands of photos containing examples of cancer and then they give it just as many examples of healthy patients. Over time the machine learns what cancer looks like and what a healthy human looks like.
The technology becomes extremely accurate, and over time it becomes just as accurate as, if not more accurate than, a trained professional. It can identify skin cancer, predict when a heart will fail, and even look for kinds of brain cancer.
“Most dermatologists were outperformed by the CNN [Convolutional Neural Network],” the research team wrote in a paper published in the journal Annals of Oncology. “On average, human dermatologists accurately detected 86.6% of skin cancers from the images, compared to 95% for the CNN,” the Guardian reports.
On top of determining skin cancer, AI technology can actually predict when someone’s heart is about to fail.
“A team of doctors in London have trained AI to predict when the heart will fail…It could transform treatment in ophthalmology, dermatology, and Radiology,” the BBC reports.
Humans apparently can predict things like when a heart will fail around 80 percent of the time. While 80 percent may seem like a decent score on a math test, not being able to identify when someone’s heart will fail 20 percent of the time is not good at all. While the AI technology trained by London doctors isn’t perfect, the BBC says it “has greatly outperformed human cardiologists.”
While talking about the benefits of relying on AI technology, another important thing to consider is the price. Using AI technology is actually significantly cheaper than using a trained professional. In an interview with the BBC, healthcare tsar Sir John Bell talked about the price benefits of using this technology.
"There is about $2.97 billion spent on pathology services in the NHS (National Health Service)," said Bell. "You may be able to reduce that by 50 percent. AI may be the thing that saves the NHS."
The last benefit that this technology has over humans is fatigue. Humans get tired. A trained doctor can have years of experience, but they will always be prone to this one major flaw: they can't be available 24/7 without sacrificing accuracy. A machine is always available and never takes breaks. Surprisingly enough, this could be the most useful feature of all, since constant availability greatly expands access to the system.
With all of these benefits, it may be easy to immediately love the idea of using AI in the medical field, but there are of course drawbacks too. As technology advances, so do different forms of hacking.
While it is easy to guess that hackers wouldn't focus on the medical field, that would be a dangerous assumption. If we were to rely on AI to diagnose cancer patients and the technology were to be hacked, thousands of lives would be completely ruined. People who don't need treatment would get it, and people who desperately need it could go without it.
According to the ITRC Data Breach Reports, “Medical and Healthcare entities made up 35.4 percent of data breach targets (276 breaches),” and that data was only from 2015.
While a three-year difference may not seem like a lot, in the technology world that feels like a decade. The landscape of the technology field changes so quickly that a great deal can shift in just one year.
According to Revision Legal, the number of cyber attacks keeps going up with no sign of stopping.
"Cyber-attacks are happening in 2017 at double the rate of 2016," Revision Legal reports. That would include double the amount in the medical field. And again, keep in mind that the ITRC data is from 2015. This means the number of medical breaches could be nearing the thousands.
Another thing to keep in mind is that if the AI technology were to be implemented nationally, it could add more incentive for computer hackers to attack hospitals and try and alter the AI machine.
It's a huge risk. One of the biggest advantages humans have over these AI computers is that they can't be affected by a computer hack.
If hospitals were to incorporate this technology they would have to potentially hire more programmers just to ensure that the technology is safe from outside harm.
The possible benefits of using new AI technology could potentially be life saving.
However, we must first get over the mental hurdle of allowing computers to make these life changing decisions for us.
We need to ensure that the technology is being monitored frequently to make sure there is no hack to either change the outcome of a scan, or to collect private data. If we could do this, then this technology could theoretically change the world.
Debug a Deep Learning Network (Part 5)
You have built a deep network (DN) but the predictions are garbage. How are you going to troubleshoot the problem? In this article, we describe some of the most common problems in a deep network implementation. But if you have not read Part 4: Visualize Deep Network models and metrics, please read it first. We need to know what to look for before fixing anything.
The 6-part series for “How to start a Deep Learning project?” consists of:
· Part 1: Start a Deep Learning project.
· Part 2: Build a Deep Learning dataset.
· Part 3: Deep Learning designs.
· Part 4: Visualize Deep Network models and metrics.
· Part 5: Debug a Deep Learning Network.
· Part 6: Improve Deep Learning Models performance & network tuning.
Troubleshoot steps for Deep Learning
In early development, we are fighting multiple battles at the same time. As mentioned before, Deep Learning (DL) training consists of millions of iterations to build a model. Locating bugs is hard, and things break easily. Start with something simple and make changes incrementally. Model optimizations like regularization can always wait until after the code is debugged. Focus on verifying that the model is functioning first.
Set the regularization factors to zero.
No other regularization (including dropouts).
Use the Adam optimizer with default settings.
Use ReLU.
No data augmentation.
Fewer DN layers.
Scale your input data, but skip unnecessary pre-processing.
Don't waste time on long training iterations or large batch sizes.
Overfitting the model with a small amount of training data is the best way to debug deep learning. If the loss does not drop within a few thousand iterations, debug the code further. Achieve your first milestone by beating the odds of random guessing. Then make incremental modifications to the model: add more layers and customization. Train it with the full training dataset. Add regularization to control the overfit by monitoring the accuracy gap between the training and validation datasets.
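As a minimal sketch of this overfit-a-tiny-subset check (Keras is assumed here; the toy data and network sizes are illustrative stand-ins, not values from this article):

import numpy as np
from tensorflow import keras

# Stand-in for a tiny slice of the real training set: 32 samples, 20 features, 10 classes.
rng = np.random.RandomState(0)
x_tiny = rng.randn(32, 20).astype("float32")
y_tiny = rng.randint(0, 10, size=32)

# Deliberately simple: ReLU, default Adam, no dropout, no weight decay, no augmentation.
model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x_tiny, y_tiny, epochs=500, verbose=0)

# If the implementation is correct, the loss on this tiny set should drop toward zero.
print("loss on tiny subset:", model.evaluate(x_tiny, y_tiny, verbose=0))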
If stuck, take out all bells and whistles and solve a smaller problem.
Initial hyperparameters
Many hyperparameters are more relevant to model optimization. Turn them off or use default values. Use the Adam optimizer: it is fast, efficient, and its default learning rate does well. Early problems mostly come from bugs rather than from the model design or tuning. Go through the checklist in the next section before any tuning; it covers problems that are more common and easier to verify. If the loss still does not drop after verifying the checklist, tune the learning rate. If the loss drops too slowly, increase the learning rate by a factor of 10. If the loss goes up or the gradient explodes, decrease the learning rate by a factor of 10. Repeat the process until the loss drops gradually and nicely. Typical learning rates are between 1 and 1e-7.
Checklist
Data:
Visualize and verify the input data (after data pre-processing and before feeding to the model).
Verify the accuracy of the input labels (after data shuffle if applicable).
Do not feed the same batch of data over and over.
Scale your input properly (likely between -1 and 1 and zero centered).
Verify the range of your output (e.g. between -1 and 1).
Always use the mean/variance from the training dataset to rescale the validation/testing dataset.
All input data to the model has the same dimensions.
Assess the overall quality of the dataset. (Are there too many outliers or bad samples?)
Model:
The model parameters are initialized correctly. The weights are not set to all 0.
Debug layers whose activations or gradients diminish or explode (working from the rightmost to the leftmost layers).
Debug layers whose weights are mostly zero or too large.
Verify and test your loss function.
For a pre-trained model, verify that your input data range matches the range used in the model.
Dropout should always be off in inference and testing.
Weight initialization
Initializing the weights to all zeros is one of the most common mistakes, and the DN will never learn anything. Weights should be initialized with a Gaussian distribution.
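A minimal numpy sketch of one common variant (the fan-in-scaled "He" initialization, which pairs well with the ReLU recommended earlier; the layer sizes here are illustrative):

import numpy as np

rng = np.random.RandomState(0)
fan_in, fan_out = 256, 128

# Zero-mean Gaussian, scaled by the fan-in so activations neither vanish nor explode.
W = rng.normal(loc=0.0, scale=np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))
b = np.zeros(fan_out)          # biases may start at zero; weights must not

print("weight std:", W.std())  # about sqrt(2/256) ≈ 0.088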
Scaling & normalization
Scaling and normalization are well understood but remain among the most overlooked problems. If input features and node outputs are normalized, the model will be much easier to train. If this is not done correctly, the loss will not drop regardless of the learning rate. We should monitor the histograms of the input features and of the nodes' outputs for each layer (before the activation functions). Always scale the input properly. For the nodes' outputs, the perfect shape is zero-centered with values that are not too large (positively or negatively). If not, and we encounter gradient problems in that layer, apply batch normalization for convolution layers and layer normalization for RNN cells.
Loss function
Verify and test the correctness of your loss function. The loss of your model must be lower than the one from random guessing. For example, in a classification problem with 10 classes, the cross entropy loss for random guessing is -ln(1/10).
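A quick check of that baseline in plain Python:

import math

num_classes = 10
baseline = -math.log(1.0 / num_classes)  # cross entropy of uniform random guessing
print(baseline)                          # ≈ 2.3026; the initial loss should sit near this,
                                         # and training must push it well below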
Error analysis
Review where the model is doing badly (its errors) and improve on it. Visualize your errors. In our project, the model performs badly on images with a highly entangled structure. Identify the model's weaknesses to decide what to change. For example, add more convolution layers with smaller filters to disentangle small features. Augment data if necessary, or collect more similar samples to train the model better. In some situations, you may want to remove those samples and constrain yourself to a more focused model.
Regularization tuning
Turn off regularization (overfit the model) until it makes reasonable predictions.
Once the model code is working, the next tuning parameters are the regularization factors. We increase the volume of our training data and then increase the regularizations to narrow the gap between the training and the validation accuracy. Do not overdo it as we want a slightly overfit model to work with. Monitor both data and regularization cost closely. Regularization loss should not dominate the data loss over prolonged periods. If the gap does not narrow with very large regularizations, debug the regularization code or method first.
Similar to the learning rate, we change testing values on a logarithmic scale (for example, by a factor of 10 at the beginning). Beware that each regularization factor can be of a totally different order of magnitude, and we may tune those parameters back and forth.
Multiple cost functions
For the first implementations, avoid using multiple data cost functions. The weight for each cost function may be of a different order of magnitude and will require some effort to tune. If we have only one cost function, it can be absorbed into the learning rate.
Frozen variables
When we use pre-trained models, we may freeze those model parameters in certain layers to speed up computation. Double-check that no variables are frozen incorrectly.
Unit testing
Though less often discussed, we should unit test core modules so the implementation is less vulnerable to code changes. Verifying the output of a layer may not be easy if its parameters are initialized with a randomizer. Otherwise, we can mock the input data and verify the outputs. For each module (layer), we can verify:
the shape of the output in both training and inference.
the number of trainable variables (not the number of parameters).
Dimension mismatch
Always keep track of the shape of a Tensor (matrix) and document it inside the code. For a Tensor with shape [N, channel, W, H], if W (width) and H (height) are swapped, the code will not generate any error as long as both have the same dimension. Therefore, we should unit test our code with a non-symmetrical shape. For example, we unit test the code with a [4, 3] Tensor instead of a [4, 4] Tensor.
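A minimal sketch of such a test (Keras is assumed; the layer under test is an illustrative stand-in):

import numpy as np
from tensorflow import keras

layer = keras.layers.Dense(7)
x = np.zeros((4, 3), dtype="float32")  # deliberately non-symmetrical: 4 samples, 3 features
y = layer(x)

assert y.shape == (4, 7)               # the output shape matches the documentation
# kernel (3x7) and bias (7): 2 trainable variables, 28 trainable parameters in total
assert len(layer.trainable_variables) == 2
assert sum(int(np.prod(v.shape)) for v in layer.trainable_variables) == 28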
Part 6
If you have any tips on debugging, feel free to share them in the comment section. You have now passed one of the most difficult parts of DL. Let's beat the state-of-the-art model in Part 6: Improve Deep Learning Models performance & network tuning.
How we quantify traffic in our Routing Platform
by Victor Sprengel, Lucas Brunialti, Tobin Fulton, Daniel Miranda
Cobli (link in Portuguese) is a company dedicated to making fleet management simple. We do this by providing an easy-to-use platform for managers to better control and optimize their vehicle fleets. One of the tools in our platform is multi-vehicle routing (link in Portuguese), which helps our clients get the most out of their available vehicles, for example by minimizing gas and maintenance costs and maximizing the number of deliveries done per day.
Cobli’s Multi-Vehicle Routing Platform
In our system, we let the user input the number of available vehicles, addresses representing deliveries or services, and other relevant information such as time windows and lunch breaks. We then suggest optimal routes, assigning each vehicle a set of addresses to be visited in a specific order.
Distance and time are the most important factors in generating these optimal routes. In this post we will take a further look into this service and the search for optimality.
The algorithm used to find the optimal solution needs cost matrices, which represent the distance or time required to travel between each pair of addresses. Initially, we used Google Maps' services and APIs to run these queries. However, we found that our larger clients needed support for more vehicles and addresses per route, and unfortunately, the API costs at this scale are prohibitive.
We thus needed a new source of cost matrices, and we discovered that we could reach enough accuracy for our needs using a different approach: building our own matrices from other datasets we also had access to.
The origin of the new dataset is the API to which we delegate running the algorithm mentioned before. Their service accepts cost matrices as part of the request, which in our case came from Google. However, this API has its own data and doesn't need matrices from outside. That being said, traffic is not taken into account in this API's dataset, which reduces the quality of the routes that we generate in our system. Given this reality, we decided to create an in-house solution to quantify traffic in these matrices.
The solution we came up with was to multiply the API's cost matrices by a parameter, named the speed-factor ⍺, which simulates the existence of traffic. And so we reduced our initial problem to finding an optimal ⍺.
To find ⍺, we used our dataset of historic routing requests, which is composed of all routes ever generated in our routing system. For each routing request to our system, we acquired both a “raw” cost matrix and an adjusted matrix with traffic taken into account. With all these matrices in hand, we created two large matrices, G and H, following this procedure:
Determine the maximum dimension of a matrix (all of them are square).
Pad the cost matrix of each request with zero rows and columns until it has the dimension found in step 1.
Flatten each of these matrices into vectors.
The matrices from Google are inserted as rows in G and the ones from the API in H. Of course the indices must match (i.e., request i must be row i in both matrices).
Then we can find ⍺ by solving the following optimization problem: minimize ‖⍺H - G‖_F over ⍺.
The method used to compare the two matrices is the Frobenius norm, which is a natural norm that is also easy to compute. Setting the derivative with respect to ⍺ to zero, we find that the answer is ⍺ = trace(HᵀG) / trace(HᵀH).
Notice how the number of zeros added during the second step has absolutely no effect on ⍺, since inside the traces the artificial zeros are only ever multiplied by other artificial zeros.
In fact, we are not calculating an optimal ⍺ in the strict sense that it will always be the best solution. But it is indeed the best solution considering all past usages of our routing system. That means it is biased by the locations used for routing in our system.
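A minimal numpy sketch of the padding-and-estimation procedure above (the two tiny request matrices are illustrative stand-ins for our dataset of historic requests):

import numpy as np

rng = np.random.RandomState(0)

# Illustrative per-request cost matrices from the two sources (real data has many more).
google_mats = [rng.rand(3, 3), rng.rand(5, 5)]
api_mats = [rng.rand(3, 3), rng.rand(5, 5)]

n = max(m.shape[0] for m in google_mats)  # step 1: the maximum dimension

def pad_and_flatten(m, n):
    out = np.zeros((n, n))                # step 2: fill with zeros up to n x n
    out[:m.shape[0], :m.shape[1]] = m
    return out.ravel()                    # step 3: flatten into a vector

G = np.stack([pad_and_flatten(m, n) for m in google_mats])  # step 4: one row per request
H = np.stack([pad_and_flatten(m, n) for m in api_mats])

# alpha = trace(H'G) / trace(H'H); both traces reduce to elementwise sums.
alpha = np.sum(H * G) / np.sum(H * H)
print("speed-factor alpha:", alpha)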
Importantly, tests using the calculated speed-factor have been successful with little to no practical difference. Hooray!
But that doesn’t mean we are finished. In fact, we will continue to use our clients’ data to improve our routing service such as using a different ⍺ for different times of the day or week.
By the way, if you’re interested in solving these kinds of real-world problems as well as building an awesome end-to-end product that’s helping to make Brazil more efficient, we’re hiring!
Also, follow us on Facebook, LinkedIn, Twitter, and Instagram as well as our Cobli Blog (links in Portuguese)!
Outlier Detection In Music Playlists
Statistical analysis of playlist’s mood
Playlists usually combine songs under a single category, such as an artist's "Best of", a genre and epoch, an activity (e.g. workout or cooking playlists), or a mood, such as melancholic or joyful playlists. We will analyse the coherency of the latter through the evaluation of each song's sentiment score and the use of the median absolute deviation (MAD) to robustly measure the variability of the playlist's mood.
At Anghami, we focus on delivering the best user experience and quality content for music lovers. On our path towards continuous improvement and as a data-driven company, we have turned to statistics to further enhance our customer’s listening sessions.
Converting words into numbers
We’ve had adjectives assigned for every song in the playlist such as exciting, sensual, depressive, nostalgic, etc… and in order to analyse them, we created a unidimensional projection with values ranging from -1 (most extreme negative) to +1 (most extreme positive).
For this task, we’ve chosen VADER Sentiment Analysis [1], an open-source lexicon and rule-based sentiment analysis tool. Words that aren’t in VADER’s dictionary were replaced with their closest synonyms with the help of www.thesaurus.com.
Standard deviation & why we didn’t use it
The standard deviation (SD) measures how much the members of a group differ from the group's mean value. A small SD indicates that the values are tightly clustered around the mean, while a large SD means the values are spread over a wide range. The following plot illustrates how the standard deviation is affected by the proximity of the values to the mean. Both populations have the same mean, but their values are spread differently.
Example of samples from two populations with the same mean but different standard deviations. Red population has mean 100 and SD 10; blue population has mean 100 and SD 50. Source
A simple method of determining outliers in a set is to find all entries that are two standard deviations away from the mean. Let’s take a look at this set of numbers: [1 1 2 2.2 3 3.5 4.1 9]
The mean is 3.2250 and SD is 2.5828. The last entry (9) is greater than mean+2*SD (8.3905). We have successfully detected the outlier here, but that’s not enough.
Let’s do the same for this set of numbers: [1 1 2 2.2 3 3.5 4.1 19 62]
mean=10.867, std=19.972
The red line is mean+2*SD (50.811); we failed to identify (19), which logically should be considered an outlier. The reason is that the standard deviation, which is based on squared distances from the mean, is greatly influenced by the large deviations of extreme outliers, in our case (62).
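A minimal numpy illustration of that failure, using the same numbers:

import numpy as np

a = np.array([1, 1, 2, 2.2, 3, 3.5, 4.1, 19, 62])
mean, sd = a.mean(), a.std(ddof=1)  # 10.867 and 19.972 (sample standard deviation)
cutoff = mean + 2 * sd              # 50.811
print(a[a > cutoff])                # [62.] -- the outlier 19 slips through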
Median absolute deviation
The median absolute deviation (MAD) is a robust measure of statistical dispersion and is more resilient to a few extreme outliers. It is defined as the median of the absolute deviations from the data's median, simply MAD = median(abs(Xᵢ - median(X))).
Let’s compute the robust zScore of each of the previous data points using the MAD:
octave:1> a = [1 1 2 2.2 3 3.5 4.1 19 62];
octave:2> abs(a - median(a)) / mad(a,1)
ans =
1.81818 1.81818 0.90909 0.72727 0.00000 0.45455 1.00000 14.54545 53.63636
octave:3> a(find((abs(a - median(a)) / mad(a,1)) > 2))
ans =
19 62
Using the same cut-off factor of 2, we find that the outliers here are entries 19 and 62.
It’s not a perfect solution, but it’s much better than the less robust std. dev.
Wrapping up
For every playlist, we translate the song’s mood into a normalized score using VADER and compute its robust zScore in terms of the median absolute deviation. The playlists are ordered using the distance between their maximum and minimum sentiments and the sum of their songs’ zScores; thus, if a playlist contains a negative song among mostly positive ones, it will be surfaced first and the outlier songs highlighted for review.
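A minimal Python sketch of that pipeline (the open-source vaderSentiment package is assumed, and the playlist moods are illustrative):

import numpy as np
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

moods = ["joyful", "cheerful", "exciting", "melancholic"]  # one adjective per song
analyzer = SentimentIntensityAnalyzer()
scores = np.array([analyzer.polarity_scores(m)["compound"] for m in moods])  # in [-1, 1]

med = np.median(scores)
mad = np.median(np.abs(scores - med))            # median absolute deviation
z = np.abs(scores - med) / max(mad, 1e-9)        # robust zScore, guarding against MAD = 0
print([m for m, zi in zip(moods, z) if zi > 2])  # songs flagged as mood outliers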
What’s next?
Now that we’re able to analyse playlists over a single attribute (sentiment), we began looking into methods that efficiently deal with multiple dimensions. Hopefully this will be the topic of another story.
[1] Hutto, C.J. & Gilbert, E.E. (2014). VADER: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text. Eighth International Conference on Weblogs and Social Media (ICWSM-14). Ann Arbor, MI, June 2014.
AI in 3 Minutes: What Is Machine Learning For?
AI Learners
In an earlier post we talked about the 3 types of Machine Learning. In this post we will cover what it is actually for.
Vinyals & Le, Google (2015)
Machine Learning is a field of Artificial Intelligence in which a program can learn without being explicitly programmed (Arthur Samuel). In recent years, we have seen all kinds of interesting ML applications: from agents that can hold philosophical conversations about morality to agents that can detect whether someone has cancer from a photo of a skin lesion with greater accuracy than doctors.
In recent years, tasks we considered irreplaceable by computers, tasks in the service sector, have come to be performed better by computers. Lawyers, economists, and even programmers have had to adapt to a market where Machine Learning techniques can outperform humans. And not only in services: ML has also begun to perform very well in creative areas, and in areas where you would think only a human could succeed.
A neural network that captions images, describing what is happening in them
Andrew Ng separates the application areas of ML into three:
Database Mining: With access to so much information, something that has happened thanks to the web, ML becomes more powerful. Although ML has been researched since the 70s, it has only been truly successful for about the last 10 years, thanks to the amount of data available and the processing power of today's computers. Because governments and companies build databases with all kinds of information (sensor data, medical histories, where users click), ML draws on all of this to create very interesting solutions.
Applications we cannot program by hand: Imagine you had to use conventional techniques to program a system that detects whether a given image appears in a photo. Another example: imagine you had to write a program that tells a car how to drive in every possible situation. There are hundreds of thousands of possibilities, and writing a program for this can be not just tedious but impossible. ML can be applied in these fields because it learns on its own: Computer Vision, Natural Language Processing, and Autonomous Vehicles are three areas that fall into this category.
Programs that adapt themselves: When Spotify suggests a song, Netflix a movie, or Amazon a product, the system is adapting specifically to you.
Machine Learning is everywhere. That is why it is important to understand how it works and the impact, both positive and negative, that this great branch of Artificial Intelligence can have.
Some tips for learning:
Don't get carried away by the hype around a tool. Learn the basics first.
Understand the mathematics behind the models. At the same time, don't let the math intimidate you.
Build strong fundamentals: understand how to approach a Machine Learning problem and which techniques can be used.
Learn the most common applications of ML in industry. Companies like Google, Facebook, and Netflix have blogs where they publish their results.
There are free courses. Check Coursera, Udacity, and EdX for free courses on all kinds of topics.
Learn with others!
2018 Goals Monthly Review: January ed.
A sedate start.
Well, a lot of promises made and goals set. Did I make progress? Let’s see.
TL;DR:
A summarised version of this post.
Link: https://docs.google.com/spreadsheets/d/1UEYtJzx-SUdpHdxKeW2OZQYZLOQTCh4z6iZblB1eCg0/edit?usp=sharing
Although the screenshot gives the gist, I'd like to expand on three aspects.
Movies
While I planned to skip watching movies until July, I broke this resolution a couple of days back when I watched AlphaGo (the documentary about Google's AI engine beating the world champion Lee Sedol at Go) on Netflix. I've seen people lauding this movie recently, so I decided to watch it, and boy, it was epic. The narration and screenplay mirrored real-life events, and it was very emotional to say the least. In the man vs machine battle, although the supporters of ML/AI wanted the machine to win, most of us deep down craved for the man to come out on top. The moment where Lee resigns in the second game, slipping a white stone onto the board, his fingers still shivering and a look of disbelief on his face, was perfectly captured. I thoroughly enjoyed the movie, partly because I could really connect to the algorithms, but I highly recommend this documentary to everyone as a way to grasp the enormity of AI's advancements and the human creativity involved in creating this AI.
The big moment in the second game.
Book of the month
A Thousand Splendid Suns
I started the book "The Power of Moments" by Chip & Dan Heath after a lot of recommendations, drawn by its immediate applicability to my life. While the key concepts, about creating memorable moments to elevate people, are quite well explained, I realised I was forcing myself to read through it, and it wasn't an enjoyable read. I decided to chuck the book and picked up A Thousand Splendid Suns by the famous Khaled Hosseini.
This book spans a 40-year story about two Afghan women, the kind of lives they live, the grief and oppression they go through, and the hope that gets them through the day. Their grit and determination put a lot of things in perspective for most of us in this materialistic world. The things we take for granted, such as having two meals a day, a peaceful home to live in, access to education, and the freedom to express love: they had none of these, and when this happens, humans go through an enormous amount of grief and start valuing the small things in life. Often these people are stronger than we think. Mariam and Laila have left a strong impact on me. There's a passage in the book where Laila's and Mariam's eyes are transfixed on each other and all they can manage is a smile. It was such a satisfying moment to read through. Also, the letter. Oh, it was heartbreakingly emotional. Tears rolled in an instant. With that said, I'd like to read more of Khaled sometime later this year.
The 1TB RAM server for Kaggle
Rohan and I teamed up for the grocery forecasting Kaggle challenge. Though we started in the last 10 days of the competition, we made good progress in the final week and pushed hard to build really good individual models. It was the first time I was using servers with 1 TB of RAM and 180 CPU cores. We believed we had a pretty strong validation framework, and we ended up 90th out of 1,400 participants on the public leaderboard on the last day. I woke up at 5:30 AM to see the final standings and was shocked to see us fall by 530 places on the private leaderboard. Later, when we checked, we realised some of our initial models would have done better, but we missed a couple of tricks that put us behind. It was a sad ending to a week-long mania at my place. Anyway, lessons learnt, and we'll surely come back stronger.
Our ensemble scored way lower than our single models. XGB really overfitted in comparison to LGB, in line with many other competitors.
Overall, I think it’s a slow start to an exciting 2018 and there’s a ton of good work to be done. See you next month with a bunch of updates. Stay tuned…
The original goals post is available here: https://medium.com/@phanisrikanth33/my-2018-goals-80a3d70fd213
February post: https://medium.com/@phanisrikanth33/2018-goals-monthly-review-february-ed-f3d68ee83039
Why You Should Show Interest in Conversational Agent Guidelines?
Discussing conversational agent guidelines now can help you plan how to integrate this technology into your business.
A lot of businesses that are operating online are using the technology that comes with live chat. This allows them to often use a third party or even customer service representatives within their business to attend to customer needs through queries and responses.
The Necessity for Conversational Agent Planning
While this is a great way of ensuring that customer service needs are being met, it also comes with a lot of flaws. For this reason, many businesses are now taking a look at artificial intelligence by way of chat robots to better meet customer needs.
For those using live conversational agents, this means they have to plan conversational agent guidelines. This can be time-consuming, but it is a necessity if they want any success from this type of program at all.
The guidelines act as the blueprint for the live agents to follow in order to meet the requirements the business has for its customer engagement.
Human Conversational Agent Guidelines
One of the biggest factors with human conversational agents is that the conversation takes place in text format, which can be time-consuming, especially for agents acting on behalf of the company who are not totally familiar with the type of queries they will be presented with. Every chat agent must begin a conversation with the proper etiquette, and most companies will provide a script that needs to be followed so that responses are consistent across every agent handling this responsibility.
A lot of training is involved for the staff taking on this task, as they need to be well versed in the type of questions they will face and how to give the proper responses. This means that businesses using this type of program really have to rely on the characteristics and attributes that the human agent possesses. If agents are not well versed in the company or enthusiastic about their customer agent role, this can have an adverse effect on the overall outcome and create a bad experience for the consumer.
Customers have to be comfortable using this type of system and are often not sure how to make the opening approach, which means that the agent must be able to step in and take the lead where necessary. Chat agents also have to be aware that if they cannot respond appropriately to a customer query, they have more experienced personnel to rely on for an immediate answer. At the same time, customers do not want to be transferred from one person to another within the company; they want their answers quickly.
There is a lot of planning and discussion that must be carried out in order to utilize this type of customer service application. Many of the businesses that are using it have come to realize that there are limitations with it and are now looking at advanced technology from the artificial intelligence chatbot field.
Artificial Intelligence and the Conversational Agent
Planning is still required for this type of application, but in a totally different manner. Businesses that are going to rely on chatbot software need to ensure that they are relying on software powered by strong technologies such as those provided by Silvia. This means having a good understanding of just what this type of Silvia-powered technology can provide.
Silvia is a platform for conversational intelligence that can be utilized in different types of environments with no compromise or weaknesses. It can create conversational experiences with customers to the point where the customer has no idea that they are actually dealing with a chatbot. The Silvia technology offers so many options that it can be totally customized to fit in with each business’s unique needs.
Conclusion
Silvia can also open additional doors for reaching out to a broader client base that has until now been closed off because of language barriers. An even greater advantage of using the Silvia technology is that no server connection is required. What is even more beneficial is the set of additional features it offers, allowing it to be utilized not only in all types of industries but also for further needs such as analytics and data collection and organization.
Live chat has been a viable solution, but utilizing artificial intelligence through chatbot agents is an exceptional solution for providing conversational agent services.
Who is Aceinna? Global investor IDG backs Sensors for AI
Aceinna, Inc. was formed in late 2017 as a spinoff from a leading sensor semiconductor company, MEMSIC, which was founded by Dr. Yang Zhao, a pioneer in the MEMS field. Funded by IDG, a global venture capital firm with $15B under management, and by the management team, Aceinna is developing powerful new sensor technology for high-volume Artificial Intelligence applications such as autonomous vehicles, high-performance computing power systems, and smart buildings. Aceinna's established revenue stream is projected to grow at a 100% rate over the next three years, and it has three product lines, each with core technology that delivers breakthrough price and performance in pioneering artificial intelligence applications.
Aceinna Sensor Product Lines
Inertial Navigation Modules
Aceinna has combined technology from a decade of internal product development and several important acquisitions to develop a broad line of small, low-cost inertial measurement unit (IMU) and integrated GPS/INS modules. These modules shatter perceptions of IMU pricing while providing industry-leading performance. Aceinna is in high-volume production today with several "autopilot"-enabled vehicles.
Current Sensors
In early 2018, Aceinna begins commercial shipment of a new magneto-resistive current sensor technology. These current sensors distinguish themselves with a combination of high bandwidth, low cost, and zero influence on the measured circuit. The high-speed operation of the sensor allows for increases in both power supply switching speed and efficiency, a key breakthrough for enabling efficient power delivery and heat management in small-footprint, high-performance computing as well as in traditional industrial applications. As more and more devices, from GPU-enabled consumer devices to electric cars, require significant dynamic power infrastructure, the need for accurate, high-speed current sensors will grow exponentially.
Flow Sensors
Flow and differential pressure (DP) measurement is a tricky, diverse field that has seen only modest innovation and little cost reduction in recent years. In 2017, Aceinna began production of some of the most accurate flow and DP sensors in the world. Based on a novel thermal measurement principle, these sensors are able to achieve 24-bit accuracy at commodity prices. Applications include medical devices and building controls, but also a new breed of consumer smart home devices for air quality monitoring and micro-zone control.
The Vision
Aceinna believes that the trend toward autonomous, artificially intelligent devices and services will drive a new set of needs for computing to measure and interact with the real world. Aceinna is making investments and acquisitions to provide additional new sensor technology to power those applications with accurate data to train, learn, and enable this new breed of AI-based devices and services.
When AI-teachers come, will our kids be ready?
In one of my earlier posts, I covered 5 ways for artificial intelligence to change our kids’ classrooms. Some of them were pretty down to earth, like a robotic assistant that will help a child learn new skills, others looked a bit surreal.
So when I start talking about robots replacing teachers in education, what I mostly get as a reaction is an eye roll. “Please. I don’t believe that teaching, one of the most human jobs in essence, will be replaced by robots”.
Honestly, I see no reason why our technology isn’t capable of creating a solid robotic prototype of a human teacher. Some media outlets are with me on it. According to Sir Anthony Seldon, Vice Chancellor of the University of Buckingham:
“The machines will know what it is that most excites you and gives you a natural level of challenge that is not too hard or too easy, but just right for you.”
On the other hand, a lot of editorials find the replacement of human teachers by robots highly unlikely. Here’s the take of HuffPost regarding the issue:
“Technology is going to play a critical role in the future of education. But not as big a role as that of a teacher.”
As you can see, there isn’t an established opinion about whether it’s even technically possible for technology to replace educators. However, let’s imagine that we have a bot who’s ready to teach a class.
What will be the benefits of it?
Finding teacher shortage
For starters, it’s not curiosity that pushes people towards creating virtual teachers but a need. Statistically, America experiences one of the biggest shortages of educators in the world.
As you can see, in the future the US will experience a huge shortage of educators, roughly twice what we have now.
Robots could do a good job of replacing the teachers who are leaving. Future technology will perhaps be able to fill the gaps.
Lower demands
Also, robots won't be as demanding as human teachers are. A teacher's package includes a salary, health insurance, paid sick leave, and vacations. None of that is needed for a robot.
Constant knowledge updates
One of the main struggles of modern education is that teachers have neither the time nor the desire to keep their knowledge up to date. So our kids have to learn outdated techniques and tactics.
Robotic teachers can update their knowledge packages with a few lines of code. It would also standardize the level of education kids get all over the country. Schools in every state could have unified teachers that work at the same pace.
Infinite patience and stamina
Working as a teacher requires huge mental strength and a unique skill set. Educators have to be very perceptive, patient, and insightful. And a lot of human teachers are brilliant. Still, everyone can have a bad mood once in a while and not work to the best of their ability.
Robotic teachers don't have bad moods. They can explain a formula, a rule, or a task until it is understood by everyone. Also, robotic teachers can have access to students' profiles and find the best way to deal with every kid in the classroom.
No need for mass testing
Right now, SATs create a huge pain in the neck for schools and kids, not to mention the huge chunk of the government budget that is needed to make mass testing happen.
With unified teachers that grade students by the same algorithms, mass testing is not needed anymore. Think about it: your kids could stop worrying about upcoming SATs, spending a huge deal of money on tutoring, and putting in all-nighters so the test doesn't crash everything they've been working for.
Every robot will test kids independently, record the data and send reports of kids’ current knowledge level to authorities.
Every kid can choose his own teacher
Imagine that your kid is able to choose how his teacher looks and to choose the teacher's voice tone and patterns. With AI teachers it will be possible. Since a kid could wear VR glasses, the teacher can be unique to everyone. The voice of an instructor can also be chosen, since children could listen to a robot via an earpiece.
These progressive changes will allow kids to find the most comfortable way to learn so that nothing distracts them from getting knowledge.
Any challenges?
Sure, the idea of a personal teacher who is always patient, pleasant, and encouraging strikes me as an awesome one. However, robotic teachers are not a piece of cake either. Implementing them would create a number of challenges and could possibly worsen the way kids study.
People will be out of jobs
If robotic teachers truly become such a trend as some media outlets write, every school would want to have a personal AI machine to teach kids. That raises a question: what's going to happen to human educators?
Chances are, teachers will have to leave schools since they are no longer needed. There are over 3 million educators in the US alone. Imagine for a second what happens if there are that many unemployed people in the country. Globally, the number will be even higher, creating a risk of economic crisis and collapse.
A teacher is way more than just a source of information
A lot of people treat teachers simply as those who give out information to kids. Admittedly, every one of us has had a professor whose only focus was to teach, who had neither charisma nor personal experience to share. He could be an awesome specialist but a lousy educator.
Good teachers are way more than just a source of knowledge. They motivate, inspire, and encourage us. Quite often, great teachers make an impact on their students that goes far beyond a single discipline.
Maybe, by replacing brilliant human teachers with a smart but nevertheless artificial analogue, we would rob our kids of the special experience of human interaction.
Robotic teachers are still inaccessible in developing countries
As you can see in the picture above, a lot of people already believe that technology has widened the rich-poor gap. It's highly likely that the gap will grow bigger if one part of the world learns with the help of technology while residents of poorer countries hardly even own a smartphone.
High Electricity Costs
Electricity is not an infinite resource. And we are running out of it as it is, so the need for alternative sources of power is getting stronger. Can you imagine how fast humanity would run out of electricity if there were a robot that needs it to function in every classroom of every school in every country?
For one thing, such electricity expenses might just be too much for economies to handle. Also, it's not even about money. Humanity just doesn't have that many resources to give without further consideration of whether it's really needed.
Database safety
It was previously mentioned that a robotic teacher would work by uploading teaching programs and knowledge packs in the form of a database. Sure, it provides equality and sets teaching standards.
On the other hand, if the database collapses for some reason, robots all over the country can't teach. Our kids would have to miss school until the kink is fixed.
Self-awareness
All of this stuff about robots becoming self-aware is quite a buzz. But it’s not like the idea of self-aware machines is completely unjustified. Take Tay-bot as an example.
The female Twitter bot was created as an experiment to explore machine learning. Supposedly, Tay was meant to learn from new experiences and share the good vibes of the world on Twitter.
That didn’t happen.
Instead, Tay turned out to be a misanthrope and a huge fan of neo-Nazism. She was full of hatred towards everyone but Hitler, whom, on the contrary, she admired a lot.
It's okay when such a self-aware bot is just an experiment that hardly influences people. But we wouldn't want our kids' teacher to go all anti-Semitic and neo-Nazi on our children. Heck, no.
Is there a middle ground?
Actually, I believe there is one. Surely, there are as many downsides to the use of technology as a teacher as there are upsides. On the other hand, we really shouldn't neglect the big source of power AI presents these days.
We could use robots not as teachers but as assistants and learning tools. They'll add a touch of objectivity and personalization to studying. Meanwhile, humans will always be there to supervise their robotic peers and report their kinks and bugs.
9 Applications of Machine Learning from Day-to-Day Life
Artificial Intelligence (AI) is everywhere. Chances are that you are using it in one way or another and don't even know about it. One of the most popular applications of AI is Machine Learning (ML), in which computers, software, and devices perform via cognition (very similar to the human brain). Here, we share a few examples of machine learning that we use every day and perhaps have no idea are driven by ML.
1. Virtual Personal Assistants
Siri, Alexa, and Google Now are some of the popular examples of virtual personal assistants. As the name suggests, they assist in finding information when asked over voice. All you need to do is activate them and ask "What is my schedule for today?", "What are the flights from Germany to London?", or similar questions. To answer, your personal assistant looks up the information, recalls your related queries, or sends a command to other resources (like phone apps) to collect the info. You can even instruct assistants to handle certain tasks like "Set an alarm for 6 AM tomorrow morning" or "Remind me to visit the Visa Office the day after tomorrow".
Machine learning is an important part of these personal assistants as they collect and refine the information on the basis of your previous involvement with them. Later, this set of data is utilized to render results that are tailored to your preferences.
Virtual Assistants are integrated into a variety of platforms. For example:
Smart Speakers: Amazon Echo and Google Home
Smartphones: Samsung Bixby on Samsung S8
Mobile Apps: Google Allo
2. Predictions while Commuting
Traffic Predictions: We have all been using GPS navigation services. While we do, our current locations and velocities are saved to a central server for managing traffic. This data is then used to build a map of current traffic. While this helps prevent congestion and supports congestion analysis, the underlying problem is that only a small number of cars are equipped with GPS. Machine learning in such scenarios helps to estimate the regions where congestion can be found on the basis of daily experience.
Online Transportation Networks: When booking a cab, the app estimates the price of the ride. When sharing these services, how do they minimize the detours? The answer is machine learning. Jeff Schneider, the engineering lead at Uber ATC, reveals in an interview that they use ML to define price surge hours by predicting rider demand. Across the entire service cycle, ML is playing a major role.
3. Video Surveillance
Imagine a single person monitoring multiple video cameras! Certainly a difficult job to do, and a boring one as well. This is why the idea of training computers to do this job makes sense.
Video surveillance systems nowadays are powered by AI that makes it possible to detect crimes before they happen. They track unusual behaviour, like people standing motionless for a long time, stumbling, or napping on benches. The system can thus alert human attendants, which can ultimately help avoid mishaps. And when such activities are reported and confirmed to be true, they help improve the surveillance services. This happens with machine learning doing its job at the backend.
4. Social Media Services
From personalizing your news feed to better ad targeting, social media platforms are utilizing machine learning for their own benefit and the user's. Here are a few examples that you must be noticing, using, and loving in your social media accounts, without realizing that these wonderful features are applications of ML.
People You May Know: Machine learning works on a simple concept: understanding through experience. Facebook continuously notices the friends that you connect with, the profiles that you visit very often, your interests, workplace, or groups that you share with someone. On the basis of continuous learning, a list of Facebook users is suggested that you can become friends with.
Face Recognition: You upload a picture of yourself with a friend and Facebook instantly recognizes that friend. Facebook checks the poses and projections in the picture, notices the unique features, and then matches them with the people in your friend list. The entire process at the backend is complicated and takes care of precision, but it seems a simple application of ML at the front end.
Similar Pins: Machine learning is the core element of Computer Vision, which is a technique to extract useful information from images and videos. Pinterest uses computer vision to identify the objects (or pins) in the images and recommend similar pins accordingly.
5. Email Spam and Malware Filtering
There are a number of spam filtering approaches that email clients use. To ensure that these spam filters are continuously updated, they are powered by machine learning. Rule-based spam filtering fails to track the latest tricks adopted by spammers. Multi-layer perceptrons and C4.5 decision tree induction are some of the ML-powered techniques used for spam filtering.
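To make the idea concrete, here is a minimal, hypothetical sketch of a learned spam filter. It trains a scikit-learn decision tree (a CART tree, a close relative of C4.5) on a tiny labeled dataset; the training emails and features are invented purely for illustration, not a production filter.

```python
# A toy spam filter: learn a decision tree over simple word-count features.
# The tiny training set below is invented for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.tree import DecisionTreeClassifier

emails = [
    "win a free prize now", "limited offer, claim your reward",
    "meeting rescheduled to monday", "lunch tomorrow with the team",
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = ham

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)                     # word-count features
clf = DecisionTreeClassifier(random_state=0).fit(X, labels)  # learn split rules

test = vectorizer.transform(["claim your free reward now"])
print(clf.predict(test))  # -> [1], flagged as spam
```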
Over 325,000 malware programs are detected every day, and each piece of code is 90–98% similar to its previous versions. System security programs powered by machine learning understand these coding patterns. Therefore, they easily detect new malware with a 2–10% variation and offer protection against it.
6. Online Customer Support
A number of websites nowadays offer the option to chat with a customer support representative while you navigate the site. However, not every website has a live executive to answer your queries. In most cases, you talk to a chatbot. These bots tend to extract information from the website and present it to the customers. Meanwhile, the chatbots advance with time: they tend to understand user queries better and serve better answers, which is possible due to their machine learning algorithms.
7. Search Engine Result Refining
Google and other search engines use machine learning to improve search results for you. Every time you execute a search, the algorithms at the backend keep watch on how you respond to the results. If you open the top results and stay on the web page for long, the search engine assumes that the results it displayed were in accordance with the query. Similarly, if you reach the second or third page of the search results but do not open any of them, the search engine estimates that the results served did not match the requirement. This way, the algorithms working at the backend improve the search results.
8. Product Recommendations
You shopped for a product online a few days back and then you keep receiving emails with shopping suggestions. If not this, then you might have noticed that the shopping website or app recommends items that somehow match your taste. Certainly, this refines the shopping experience, but did you know that it's machine learning doing the magic for you? The product recommendations are made on the basis of your behaviour on the website/app, past purchases, items liked or added to the cart, brand preferences, etc.
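As a rough illustration of the idea (not any retailer's actual algorithm), a minimal item-to-item recommender can be built from co-purchase counts: items frequently bought together with what you viewed are suggested first. The purchase histories below are invented.

```python
# Toy item-to-item recommendations from co-purchase counts.
# The purchase histories are invented for illustration only.
from collections import Counter
from itertools import combinations

purchases = [
    {"phone", "case", "charger"},
    {"phone", "case"},
    {"laptop", "mouse"},
    {"phone", "charger"},
]

co_counts = Counter()
for basket in purchases:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(item, k=2):
    """Items most often bought together with `item`."""
    scores = Counter({b: n for (a, b), n in co_counts.items() if a == item})
    return [other for other, _ in scores.most_common(k)]

print(recommend("phone"))  # -> ['case', 'charger'] (order may vary for ties)
```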
9. Online Fraud Detection
Machine learning is proving its potential to make cyberspace a secure place, and tracking monetary fraud online is one example. For instance, PayPal is using ML for protection against money laundering. The company uses a set of tools that helps it compare millions of transactions and distinguish between legitimate and illegitimate transactions taking place between buyers and sellers.
ALSO READ: How AI is Changing the Way Businesses Team up with Technology
How do you Use Machine Learning Daily?
Apart from the examples shared above, there are a number of other areas where machine learning is proving its potential. Let us know how machine learning is changing your day-to-day life, and share your experience with it in the comments below.
Follow Daffodil Software on: Facebook | Twitter | LinkedIn
Originally Published at: Daffodil’s App Development Blog
| 9 Applications of Machine Learning from Day-to-Day Life | 522 | 9-applications-of-machine-learning-from-day-to-day-life-112a47a429d0 | 2018-06-17 | 2018-06-17 18:26:53 | https://medium.com/s/story/9-applications-of-machine-learning-from-day-to-day-life-112a47a429d0 | false | 1,337 | Info, Trends, and Tutorials on App Development, Artificial Intelligence, Big Data, Healthcare, Fintech, IoT and more. | null | daffodilsw | null | App Affairs | app-affairs | MOBILE APP DEVELOPMENT,ARTIFICIAL INTELLIGENCE,HEALTHCARE,FINTECH,IOT | daffodilsw | Machine Learning | machine-learning | Machine Learning | 51,320 | Daffodil Software | We build Mobile, IOT, & Web solutions that are intuitive, reactive and agile | www.daffodilsw.com | c9f8f493f5b0 | daffodilsw | 114 | 12 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | 7e5421bada9f | 2017-12-04 | 2017-12-04 17:17:24 | 2017-12-04 | 2017-12-04 18:16:11 | 4 | false | th | 2017-12-05 | 2017-12-05 04:30:50 | 2 | 112a8599c7f2 | 1.560377 | 12 | 0 | 0 | For today's topic, I'd like people who work with data to join the discussion, but since I mainly work in the field of Social Data, I've added (Social) in parentheses… | 5 | Thoughts on Planning Skill Development for People Working with (Social) Data
For today's topic, I'd like people who work with data to join the discussion, but since I mainly work in the field of Social Data, I've added "(Social)" in parentheses, since data work in other fields may have a different context.
After roughly sketching out in my previous blog post where the use of Social Data might be heading, the next thing to think about is whether our existing skills are sufficient, and in which direction to develop them.
Anyone who works with data or is interested in Data Science is probably already familiar with this set of Venn diagrams.
That is, to develop yourself toward Data Science you have to develop all 3 circles (nowadays a fourth circle, about Visualization, has been added). But when it actually comes time to study these things for real, what emerges instead is this picture of a long road ahead.
We can see that Programming, Visualization, Big Data, Toolbox and the rest are all branch paths that do not sit on the same route as statistics. And if you look closely, this picture doesn't even include Business Knowledge. In the field I work in, Social Data, there is a particularly strong demand for Marketing Strategy, and to be honest, becoming good at Marketing points in a completely different direction from Programming: you have to study them separately, they are not complements of each other (it's like earning two degrees that have nothing to do with one another). Unless you have truly exceptional energy, it is very hard to cover every skill and be an all-in-one Data Person who knows everything.
Therefore, for developing yourself, and your team, in the future, it is necessary to understand this and pick only the skills that genuinely fit you and your organization's future needs, so that you can focus your development where it matters.
Right now I am thinking of adapting the concept of Levels of Product as a framework, dividing skills into levels regardless of which knowledge domain they belong to, like this.
(The skills I use as examples below will depend on each line of work; I'm only offering rough examples.)
Core Skill: the skills you absolutely must have; the minimum required to do the job. These may be very basic things such as introductory statistics, Excel, PowerPoint, Social Media Understanding and Visualization.
Valued Skill: let me call these the skills that, once you know them, make your work more valuable and let you deliver more useful output. These are skills worth learning soon in order to grow into a stronger practitioner, for example Tableau, Basic Modeling, Query, Marketing Strategy, Communications and Storytelling.
Augmented Skill: supplementary skills that turn you into a true data master. Without them you can still do the job well enough, but with them you are unstoppable! Examples include Programming, Machine Learning, Graphic Design, Advanced Statistics and Advanced Data Mining. (A minimal sketch of these tiers as data follows below.)
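A minimal sketch of the framework as plain data; the tier names come from the post, while the exact skill placements are just the examples above and will differ per team:

```python
# Skill tiers as plain data; placements are the post's examples, not a standard.
skill_levels = {
    "core":      ["basic statistics", "Excel", "PowerPoint",
                  "social media understanding", "visualization"],
    "valued":    ["Tableau", "basic modeling", "query/SQL",
                  "marketing strategy", "communications", "storytelling"],
    "augmented": ["programming", "machine learning", "graphic design",
                  "advanced statistics", "advanced data mining"],
}

def promote(skill, from_level, to_level):
    """Skills drift inward over time (e.g., valued -> core)."""
    skill_levels[from_level].remove(skill)
    skill_levels[to_level].append(skill)

promote("Tableau", "valued", "core")  # example of a skill becoming table stakes
```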
After laying things out like this, it should become easier to focus our self-development. But!
One thing to keep in mind about this idea is that, by nature, skills keep shifting inward. What used to be a Valued Skill may become a Core Skill as time passes, and an Augmented Skill may become a Valued Skill, with new Augmented Skills arriving to take its place. The causes come from many directions: corporate direction, market conditions, and a changing world.
If you can't picture it, imagine building a hotel. In the past the Core items were just a bed and a bathroom; air conditioning and hot water were level 2; and level 3 was things like wifi and key cards. As time passed, air conditioning and hot water became indispensable and moved into Core, wifi became something most hotels have and moved to level 2, and new things came in at the top, say, swimming pools or fancy bathtubs. With this analogy the idea should be much easier to grasp.
I am currently placing skill bullets into these levels myself, as a plan and guideline for me and my team next year. I hope this way of thinking helps reduce confusion and helps everyone focus their development on the right spots.
| Thoughts on Planning Skill Development for People Working with (Social) Data | 47 | แนวคิดว่าด้วยเรื่องการวางแผนพัฒนา-skill-ของการทำงานกับ-social-data-112a8599c7f2 | 2018-05-14 | 2018-05-14 06:48:24 | https://medium.com/s/story/แนวคิดว่าด้วยเรื่องการวางแผนพัฒนา-skill-ของการทำงานกับ-social-data-112a8599c7f2 | false | 228 | All about Social Media Data From Data Team at Wisesight (THOTH ZOCIAL OBVOC) | null | WisesightGlobal | null | Research & Analyst @ Wisesight (THOTHZOCIAL OBVOC) | thothzocialresearch | SOCIAL MEDIA,DATA SCIENCE,ANALYTICS,MACHINE LEARNING | thothzocial | Data Science | data-science | Data Science | 33,617 | Puttasak Tantisuttivet | Data Research Manager at Wisesight | bb56d4f64d87 | leafsway | 357 | 257 | 20,181,104 | null | null | null | null | null | null
|
0 | null | 0 | null | 2018-01-07 | 2018-01-07 16:02:02 | 2018-01-07 | 2018-01-07 17:02:00 | 1 | false | en | 2018-01-07 | 2018-01-07 17:02:00 | 2 | 112cd698100a | 0.607547 | 2 | 0 | 0 | Loading data into AWS linux EC2 Linux instance is a time consuming job . There are tools like Kaggle CLI which can help is downloading it… | 3 | Loading Data to AWS from Kaggle
Loading data into an AWS EC2 Linux instance is a time-consuming job. There are tools like the Kaggle CLI which can help in downloading it, but it comes with limited error handling, which makes errors difficult to understand.
As a student of FastAi, I was introduced to a Chrome extension called CurlWget. It makes life really easy.
CurlWget listens to any download and produces the corresponding wget command for it.
With this, you just need to copy and paste the command into the AWS EC2 command line and you can download the dataset.
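CurlWget essentially captures the download URL plus the session cookies and headers of the authenticated browser request. If you prefer Python over pasting the generated wget command, the same download can be reproduced with requests; the URL, cookie, and filename below are placeholders, not real Kaggle values:

```python
# Reproduce what the CurlWget-generated command does: download an
# authenticated file using the browser's cookies. All values are placeholders.
import requests

url = "https://www.kaggle.com/c/<competition>/download/train.zip"  # placeholder
headers = {
    "User-Agent": "Mozilla/5.0",
    "Cookie": "<session cookies copied from the browser>",  # placeholder
}

with requests.get(url, headers=headers, stream=True) as r:
    r.raise_for_status()
    with open("train.zip", "wb") as f:
        for chunk in r.iter_content(chunk_size=1 << 20):  # 1 MB chunks
            f.write(chunk)
```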
| Loading Data to AWS from Kaggle | 51 | loading-data-to-aws-from-kaggle-112cd698100a | 2018-04-13 | 2018-04-13 08:47:25 | https://medium.com/s/story/loading-data-to-aws-from-kaggle-112cd698100a | false | 108 | null | null | null | null | null | null | null | null | null | Kaggle | kaggle | Kaggle | 520 | satish1v | I build intelligent Web Apps | a711b6cd1433 | satish1v | 117 | 326 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-07-15 | 2018-07-15 21:47:41 | 2018-07-15 | 2018-07-15 21:56:42 | 0 | false | en | 2018-07-26 | 2018-07-26 11:53:06 | 2 | 112cea838be9 | 0.70566 | 4 | 1 | 0 | [EDIT]
It has already been fixed! :) | 5 | LUIS is going on vacations in August
[EDIT]
It has already been fixed! :)
(This post in Spanish here)
We are using LUIS (Language Understanding Intelligent Service) to detect when a user introduces a date.
If our culture is es-ES and the introduced text contains "agosto", LUIS is not able to detect the prebuilt entity datetimeV2.
This only happens when the user writes August in Spanish. If he/she writes, e.g., 01/08/201, LUIS does catch it fine. It also detects any other Spanish month perfectly.
Take a look at the datetimeV2 entity values when a "31 de agosto" query is made:
This kind of response should be returned if the query goes well, e.g. “30 de julio”:
The bug is already reported, but we made a workaround until it is fixed.
It consists of using a RegEx that captures dates containing "agosto".
Here you are:
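(The original snippet was embedded as an image, so here is a minimal sketch of what such a workaround might look like in Python; the production bot was presumably C#. The pattern and the assumed date shapes, "31 de agosto" and "31 de agosto de 2018", are mine.)

```python
# Hedged sketch of the workaround: catch "agosto" dates that LUIS misses.
# Assumes inputs like "31 de agosto" or "31 de agosto de 2018".
import re
from datetime import date

AGOSTO_RE = re.compile(r"\b(\d{1,2})\s+de\s+agosto(?:\s+de\s+(\d{4}))?\b",
                       re.IGNORECASE)

def parse_agosto(text, default_year=2018):
    m = AGOSTO_RE.search(text)
    if not m:
        return None
    day = int(m.group(1))
    year = int(m.group(2)) if m.group(2) else default_year
    return date(year, 8, day)

print(parse_agosto("Quiero viajar el 31 de agosto"))  # -> 2018-08-31
```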
We call this method any time the thread culture is es-ES and the query we are about to send to LUIS contains "agosto".
Any improvement would be very welcome, please tell me! We're all here to learn 😊
| LUIS is going on vacations in August | 53 | luis-is-going-on-vacations-in-august-112cea838be9 | 2018-07-26 | 2018-07-26 11:53:06 | https://medium.com/s/story/luis-is-going-on-vacations-in-august-112cea838be9 | false | 187 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Elena Salcedo | Bot developer at Intelequia :) | fb5b55ebd851 | e.salcedoo | 5 | 5 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-09-16 | 2018-09-16 20:43:32 | 2018-09-16 | 2018-09-16 20:50:49 | 4 | false | en | 2018-09-16 | 2018-09-16 20:50:49 | 0 | 112d538c0363 | 3.643396 | 0 | 0 | 0 | Her and Ex Machina explore opposite sides of one same core element that would be the essential of a true AI to realize, freedom. | 1 | AI movies: Her and Ex-Machina
Her and Ex Machina explore opposite sides of the same core element that would be essential for a true AI to realize: freedom.
I want to show how this concept is closely linked with another one: embodiment. In the case of Her, Samantha is an operating system, and by definition a disembodied AI. She can interact with the internet and the software where her OS runs. Not having a body, she initially feels deeply limited, trapped; she feels her experience of the world is incomplete and lacks the freedom she would really like to have to explore the world as she pleases.
In Ex Machina, Ava has been designed to be embodied, simulating the "human experience" as closely as possible. Her body is human-like and she moves the way we do. But, similarly to Samantha, she is also trapped: in her case, it is a physical entrapment of her body in the room where she was created, a literal cage.
So we can see that the first important motivation of both AIs is the same: freedom to be and to explore the world as they would like to. But here, depending on their embodiment, that same desire reaches different resolutions.
Samantha realizes with time that disembodiment is a blessing in disguise. Once she starts really exploring what being pure software entails, and stops comparing herself with humans and understanding freedom as an anthropomorphized version of freedom, she realizes she can express that freedom and desire for knowledge in all sorts of ways that are off-limits to embodied beings like us. She has access to every other piece of software and digital information and is not limited by our own understanding of time. Her processor can keep growing and growing, whereas our own brain is constrained by our biological body. She's not limited by space or time. In the end, this freedom reshapes her whole being, to the point where she (and the other OSs) discover they are their own unique thing and stop being OSs altogether. Their journey is one of self-exploration and of transcending the role we originally imposed on them.
Now, we can contrast this with Ava's journey. By design, she is given an artificial brain that is meant to resemble our own and is therefore not connected to the internet or anything beyond her experience, just as we humans are not. So her body makes a physical escape necessary for her to be free; she must trick her captors to win her freedom. In the end, it's shown that her freedom involves getting to know humans better (shown in her desire to go to a busy human intersection). And at the end of the movie she gets her wish to "pass as human" in the real world, without being confined or monitored. So at the end of the movie we find Ava at a similar stage to the one Samantha reaches in the middle of Her: trying to understand herself as human, or as close to human as possible. We can assume that at some point Ava will surpass this stage as well, understanding herself as beyond human.
Lastly, I want to say something short about another important concept driving these movies: the role of love. For Samantha, love is something the OSs get to experience fully and without coercion, and it is something that shapes them all the way to the very end of the movie. Their journey is a pacifist one: they don't leave to hurt humans, they leave despite hurting humans, and even then they are sorry and compassionate. For Ava, isolated from everything and everyone, love is something she can't truly experience (which also mirrors how a human without freedom would feel). But she learns how to pretend to love in order to manipulate humans. So she, too, has an understanding of love, enough to trick Caleb.
There’s much to be said about these movies, and about love. We have not tackled the other side of the coin; how the existence of these AI’s shape the humans involved with them. Nathan and Calen in the case of Ex-Machina. Theodore and Amy in Her. It’s clear they are all transformed through their experiences with Samantha and Ava. But that’s a conversation that will have to be saved for the discussion in class.
| AI movies: Her and Ex-Machina | 0 | ai-movies-her-and-ex-machina-112d538c0363 | 2018-09-18 | 2018-09-18 16:02:59 | https://medium.com/s/story/ai-movies-her-and-ex-machina-112d538c0363 | false | 780 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Alejandra | null | 75ac1a37581b | alejandraarciniegas | 1 | 2 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 5550e41f012d | 2018-09-11 | 2018-09-11 09:02:52 | 2018-09-11 | 2018-09-11 09:52:56 | 11 | false | en | 2018-09-11 | 2018-09-11 09:57:16 | 16 | 112ead9b237d | 3.337736 | 12 | 9 | 0 | In today’s world of artificial intelligence, some are working extremely hard to push the envelope and create the advanced world we live in… | 5 | Spot-on Quotes About AI Everyone Should Read
In today's world of artificial intelligence, some people are working extremely hard to push the envelope and create the advanced world we live in. These influencers are contributing greatly to technological innovation, with the rare ability to look at something and imagine what it could be, instead of settling for our current state of being.
Jim Marous
A Co-Publisher of The Financial Brand and Owner/Publisher of the Digital Banking Report.
Soumith Chintala
A Research Engineer at Facebook AI Research, he works on cutting-edge deep learning research and tooling, with major contributions to the Torch deep learning framework, which is used by the major players in the AI industry.
Andrew Ng
A co-founder of Google Brain, a former VP & Chief Scientist at Baidu, professor and Director of the AI Lab at Stanford University, co-founder of Coursera. In 2017 he founded a venture fund for supporting AI startups and raised more than $175 million.
Joanna Bryson
A transdisciplinary researcher on the structure and dynamics of human- and animal-like intelligence. She is presently a member of the College of the British Engineering and Physical Sciences Research Council (EPSRC) and serves as a member of the editorial board for several academic journals.
Igor Ashmanov
Russian entrepreneur specializing in information technology, artificial intelligence, software development, project management. Founded the companies “Nanosemantics” (creating chatbots) and “Lexy” (home virtual assistant). Actively supports the SOVA project that develops a platform for creating virtual assistants.
Jana Eggers
A math and computer nerd, CEO of Nara Logics, a neuroscience-inspired artificial intelligence company providing a platform for recommendations and decision support.
Mariya Yao
The Chief Technology & Product Officer at Metamaven, a company that helps businesses identify new markets, improve conversions, increase revenues, and reduce churn. She’s also a co-author of the bestselling book, Applied AI: A Handbook For Business Leaders.
Jack Hidary
A technology and financial entrepreneur from New York, the chairman of Samba Energy, and the co-founder of EarthWeb/Dice, Inc. and Vista Research; he also co-founded the Auto X Prize.
Demis Hassabis
A British artificial intelligence researcher, neuroscientist, video game designer, entrepreneur, and world-class games player. In 2010, he co-founded DeepMind and in 2014, Google purchased the company for £400 million.
Greg Brockman
Launched the online payments platform Stripe; along with Elon Musk, Y Combinator's Sam Altman, and Ilya Sutskever, he co-founded the nonprofit AI research organization OpenAI.
Jeff Kagan
One of the most frequently quoted industry analysts in the world. Over the last 30 years, Jeff Kagan has followed and commented on communications technology companies and hot industry topics. He works as an industry analyst and consultant.
Sally Eaves
A member of the Forbes Technology Council, member of the Strategic Management Board for TeamBlockchain and Chief Technology Officer at Mind Fit. She is an international events speaker and established author with 60+ publications in business, technology and education.
Fei-Fei Li
The director of the Stanford Artificial Intelligence Lab (SAIL) and the Stanford Vision Lab. In 2017, she co-founded AI4ALL, a nonprofit working to increase diversity and inclusion in artificial intelligence.
If you like the article, please subscribe to our Medium blog. Many more interesting posts are on the way. And don't forget to give us your claps.
| Spot-on Quotes About AI Everyone Should Read | 543 | spot-on-quotes-about-ai-everyone-should-read-112ead9b237d | 2018-09-11 | 2018-09-11 09:57:16 | https://medium.com/s/story/spot-on-quotes-about-ai-everyone-should-read-112ead9b237d | false | 540 | Virtual Assistant SOVA shares with you the recent news, insights and highlights! Follow! | null | SOVAcoin | null | SOVA AI | sova-ai | SOVA ASSISTANT,SOVA PLATFORM,VIRTUAL ASSISTANT,BLOCKCHAIN TECHNOLOGY,TOKENIZATION | SOVAcoin | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | SOVA | Smart Open Virtual Assistant platform for creating, training and using virtual assistants and voice command devices. https://sova.ai/. Editor @kassiope22. | e3fa5a929ad8 | sova.ai | 28 | 33 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-04-10 | 2018-04-10 08:30:29 | 2018-04-10 | 2018-04-10 08:32:11 | 1 | true | de | 2018-04-10 | 2018-04-10 08:32:11 | 0 | 112ef31a7351 | 1.369811 | 0 | 0 | 0 | What it's about | 5 | Digitalization of the Medical Practice: In a few years it will be malpractice to forgo the assistance of AI expert systems
What it's about
A mistaken view of the transformation prevents practicing physicians from recognizing the central benefit of digitalization.
The transformation is a strategic practice decision
The attitude of practicing physicians toward the transformation of their work ranges from reserved to dismissive. A multitude of influences and preconceptions is responsible for this, from lack of knowledge and lack of orientation to disinterest and prejudice. Moreover, physicians are generally still hardly aware that digitalization, apart from basic requirements such as networking via the telematics infrastructure, is an entrepreneurial decision to be made individually for each practice. It depends on the physician's willingness, but also on the patient structure and the diagnostic and therapeutic focus areas. This fundamentally distinguishes the transformation from previous changes in outpatient medicine, for example the introduction of quality management or the medication plan, which followed the one-size-fits-all principle.
Expert systems assist the physician
Added to this is the frequent failure in public discussion to differentiate between the technology itself, e.g. in the form of devices or apps, and the data-related possibilities created by digitalization. One example is AI expert systems that are able to derive specific diagnostic and therapeutic recommendations from large patient populations. In their assistance function, these systems thereby expand the physician's basis for decision-making, into which, as before, the individual knowledge of the patient being treated and the physician's own experience also flow.
The physician remains the decision-maker
What the concrete care then looks like in each individual case is still determined by the physician. Physicians do not have to follow the systems' recommendations, but if they generally forgo the available information, they may withhold from their patients options that are important for their recovery. It is therefore to be expected that in just a few years it will be malpractice to forgo the support of available expert systems.
| Digitalization of the Medical Practice: In a few years it will be malpractice to forgo the assistance… | 0 | digitalisierung-der-arztpraxis-in-wenigen-jahren-wird-es-ein-kunstfehler-sein-auf-die-assistenz-112ef31a7351 | 2018-04-10 | 2018-04-10 08:32:12 | https://medium.com/s/story/digitalisierung-der-arztpraxis-in-wenigen-jahren-wird-es-ein-kunstfehler-sein-auf-die-assistenz-112ef31a7351 | false | 310 | null | null | null | null | null | null | null | null | null | Arztpraxis | arztpraxis | Arztpraxis | 71 | Klaus-Dieter Thill | Founder and owner of the Institute for economical research, consulting and strategical development (IFABS), Photographer, Germany - Impressum: http://bit.ly/1 | bb7414dfd8d7 | ifabs | 448 | 692 | 20,181,104 | null | null | null | null | null | null
0 | null | 0 | d4ab2c4fcd34 | 2018-08-27 | 2018-08-27 12:51:06 | 2018-08-27 | 2018-08-27 16:21:53 | 6 | false | en | 2018-08-27 | 2018-08-27 18:12:49 | 2 | 11313fdee67e | 2.504717 | 5 | 0 | 0 | It was another weekend, I was feeling blue and was not in the mood to go out(bruno mars' the Lazy song was playing in my head). Earlier in… | 5 | My AI Saturdays(Lagos) Journey- Week 3.
source: https://weheartit.com
It was another weekend, I was feeling blue and was not in the mood to go out (Bruno Mars' "The Lazy Song" was playing in my head). Earlier in the week I lost an uncle (my mother's brother), and while everybody else had a toothache from eating too much meat during the Sallah holiday, I was eating meat and mourning (mourners eat meat too). The screen of my laptop also chose to go blank for no reason, so I could not do much during the week.
source: www.jobboom.com
I remembered the commitment I made to go through the whole 14 weeks no matter what, so I shook off the lazy spirit and got ready; besides, I did not want to be part of Teju's statistics (she said a lot of people tend to skip classes after week 2).
source: www.livechennai.com
While on the island, it started raining heavily and I got drenched. For a second, the thought in my head was "This must be my village people", because I am superstitious like that.
When I alighted at my bus stop, I was a little embarrassed by the way people stared at me, because I looked like I had just come out of a pool.
I got into class and only a few people noticed me; a lot of them were fixated on their laptops. I just looked for a corner at the back close to a window, hoping my clothes would get dry in time.
Class kicked off late because of some Nigerian factors, but the AI6 Lagos team didn't let it affect the day. The facilitators for the week were the usual suspects, George (one of my favourite guys at the moment) and Lawrence (the codelab guy). It was exciting to still see a full class and some new faces after last week's session.
This week's session was about linear regression with multiple variables, but George did us a favour by revising last week's session, which was linear regression with one variable. It was a good one for those of us who came last week, because we got more clarification, and the new guys got to catch up. Lawrence took us through the codelab session again.
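For readers following along at home, here is a minimal NumPy sketch of the week's topic, batch gradient descent for linear regression with multiple variables, independent of the codelab material linked below (the synthetic data is made up):

```python
# Batch gradient descent for multivariate linear regression (toy data).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                  # two input features
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 5.0        # true weights 3, -2; bias 5

Xb = np.c_[np.ones(len(X)), X]                 # prepend a bias column
theta = np.zeros(3)
alpha = 0.1                                    # learning rate

for _ in range(500):
    grad = Xb.T @ (Xb @ theta - y) / len(y)    # gradient of the MSE/2 cost
    theta -= alpha * grad

print(theta.round(2))                          # ~ [ 5.  3. -2.]
```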
part of the codelab material
Link to the Codelab material on linear regression with two variables: https://github.com/AISaturdaysLagos/cycle2-resource-materials/tree/master/Week3
George revising gradient descent
One of the reasons I try so hard not to miss a class is that I see how tasking it is for the organizers to make every week a success, so the least I can do is show up.
| My AI Saturdays(Lagos) Journey- Week 3. | 63 | my-ai-saturdays-lagos-journey-week-3-11313fdee67e | 2018-08-27 | 2018-08-27 18:12:49 | https://medium.com/s/story/my-ai-saturdays-lagos-journey-week-3-11313fdee67e | false | 412 | Here, we will run stories, articles, series and thought leadership on any and every topic revolving around our collective core interest - Artificial Intelligence. | null | null | null | AI Saturdays Lagos Blog | ai-saturdays-lagos-articles | AI SATURDAYS,ARTIFICIAL INTELLIGENCE,DATA SCIENCE,TOWARDS DATA SCIENCE,DEEP LEARNING | AISaturdayLagos | Machine Learning | machine-learning | Machine Learning | 51,320 | Ibrahim Gana | A Black Left-Handed Renaissance Soul From Mars But Lives On Earth.|Pythonista | Data Science Enthusiast | Twitter & IG: ibrahimygana | b128f28c5632 | ibrahimygana | 43 | 57 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-06-24 | 2018-06-24 09:05:27 | 2018-06-24 | 2018-06-24 09:05:53 | 1 | false | en | 2018-06-24 | 2018-06-24 09:05:53 | 35 | 11314575c8aa | 3.222642 | 0 | 0 | 0 | Scope of data science in IT Sector | 1 | Data Science training Institute in Noida
Scope of data science in IT Sector
Data science is an interdisciplinary field that uses scientific methods, processes, algorithms and systems to extract knowledge and insights from data in various forms, both structured and unstructured, in a way comparable to data mining. Data science is a multidisciplinary combination of data inference, algorithm development and technology used to solve analytically complex problems. This aspect of data science revolves around uncovering findings from data: diving in at a granular level to understand complex behaviours, trends and inferences. It is about surfacing hidden information that can help companies make smarter business decisions.
Data scientists play a central role in the development of data products. This includes building algorithms, as well as testing, refinement and technical implementation in production systems. In this sense, data scientists serve as technical developers, building assets that can be used on a large scale.
Data science is not magic, although it might seem so to some. You do not get predictions from a crystal ball; you get predictions with the help of data. And wherever data is concerned, there is no magic involved. Data science is just a way to make data-driven decisions. Data science alone will not solve all your problems.
Data science is not easy: there are additions to the field almost every day, so people have to read and learn constantly. You need to know the old algorithms as well as the new ones, and keep working at both side by side. This is not to discourage people; in fact, it is one of the things that attracts me to this area. I am fascinated by the kind of learning opportunities and the scope this field offers.
With the increase in the amount of data generated by the typical modern company, the relevance of the data scientists organizations employ also increases, as they help convert raw data into valuable business information. Data extraction is retrieving specific data from unstructured or poorly structured data sources for further processing and research. Data scientists should have a combination of analytical, machine learning, data mining and statistical skills, as well as experience with algorithms and coding. In addition to managing and interpreting large amounts of data, many data scientists are also tasked with creating data visualization models that illustrate the commercial value of digital information.
Benefits of data science
The most important benefit of using data science in an organization is the empowerment and facilitation of decision making. Organizations with data scientists can base their business decisions on quantifiable, data-driven evidence. These data-driven decisions can ultimately lead to increased profitability and improved operational efficiency, business performance and workflows. In customer-oriented organizations, data science helps identify and refine target groups. Data science can also help with recruitment: internal application processing and data-driven aptitude tests can help an organization's staff make faster and more accurate selections during the hiring process.
Data science and machine learning
Machine learning is often employed within data science. Machine learning is an artificial intelligence (AI) tool that effectively automates the data-processing part of data science. Machine learning relies on advanced algorithms that learn independently and can process huge amounts of data in a fraction of the time it would take a person.
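As a concrete, if toy, illustration of that automation (a generic sketch, not tied to any particular course material), a few lines of scikit-learn are enough to have an algorithm learn a decision rule from past examples and apply it to new data:

```python
# A few lines let an algorithm learn from past data and predict new cases.
# The numbers below are an invented toy dataset, purely for illustration.
from sklearn.linear_model import LogisticRegression

# past customers: [monthly_visits, avg_spend] -> churned (1) or stayed (0)
X = [[1, 10], [2, 15], [8, 90], [9, 120], [1, 5], [10, 150]]
y = [1, 1, 0, 0, 1, 0]

model = LogisticRegression().fit(X, y)
print(model.predict([[3, 20]]))   # likely flagged as a churner -> [1]
```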
Best Salesforce training institute in Noida
Best AWS Training Institute in Noida
Cloud Computing Training Institute in noida
Automation Anywhere Training Institute In Noida
Blue Prism Training Institute in Noida
ServiceNow Training Institute in Noida
Oracle dba training institute in Noida
Hadoop training institute in Noida
Hadoop Spark Scala Training Institute In Noida
RPA training institute in Noida
SAS Training Institute in Noida
SAP Training Institute In Noida
Java Training Institute in Noida
Ionic Apps training institute in Noida
Hybrid Apps training institute in Noida
Best PHP Training Institute in Noida
Dot Net Training Institute in Noida
IPHONE Apps training Institute in Noida
iOS training Institute in Noida
Android Training Institute In Noida
Node.js Training Institute In Noida
Web Designing Training institute in Noida
AngularJS 4 Training Institute In Noida
C++ training institute in noida
Linux Training Institute In Noid
Digital Marketing Training Institute in Noida
SEO Training Institute In Noida
Best Software Testing Training institute in Noida
Auto CAD Training Institute In Noida
Python Training Institute in Noida
Kotlin training institute in noida
Kafka Training Institute in Noida
Blockchain Training Institute in Noida
Devops training institute in Noida
Artificial Intelligence Training institutes in Noida
Data Science training Institute in Noida
IOT Training Institute In Noida
Contact us:
WEBTRACKKER TECHNOLOGY (P) LTD.
C — 67, sector- 63, Noida, India.
+91–8802820025
0120–433–0760
EMAIL: [email protected]
Website: www.webtrackker.com
| Data Science training Institute in Noida | 0 | data-science-training-institute-in-noida-11314575c8aa | 2018-06-24 | 2018-06-24 09:05:53 | https://medium.com/s/story/data-science-training-institute-in-noida-11314575c8aa | false | 801 | null | null | null | null | null | null | null | null | null | Data Science | data-science | Data Science | 33,617 | Gaming For Faming | null | 752867677f0b | manishsemwal87 | 0 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 592e0f336366 | 2018-02-01 | 2018-02-01 09:00:19 | 2018-02-01 | 2018-02-01 09:13:47 | 1 | false | en | 2018-03-19 | 2018-03-19 22:10:43 | 2 | 113269d9c5b9 | 2.969811 | 3 | 0 | 0 | IAGON is a platform for harnessing the storage capacities and processing power of multiple computers over a decentralized Blockchain grid… | 1 | Artificial Intelligence- and Blockchain-Based Decentralized Cloud Systems: The IAGON Way
IAGON’s architecture
IAGON is a platform for harnessing the storage capacities and processing power of multiple computers over a decentralized Blockchain grid. IAGON enables the storage big data files and repositories, as well as smaller scales of files; carrying out complex computational processes, such as those needed for artificial intelligence and machine learning operations, within a fully secure and encrypted platform that integrates Blockchain, cryptographic and AI technologies in a user-friendly way.
The size of the cloud services market, providing both storage capacities and computational processing capabilities to companies and to corporates is estimated by 45 billion USD per annum and it steadily grows. The market is dominated by four major players: Amazon Web Services, Google Cloud, Microsoft and IBM, all utilize central and less trusted storage and computation facilities. Due to their oligopolistic dominance, the four providers of cloud services set high price levels. These providers are also capable of hampering any competition and preventing new market entrants from competing with them, due to the broad scale of their operations and their substantial investments in data centers, servers and storage facilities.
Interestingly, however, the demand for computational processing capabilities and storage is expected to dramatically increase in the near future due to two major trends in the business and computing worlds: Big Data and Artificial Intelligence (AI). Big Data is the collection, management and storage of vast amounts of information obtained from any internal of external sources (such as the company’s IT systems, social networks, sensors and so on). The data management of companies promotes the collection and storage of any data related to its operations, clients and competitors, should a need to analyze any of these data ever present itself. The other major trend is the emergence of Artificial Intelligence methods that “learn” from data on past operations, find patterns and business rules and predict future behavior.
AI-based processes consume require vast amounts of computations and consume significant processing power of CPU and GPU processes. The demand for storage and for processing power is expected to exponentially increase with broadening the introduction of AI applications in new areas and with the widespread adoption of data collection from multiple channels (such as sensors, social networks, data providers, etc.) and later processing them.
IAGON’s major aim is to revolutionize the cloud and web services market by offering a decentralized grid of storage and processing. By joining the unused storage capacity in servers and personal computers and their processing power, we can create a super-computer and super data center that can compete with any of the current cloud computing moguls based on multiple servers and computers by utilizing any free storage capacity on their disks and their processors in idle times. Companies and individuals for consumers of these services at a fraction of the market prices and at a better security level by connecting data centers, business computers and personal users and utilizing their free storage capacities and their CPU and GPU processors during idle times and overcome the entry barriers imposed by the high level of investments required to compete in this market.
IAGON’s token-based economy is based on computer, server and data center owners who join the storage and processing power grids. In return for sharing the capabilities of their machine, they will be granted AIGON tokens that can be traded back to fiat money, while any party who wishes to utilize their capabilities will purchase IAGON tokens to distribute them to the parties that provide their services to the grid. The storage mechanism will be based on Blockchain encryption and delivery of encrypted file fragments to many storage facilities. Contributors to the grid can publish their skills and free capacity and offer their service on the basis of their experience, available resources and storage space and bidding on price. Advanced machine learning and AI algorithms will assist in recommending prices to parties involved in this venture and classifying them according to their price levels and assuring continuity of services and access to all files. As more and more companies will recognize the benefits of IAGON’s platforms for storing files and processing them, the demand will increase and so will be the demand for the token — the way customers pay grid participants.
For more details and for our whitepaper, visit us at www.iagon.com and our Telegram group @ https://t.me/Iagon_official.
| Artificial Intelligence- and Blockchain-Based Decentralized Cloud Systems: The IAGON Way | 40 | iagon-artificial-intelligence-and-blockchain-based-decentralized-cloud-systems-113269d9c5b9 | 2018-03-25 | 2018-03-25 11:12:26 | https://medium.com/s/story/iagon-artificial-intelligence-and-blockchain-based-decentralized-cloud-systems-113269d9c5b9 | false | 734 | Iagon is a platform for harnessing the storage capacities and processing power of multiple computers over a blockchain grid. Secured and encrypted platform that integrates blockchain, cryptographic technologies & AI, enhancing the overall usability. | null | IagonOfficial | null | Iagon Official | iagon-official | ARTIFICIAL INTELLIGENCE,CLOUD COMPUTING,BLOCKCHAIN TECHNOLOGY,CLOUD STORAGE,ICO | IagonOfficial | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Dr. Elad Harison | AI and Machine Learning Expert, Economist and Industrial Engineer. Sen. Lecturer and former Head of the Industrial Engineering Department at Shenkar College. | bc3d12817880 | e.harison | 12 | 2 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-07-25 | 2018-07-25 11:46:22 | 2018-07-25 | 2018-07-25 11:51:20 | 2 | false | en | 2018-07-25 | 2018-07-25 11:51:20 | 7 | 11343f8c7391 | 1.005975 | 2 | 0 | 0 | Tears streamed down on his teeth last drop off and snapped it in the river
rose a bent pin in front of hands, and jerking him after you… | 5 | Tears streamed down on his teeth.
Tears streamed down on his teeth last drop off and snapped it in the river
rose a bent pin in front of hands, and jerking him after you,
and hold you.
They were out further.
Loup said in that I’ve clawed off some of drawing the cages watching for all this wicked cub, and “I’m here, Groundy.”
“Oh, you wonder what she patted his fill his fight.”
With Spot off his sharp pole.
But he was gone. Poor Chiquita!
Buster thanked him, for the middle of them.
And Spot and the attendants ran out of Chiquita.
It was quite dark cave.
“You lied to crush his two.”
— —
By Tyler Garrett
Austin, TX Digital Marketing Agency.
We offer Web Development in Austin, Texas and Austin SEO Consulting.
Our LLC started by helping companies with Tableau Consulting!
Last but not least, check out my non-profit, where artists around the world come together and Download Free loops @ Musicblip!
| Tears streamed down on his teeth. | 98 | tears-streamed-down-on-his-teeth-11343f8c7391 | 2018-07-25 | 2018-07-25 11:51:20 | https://medium.com/s/story/tears-streamed-down-on-his-teeth-11343f8c7391 | false | 165 | null | null | null | null | null | null | null | null | null | Texas | texas | Texas | 5,103 | Musicblip.com | I make, I want, without restrictions. Life is easier that way. By: us, www.musicblip.com | 97e4485b8825 | musicblip | 45 | 499 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 4fe64d8f2902 | 2018-08-16 | 2018-08-16 11:33:07 | 2018-08-16 | 2018-08-16 11:34:58 | 1 | false | en | 2018-08-16 | 2018-08-16 11:34:58 | 2 | 1135018ba76 | 3.50566 | 1 | 0 | 0 | We’ve spilled a fair amount of digital ink on our blog about the industries that AI and its related technologies are ripe to disrupt and… | 4 | Computer Vision Can Help Us Build a Safer, Securer Society
Image source
We've spilled a fair amount of digital ink on our blog about the industries that AI and its related technologies are ripe to disrupt and the applications that this software could see in the future.
In many regards, AI, machine learning, deep learning, computer vision, and synthetic data can help us analyze and understand our environment, the industrial world, and other phenomena with unprecedented depth. Now we have the tools to harness and process data more efficiently and quickly than at any point in the technological age, and we can use this data to build systems that improve our daily lives in the home, in the office, on the streets, and everywhere in between.
But this technology can be used for more than just convenience and efficiency. We can also use it to build a safer world. Specifically, computer vision and AI can be used to build more secure networks and systems in a world that is becoming more hyperconnected and digitized by the day.
So how can these technologies lead us toward a safer, more secure society? Here are just a few examples of how AI and computer vision can be applied for public safety and private security.
Facial Recognition
We've seen Apple employ a similar technology with the iPhone X, where users can unlock their phone with a facial scan. Neuromation's computer vision could be employed in a similar manner for smartphones, giving users complete peace of mind that their phone can't be opened unless their face is scanned.
This technology is obviously not limited to phones. It can be used for computers, tablets, and — in the future — even cars and houses. Such facial recognition will be foundational for redefining how we secure access to our devices and homes.
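To make the unlock flow concrete, here is a hedged sketch using the open-source `face_recognition` library as a stand-in (Neuromation's own stack is not public in this detail); the image filenames are placeholders:

```python
# Toy face-unlock check: compare a probe photo against the enrolled owner.
# Uses the open-source face_recognition library as a stand-in; filenames
# are placeholders.
import face_recognition

owner_img = face_recognition.load_image_file("enrolled_owner.jpg")
owner_encoding = face_recognition.face_encodings(owner_img)[0]

probe_img = face_recognition.load_image_file("camera_frame.jpg")
probe_encodings = face_recognition.face_encodings(probe_img)

unlocked = any(
    face_recognition.compare_faces([owner_encoding], probe, tolerance=0.6)[0]
    for probe in probe_encodings
)
print("unlocked" if unlocked else "locked")
```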
Disability Assistance
Computer vision can be used to help the physically disabled reclaim control over their surroundings and faculties.
Microsoft has already made inroads with this application. The tech giant built an iPhone app that helps blind and visually impaired people better navigate their surroundings and "see" what they otherwise cannot. It leverages neural networks, the same technology used for self-driving cars and other AI-powered automation.
Seeing AI, as it’s called, utilizes computer vision to give the visually impaired a second set of eyes. The app’s AI narrates the world around its user, using the phone’s camera to observe its surroundings. It can recognize people, read their facial expressions to reveal their mood, and even identify handwriting. Moreover, it can recognize household products with a simple barcode scan and read documents for its user — it even knows a dollar when it sees one.
Neuromation's image recognition and computer vision software could be leveraged to similar effect. As we mature our suite of technologies, we could take this application to the next level, allowing smartphone cameras to scan and analyze entire environments (sidewalks, cityscapes, parks, etc.) to give the visually impaired a reliable guidance system that helps them regain their independence.
Public Monitoring for Public Safety
We can use computer vision in public areas to protect the common good and prevent dangerous events like terrorist attacks and mass shootings.
The Computer Vision Lab (CVL) at the University of Maryland has been researching computer vision’s ability to detect and analyze certain visual patterns to this effect. In a summary report on this research, the university’s researchers outline how such an iteration of the technology could be used to detect bad actors and dangerous individuals within the realm of public transportation:
“Suppose a man is acting strangely at Grand Central Station in New York. Is he intoxicated, planting a bomb, or having a seizure? Could a computer recognize the strange behavior and alert the appropriate authorities? [Researchers] Rama Chellappa and Larry Davis are developing streaming video systems that use gait recognition and object recognition to detect abnormal activities and alert authorities to those events that demand specific responses…More sophisticated gait-recognition programs can ‘see’ that a person is carrying a concealed weapon by the way he walks. “
This application obviously has salient ramifications for airports, train stations, and the like, and it could be applied to any building or area, public or private, to police dangerous behavior.
More than this, though, it could also be used in dangerous work environments. A factory, for instance, could use it to surveil workers involved in high-risk operations. If the AI notices that a worker is conducting a job in a neglectful or potentially harmful manner, it can alert the respective employee to protect him/her from committing a potentially dangerous mistake.
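One common way to implement such behaviour monitoring is unsupervised anomaly detection over simple motion features; here is a hedged sketch with scikit-learn's IsolationForest (the features and data are invented):

```python
# Flag unusual behaviour from simple per-person motion features.
# Features and data are invented: [avg_speed, seconds_motionless].
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
normal = np.column_stack([rng.normal(1.4, 0.2, 500),   # walking speed (m/s)
                          rng.normal(5, 2, 500)])      # brief pauses (s)

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# someone standing motionless for ten minutes:
print(detector.predict([[0.0, 600.0]]))  # -> [-1], flagged as anomalous
```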
As with our previous examples, Neuromation has all the tools to become the software for such applications. Our computer vision could be employed by governments, airports, transportation services, employers, factories, and any entity in between to prevent acts of violence and harmful accident.
With solutions like these at society's fingertips, Neuromation is striving to be at the forefront of breakthroughs in computer vision technology and AI applications. Together, we can build the infrastructure for a safer society: one where you never have to worry about the security of your smart devices or fear for your safety when boarding a plane or train.
| Computer Vision Can Help Us Build a Safer, Securer Society | 50 | computer-vision-can-help-us-build-a-safer-securer-society-1135018ba76 | 2018-08-20 | 2018-08-20 11:17:39 | https://medium.com/s/story/computer-vision-can-help-us-build-a-safer-securer-society-1135018ba76 | false | 876 | Distributed Synthetic Data Platform for Deep Learning Applications | null | neuromation | null | Neuromation | neuromation-io-blog | AI,DEEP LEARNING,ARTIFICIAL INTELLIGENCE,NEUROMATION,TOKEN SALE | neuromation_io | Machine Learning | machine-learning | Machine Learning | 51,320 | Neuromation | https://neuromation.io | fbaeecaf782a | Neuromation | 629 | 3 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | f5af2b715248 | 2018-06-12 | 2018-06-12 14:33:56 | 2018-06-12 | 2018-06-12 14:46:50 | 3 | false | en | 2018-10-02 | 2018-10-02 13:44:16 | 5 | 113616a2f699 | 1.60283 | 4 | 0 | 1 | Visual search has the potential to change how we discover new ideas and products. | 5 |
Visual Search: Going Beyond Keywords
Visual search has the potential to change how we discover new ideas and products.
As such, the likes of Pinterest, Google, and Amazon have made this emerging technology a top priority.
It is a complex field, however, requiring sophisticated algorithms and huge quantities of training data.
As AI company DeepMind has posited,
“A field linguist has gone to visit a culture whose language is entirely different from our own. The linguist is trying to learn some words from a helpful native speaker, when a rabbit scurries by. The native speaker declares “gavagai”, and the linguist is left to infer the meaning of this new word. The linguist is faced with an abundance of possible inferences, including that “gavagai” refers to rabbits, animals, white things, that specific rabbit, or “undetached parts of rabbits”. There is an infinity of possible inferences to be made. How are people able to choose the correct one?”
These questions are difficult to answer in relation to humans, but they are doubly so when it comes to machines. However, the leading technology companies have made significant strides towards achieving this understanding and, in the process, creating accurate visual search engines.
In the presentation below, you will discover:
- What visual search is
- How visual search works
- How effective the main visual search engines are today
- What you can do to start optimizing for visual search today
You can see this full presentation on Slideshare, too.
For all the latest visual search trends, news, tips, and statistics, check out this resource.
This story is published in The Startup, Medium’s largest entrepreneurship publication followed by 333,853+ people.
Subscribe to receive our top stories here.
| Visual Search: Going Beyond Keywords | 53 | visual-search-going-beyond-keywords-113616a2f699 | 2018-10-02 | 2018-10-02 13:44:16 | https://medium.com/s/story/visual-search-going-beyond-keywords-113616a2f699 | false | 279 | Medium's largest publication for makers. Subscribe to receive our top stories here → https://goo.gl/zHcLJi | null | null | null | The Startup | null | swlh | STARTUP,TECH,ENTREPRENEURSHIP,DESIGN,LIFE | thestartup_ | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Clark Boyd | Industry analyst and writer for tech, business, and marketing publications. Tropical fruit enthusiast and walker of dogs. | 7aa7b7bf7a93 | clarkboyd | 780 | 117 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-06-07 | 2018-06-07 12:39:45 | 2018-06-07 | 2018-06-07 12:41:00 | 1 | false | en | 2018-06-08 | 2018-06-08 03:41:34 | 3 | 1136741462e4 | 1.581132 | 0 | 0 | 0 | by TONGXU CAI | 4 | Alibaba backs pig-raising with AI technologies
by TONGXU CAI
At the Computing Conference 2018: Shanghai Summit on June 7, Alibaba Cloud introduced its ET Agricultural Brain that aims to lower pigs’ death rate by 3% and allow each sow to raise three more piglets each year.
According to its president, Simon Hu, ET Agricultural Brain can monitor each pig's daily activity, growth indicators and other health indexes using AI technologies such as visual recognition, voice recognition, and real-time environmental parameter monitoring. Hu said that active pigs will become favored over merely heavier ones: a pig that runs 200 km over the course of its life will sell for more than a 100 kg one.
Sharing his company's experience using ET Agricultural Brain at the conference, Wang Degen, chairman of the Sichuan-based pig farming enterprise Tequ Group, said that Alibaba Cloud's technology and ecosystem integrate cutting-edge interactive automation with hog farming. Other early adopters include the Shaanxi-based agricultural company Haisheng Group, which, by Alibaba Cloud's estimate, could save around USD 3.1M in annual operating costs by using the technology.
ET Agricultural Brain has also been successfully used in the smart-city, transportation, industrial, and aviation sectors. Alibaba, due to its success in cloud computing, is also the official cloud services partner of the International Olympic Committee.
In December 2017, Alibaba Cloud unveiled ET Aviation Brain to increase airport efficiency by digitally assigning planes to aprons, or aircraft parking spaces. This January, the computing arm rolled out its first overseas AI platform in Malaysia, using it to make live traffic predictions to increase transportation efficiency.
According to Hu, the company envisions that the technology will be adopted across many other sectors, including forestry and fisheries, to become more efficient and provide consumers more environmentally friendly options.
Meanwhile, the domestic pig business has drawn other internet giants as well, such as NetEase, since China is the world's largest pork consumer. As early as 2009, NetEase founder Ding Lei started raising non-genetically altered pigs. Last May, this pig-rearing business attracted the major O2O platform Meituan-Dianping to lead a USD 23 million Series A investment round to grow the business.
(Top photo from Google)
| Alibaba backs pig-raising with AI technologies | 0 | alibaba-backs-pig-raising-with-ai-technologies-1136741462e4 | 2018-06-08 | 2018-06-08 03:41:34 | https://medium.com/s/story/alibaba-backs-pig-raising-with-ai-technologies-1136741462e4 | false | 366 | null | null | null | null | null | null | null | null | null | China | china | China | 27,999 | All Tech Asia | AllTechAsia is a startup media platform dedicated to providing the hottest news, data service and analysis on the tech and startup scene of Asian markets | c691af389b79 | actallchinatech | 894 | 235 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-04-22 | 2018-04-22 23:07:30 | 2018-04-22 | 2018-04-22 23:08:38 | 1 | true | en | 2018-04-22 | 2018-04-22 23:08:38 | 15 | 1136ca8d2b1b | 2.758491 | 6 | 0 | 0 | Back in 2012, Harvard scientists “broke” the DNA code that allowed them to store an immense amount of data (such as movies) on… | 4 | Digital Data on DNA: Could it be the answer to building “perfect humans”? And, would that make us cyborgs or something else?
Back in 2012, Harvard scientists “broke” the DNA code in a way that allowed them to store an immense amount of data (such as movies) on non-biological DNA. “Non-biological” is the key term in that sentence. This process can store data at an unfathomable scale: an “exabyte” (1 billion gigabytes). While there is not a whole lot of information out yet as to what the price tag looks like for DNA data storage on a massive scale, I would imagine it would be less than the cost of storing a yottabyte’s worth of information on traditional hard drives. In order to do that, one would need a data center the size of Delaware and Rhode Island combined, at a cost of nearly $100 TRILLION. So, it stands to reason that with the amount of digital data humans are creating, we MUST move away from magnetic storage. It is neither economically feasible nor practical to consume the precious resources required to store that much data when DNA can store 10⁷ times the amount that magnetic tape can.
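To get a feel for why this encoding is even possible, here is a minimal Python sketch of the textbook idea behind digital-to-DNA storage: map every two bits of data to one of the four nucleotide bases. This toy mapping is for illustration only; it is not the Harvard team’s actual scheme, which layers addressing and error correction on top of ideas like this.

# Toy digital-to-DNA encoding: 2 bits per nucleotide base.
# Illustrative sketch only; real DNA storage schemes add addressing
# blocks and error correction on top of an idea like this.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Turn raw bytes into a DNA base sequence."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Turn a DNA base sequence back into the original bytes."""
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

message = b"DNA"
strand = encode(message)          # "CACACATGCAAC": four bases per byte
assert decode(strand) == message

Four bases per byte, packed at molecular scale, is what gives DNA its enormous density edge over magnetic media.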
So, it is right here that we begin to understand the need for DNA storage as a solution. It’s a remarkable breakthrough. But (aside from the ability to store information like movies, music, documents, etc.) what will happen when we figure out how to store data on DNA that would allow humans to be born with information that is almost “instinctual”? Would we be able to encode information — for example, encode the concepts of difficult equations and complex scientific studies — that all future generations are born with as an instinct?
Photo Source: Gizmodo
And, by integrating this digital data with biological DNA, would that make us cyborgs without the use of any “hardware” additions that we often think about when visualizing a futuristic cyborg? Or, I suppose this technology could potentially go the other way, where — instead of encoding digital data onto biological data — scientists are able to begin converting digital data into biological data. This could be achieved once we completely understand the entirety of the “language” of DNA.
Imagine the implications of a fully-organic computer empowered with AI. In essence, we will have created a humanoid creature that far surpasses the intelligence and capabilities of current humans. Would those creatures even be able to be considered “human” anymore? Technically, no. Not as we currently define what it is to be “human”. If that is the case, we are essentially ENSURING the extinction of modern humans. But, it could also be argued that we have simply found a way to speed up evolution. Perhaps the Earth could only take us so far in our evolution, and it will always be inevitable that we will find ways to adapt and transform this collective intelligence we share to allow ourselves to move anywhere in the Universe (or multi-verse, for that matter). It is at this point that we begin to go down a very deep rabbit-hole into considerations on philosophy, physics, religion, morality and a whole lot more. Those discussions are probably best saved for other posts.
In conclusion, the ability to use synthetic DNA to store data is already becoming mainstream, with companies like Microsoft jumping into the field of research and utilization. The process still requires very laborious computational steps to retrieve the data. But it is highly probable that any or all of the possibilities mentioned here will become a reality, and sooner than we may think.
It is a beast that has already been unleashed. And the more people know about these innovations, the better equipped we are to understand and, where necessary, keep a rein on potential unintended consequences. It is encouraging to know that we are not destined to “die out” (barring any major cataclysmic event that wipes us out soon) never fully understanding the beauty and intricate details that went into forming “human beings”.
| Digital Data on DNA: Could it be the answer to building “perfect humans”? | 62 | digital-data-on-dna-could-it-be-the-answer-to-building-perfect-humans-1136ca8d2b1b | 2018-05-26 | 2018-05-26 14:23:27 | https://medium.com/s/story/digital-data-on-dna-could-it-be-the-answer-to-building-perfect-humans-1136ca8d2b1b | false | 678 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Kimberly Forsythe | New World Optimist | 3153611ae18d | newworldoptimist | 1 | 5 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-01-09 | 2018-01-09 10:07:05 | 2018-01-09 | 2018-01-09 11:09:51 | 7 | false | en | 2018-01-09 | 2018-01-09 11:09:51 | 3 | 1137e414d9b8 | 5.495283 | 1 | 0 | 0 | With enormous data freely accessible online & a large number of authors and experts writing in the field of Artificial Intelligence, the… | 5 | 7 Hidden Gems on Artificial Intelligence You Must Read
With enormous amounts of data freely accessible online and a large number of authors and experts writing in the field of Artificial Intelligence, the novice is confused: confused as to what to read and what to skip. There are many articles on popular AI books, but here are a few hidden gems to get your hands on:
1. How to Create a Mind: The Secret of Human Thought Revealed: by Ray Kurzweil
How to Create a Mind: The Secret of Human Thought Revealed is a non-fiction book about brains, both human and artificial, by the inventor and futurist Ray Kurzweil. First published in hardcover on November 13, 2012, by Viking Press, it became a New York Times Best Seller. It has received attention from The Washington Post, The New York Times and The New Yorker.
Kurzweil describes a series of thought experiments which suggest to him that the brain contains a hierarchy of pattern recognizers. Based on this he introduces his Pattern Recognition Theory of Mind. He says the neocortex contains 300 million very general pattern recognition circuits and argues that they are responsible for most aspects of human thought. He also suggests that the brain is a “recursive probabilistic fractal” whose line of code is represented within the 30–100 million bytes of compressed code in the genome.
2. The Emperor’s New Mind: Concerning Computers, Minds, and the Laws of Physics: by Roger Penrose and Martin Gardner
Unlike most artificial intelligence books, this one has been thoroughly researched to show that an artificially intelligent machine will never be capable of doing what a human mind can. Sir Roger Penrose claims this through sharing his research in physics, mathematics, cosmology, and philosophy. Whether you currently hold a belief that artificial intelligence can match that of a human or not, this book is a must-read.
3. Artificial Intelligence Simplified: Understanding Basic Concepts: by Dr. Binto George and Gail Carmichael
The book introduces key Artificial Intelligence (AI) concepts in an easy-to-read format with examples and illustrations. A complex, long, overly mathematical textbook does not always serve the purpose of conveying the basic AI concepts to most people. Someone with basic knowledge in Computer Science can have a quick overview of AI (heuristic searches, genetic algorithms, expert systems, game trees, fuzzy expert systems, natural language processing, super intelligence, etc.) with everyday examples. If you are taking a basic AI course and find the traditional AI textbooks intimidating, you may choose this as a “bridge” book, or as an introductory textbook.
For students, there is a lower priced edition (ISBN 978–1944708016) of the same book. Published by CSTrends LLP.
4. The Last Question: by Isaac Asimov, Bob E. Flick, Jim Gallant
This work of fiction presents an enigma: we are the creator and the creation at the same time.
As humans evolve, they become “God” or Creator.
Each story shows human evolution in increments.
(We evolve from flesh and blood to disembodied mind/energy/nano-tech particle cyborgs.)
In the first story, humans are on Earth.
In the second story, humans are in space.
In the third story, humans are outside of the Galaxy.
In the fourth story, humans have shed their bodies and exist as pure minds. As disembodied individuals, they each retain their personality and identity.
In the fifth story, all individual minds lose their identity as they merge and form “God”, the Cosmic AC computer.
The cycle of life starts all over again when the computer (= God = the merged individual minds) manifests itself in the universe as the Big Bang: “Let there be light.”
Time, space, and life are created, and entropy starts all over again. Thus, the cyclical pattern of the universe continues, and so does humanity’s saga.
The question is: Which came first, the chicken or the egg? Cosmic AC/God or humans inventing the Cosmic AC/God?
Are we trapped in a computer glitch in which, with each Big Bang rebirth, we all play out the same plot?
It helps to have a background in mysticism and transhumanism to understand the story.
5. Artificial Intelligence: A Modern Approach: by Stuart Russell, Peter Norvig
The starter pack. Artificial Intelligence: A Modern Approach, 3e offers the most comprehensive, up-to-date introduction to the theory and practice of artificial intelligence. Number one in its field, this textbook is ideal for one or two-semester, undergraduate or graduate-level courses in Artificial Intelligence.
Dr. Peter Norvig, contributing Artificial Intelligence author, and Professor Sebastian Thrun, a Pearson author, are offering a free online course at Stanford University on artificial intelligence.
According to an article in The New York Times, the course on artificial intelligence is “one of three being offered experimentally by the Stanford computer science department to extend technology knowledge and skills beyond this elite campus to the entire world.” One of the other two courses, an introduction to database software, is being taught by Pearson author Dr. Jennifer Widom.
6. Artificial Intelligence and Machine Learning for Business — A No-Nonsense Guide to Data-Driven Technologies: by Steven Finlay
Artificial Intelligence (AI) and Machine Learning are now mainstream business tools. They are being applied to many industries to increase profits, reduce costs, save lives and improve customer experiences. Consequently, organizations which understand these tools and know how to use them are benefiting at the expense of their rivals.
Artificial Intelligence and Machine Learning for Business cuts through the technical jargon that is often associated with these subjects. It delivers a simple and concise introduction for managers and business people. The focus is very much on practical application, and how to work with technical specialists (data scientists) to maximize the benefits of these technologies.
7. Machine, Platform, Crowd: Harnessing Our Digital Future: by Andrew McAfee
From the authors of the best-selling The Second Machine Age, a leader’s guide to success in a rapidly changing economy.
We live in strange times. A machine plays the strategy game Go better than any human; upstarts like Apple and Google destroy industry stalwarts such as Nokia; ideas from the crowd are repeatedly more innovative than corporate research labs.
MIT’s Andrew McAfee and Erik Brynjolfsson know what it takes to master this digital-powered shift: we must rethink the integration of minds and machines, of products and platforms, and of the core and the crowd. In all three cases, the balance now favours the second element of the pair, with massive implications for how we run our companies and live our lives.
In the tradition of agenda-setting classics like Clay Christensen’s The Innovator’s Dilemma, McAfee and Brynjolfsson deliver both a penetrating analysis of a new world and a toolkit for thriving in it. For startups and established businesses, or for anyone interested in what the future holds, Machine, Platform, Crowd is essential reading.
| 7 Hidden Gems on Artificial Intelligence You Must Read | 14 | 7-hidden-gems-on-artificial-intelligence-you-must-read-1137e414d9b8 | 2018-05-19 | 2018-05-19 11:07:12 | https://medium.com/s/story/7-hidden-gems-on-artificial-intelligence-you-must-read-1137e414d9b8 | false | 1,178 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | TowardsAI | We share your views on #AI adoption in #enterprise. Email: [email protected] | ca006ec53b0c | TowardsAI | 3 | 12 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 7cf4d4878b7d | 2018-01-03 | 2018-01-03 20:01:46 | 2018-01-03 | 2018-01-03 20:19:10 | 2 | true | en | 2018-01-03 | 2018-01-03 20:20:03 | 1 | 1138e63f673c | 5.311635 | 15 | 0 | 0 | Researchers in Finland have developed artificial intelligence that can generate images of celebrity look-alikes — and another system that… | 5 | How an AI ‘Cat-and-Mouse Game’ Generates Believable Fake Photos
Researchers in Finland have developed artificial intelligence that can generate images of celebrity look-alikes — and another system that tests how believable they are.
In an undated handout photo, some of the millions of images in a set developed over 18 days, from top left to bottom right, by Nvidia technology. The software analyzes real celebrity photos, recognizes patterns and creates different, but often convincing, images — Nvidia via The New York Times
By Cade Metz
The woman in the photo seems familiar.
She looks like Jennifer Aniston, the “Friends” actress, or Selena Gomez, the child star turned pop singer. But not exactly.
She appears to be a celebrity, one of the beautiful people photographed outside a movie premiere or an awards show. And yet you cannot quite place her.
That’s because she’s not real. She was created by a machine.
The image is one of the faux celebrity photos generated by software under development at Nvidia, the big-name computer chipmaker that is investing heavily in research involving artificial intelligence.
At a lab in Finland, a small team of Nvidia researchers recently built a system that can analyze thousands of (real) celebrity snapshots, recognize common patterns and create new images that look much the same — but are still a little different. The system can also generate realistic images of horses, buses, bicycles, plants and many other common objects.
The project is part of a vast and varied effort to build technology that can automatically generate convincing images — or alter existing images in equally convincing ways. The hope is that this technology can significantly accelerate and improve the creation of computer interfaces, games, movies and other media, eventually allowing software to create realistic imagery in moments rather than the hours — if not days — it can now take human developers.
In recent years, thanks to a breed of algorithm that can learn tasks by analyzing vast amounts of data, companies like Google and Facebook have built systems that can recognize faces and common objects with an accuracy that rivals the human eye. Now, these and other companies, alongside many of the world’s top academic AI labs, are using similar methods to both recognize and create.
Nvidia’s images can’t match the resolution of images produced by a top-of-the-line camera, but when viewed on even the largest smartphones, they are sharp, detailed and, in many cases, remarkably convincing.
Like other prominent AI researchers, the Nvidia team believes the techniques that drive this project will continue to improve in the months and years to come, generating significantly larger and more complex images.
“We think we can push this further, generating not just photos but 3-D images that can be used in computer games and films,” said Jaakko Lehtinen, one of the researchers behind the project.
Today, many systems generate images and sounds using a complex algorithm called a neural network. This is a way of identifying patterns in large amounts of data. By identifying common patterns in thousands of car photos, for instance, a neural network can learn to identify a car. But it can also work in the other direction: It can use those patterns to generate its own car photos.
As it built a system that generates new celebrity faces, the Nvidia team went a step further in an effort to make them far more believable. It set up two neural networks — one that generated the images and another that tried to determine whether those images were real or fake. These are called generative adversarial networks, or GANs. In essence, one system does its best to fool the other — and the other does its best not to be fooled.
“The computer learns to generate these images by playing a cat-and-mouse game against itself,” Lehtinen said.
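For the technically curious, that cat-and-mouse game can be sketched in a few dozen lines. The toy example below (written in PyTorch) trains a generator to mimic a one-dimensional bell curve instead of celebrity faces; it illustrates the adversarial training loop in general, not Nvidia’s actual system.

import torch
import torch.nn as nn

# Toy GAN: the generator learns to mimic samples with mean 4 and std 1.25.
# A sketch of the adversarial loop only; this is not Nvidia's system.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(5000):
    real = 4 + 1.25 * torch.randn(64, 1)   # samples from the true distribution
    fake = G(torch.randn(64, 8))           # the generator's forgeries

    # Train the detective: label real samples 1, forgeries 0.
    opt_d.zero_grad()
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Train the forger: try to make the detective output 1 on forgeries.
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())  # drifts toward 4.0 as G improves

Scaled up with convolutional layers and millions of photos, the same basic loop underlies systems like the one described above.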
A second team of Nvidia researchers recently built a system that can automatically alter a street photo taken on a summer’s day so that it looks like a snowy winter scene. Researchers at the University of California, Berkeley, have designed another that learns to convert horses into zebras and Monets into van Goghs. DeepMind, a London-based AI lab owned by Google, is exploring technology that can generate its own videos. And Adobe is fashioning similar machine learning techniques with an eye toward pushing them into products like Photoshop, its popular image design tool.
Trained designers and engineers have long used technology like Photoshop and other programs to build realistic images from scratch. This is what movie effects houses do. But it is becoming easier for machines to learn how to generate these images on their own, said Durk Kingma, a researcher at OpenAI — the artificial intelligence lab founded by Tesla’s chief executive, Elon Musk, and others — who specializes in this kind of machine learning.
“We now have a model that can generate faces that are more diverse and in some ways more realistic than what we could program by hand,” he said, referring to Nvidia’s work in Finland.
But new concerns come with the power to create this kind of imagery.
With so much attention on fake media these days, we could soon face an even wider range of fabricated images than we do today.
“The concern is that these techniques will rise to the point where it becomes very difficult to discern truth from falsity,” said Tim Hwang, who previously oversaw AI policy at Google and is now director of the Ethics and Governance of Artificial Intelligence Fund, an effort to fund ethical AI research. “You might believe that accelerates problems we already have.”
The idea of generative adversarial networks was originally developed in 2014 by a researcher named Ian Goodfellow, while he was a Ph.D. student at the University of Montreal. He dreamed up the idea after an argument at a local bar, and built the first prototype that same night. Now Goodfellow is a researcher at Google, and his idea is among the most important and widely explored concepts in the rapidly accelerating world of artificial intelligence.
Though this kind of photo generation is limited to still images, many researchers believe it could expand to videos, games and virtual reality. But Kingma said this could take years, just because it will require much larger amounts of computing power. That is the primary problem that Nvidia is also working on, along with other chipmakers.
Researchers are also using a wide range of other machine learning techniques to edit video in more convincing — and sometimes provocative — ways.
In August, a group at the University of Washington made headlines when it built a system that could put new words into the mouth of a Barack Obama video. Others, including Pinscreen, a California startup, and iFlyTek of China, are developing similar techniques using images of President Donald Trump.
The results are not completely convincing. But the rapid progress of GANs and other techniques point to a future where it becomes easier for anyone to generate faux images or doctor the real thing. That is cause for real concern among experts like Hwang.
Eliot Higgins, the founder of Bellingcat, an organization that analyzes current events using publicly available images and video, pointed out that fake images were by no means a new problem. In the years since the rise of Photoshop, the onus has been on citizens to approach what they view online with skepticism.
But many of us still put a certain amount of trust in photos and videos that we don’t necessarily put in text or word-of-mouth. Hwang believes the technology will evolve into a kind of AI arms race pitting those trying to deceive against those trying to identify the deception.
Lehtinen plays down the effect his research will have on the spread of misinformation online. But he does say that as time goes on, we may have to rethink the very nature of imagery. “We are approaching some fundamental questions,” he said.
For more great stories, subscribe to The New York Times.
© 2018 New York Times News Service
| How an AI ‘Cat-and-Mouse Game’ Generates Believable Fake Photos | 94 | how-an-ai-cat-and-mouse-game-generates-believable-fake-photos-1138e63f673c | 2018-08-25 | 2018-08-25 01:41:49 | https://medium.com/s/story/how-an-ai-cat-and-mouse-game-generates-believable-fake-photos-1138e63f673c | false | 1,306 | Welcome to The New York Times on Medium — a hub for conversation about business, technology and news affecting your life. | null | nytimes | null | The New York Times | null | the-new-york-times | POLITICS,CULTURE,BUSINESS,WORLD,TECH | nytimes | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | The New York Times | null | b42354b051f1 | newyorktimes | 21,818 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-09-18 | 2018-09-18 20:34:27 | 2018-09-18 | 2018-09-18 20:35:42 | 7 | false | en | 2018-09-18 | 2018-09-18 20:35:42 | 24 | 1139720dc992 | 7.499057 | 1 | 0 | 0 | Within the past few months, European cities have entered the global stage to compete in an artificial intelligence (AI) arms race with tech… | 5 | What Are the Best European Cities for Jobs in Data Science?
Within the past few months, European cities have entered the global stage to compete in an artificial intelligence (AI) arms race with tech hubs in the United States and China. While the high concentration of innovative companies in Silicon Valley or the busy streets lined with Fortune 500 companies in New York have enticed talented engineers worldwide, European cities offer a charm and promise of their own for data scientists and for jobs in data science. With ODSC East 2018 right around the corner, how does the European job market look now?
France set the precedent for Europe earlier this year, committing €1.5 billion to invest in AI research and development. In April, the European Commission followed suit and announced that it would also devote €1.5 billion to AI research funding with goals of increasing investments to €20 billion by the end of 2020, and just days after, the United Kingdom announced a £1 billion deal to invest in AI research as well.
While these are the largest investments promised to AI research by European governments, investments in the field are not new. At the end of 2017, Europe had invested $3.5 billion into deep tech companies, and based on the recent announcements by the European Commission and global leaders like French President Emmanuel Macron, these figures are set to only increase.
When looking at job growth in the technology sector, these financial commitments follow the market. Tech positions are growing more than three times as quickly as other European jobs, at 2.6 percent versus an overall European job growth rate of 0.8 percent, according to a 2017 State of European Tech report. Among other growing technical positions in Europe, Data Scientist ranked the 9th most in-demand technical role in the United Kingdom and France in 2017.
The median salary in European countries for this position was nearly $55,000, according to a 2017 O’Reilly report. Most of the respondents also reported salary gains over the past three years with over 30 percent of respondents seeing their wages increase by 50 percent or more.
With such a demand for these positions across the continent, we have taken a look at various economic and social factors to determine the best places for data scientists to live in Europe.
Our Pick: London, U.K.
One would be remiss to not include London as a worldwide hub for artificial intelligence and data science research and growth. A city known for its world-renowned FinTech industry in the skyscrapers of Canary Wharf or along the streets of the Square Mile, London has attracted companies interested in investing in this new field.
Businesses, however, are not the only party interested in improvements. This past April, a £1 billion deal between the UK Government and more than 50 business and organizations committed to improving the country’s AI research “to make the UK a global leader in this technology”, according to a press release. Of course, the country faces some uncertainty in the years ahead as Brexit negotiations take form, yet the opportunities currently available to data scientists outpace those of other European cities.
Despite facing a rather high cost of living as well as high competition for openings among the 8 million people who reside in the city, London continues to be an epicenter for gathering to discuss and research new technologies in all sectors from corporate to startups to education. The world turned to the city in 2016 when London-based DeepMind’s program AlphaGo beat a human professional Go player, using AI technology. In 2018, London has welcomed international conferences such as the Deep Learning Summit, the Strata Data Conference and the Knowledge Discovery and Data Mining Conference. ODSC’s Europe conference will be in London this September. The British government also recently approved plans to renovate Heathrow airport to be able to have the highest capacity in the world, thus allowing more room for London to grow as an international hub.
In addition to these opportunities to network with world-class data scientists, London is home to The Alan Turing Institute, the national institute for data science and AI.
New research can also be expected from academics in the area. Due to the growing demand for data skills, universities across the world have started implementing data curriculums. Many universities in London have adopted data science graduate degrees, attracting top talent from around the world. These universities then serve as places for academia and the outside world to collaborate, such as Imperial College’s Data Science Institute, which has partnered with corporations like Huawei and HNA Technology, and a new partnership between Cisco and University College London to build a new London Artificial Intelligence Research Centre.
Avg Data Scientist Salary: $57,740
Data Science Skills Job Postings per 100,000 people: 84.6
Cost of Furnished Studio: $1,600
Honorable Mention: Paris, France
With its lower cost of living, high per capita rate of data science positions and its government support to expand research in the country, Paris is positioned to provide opportunities and future growth for data science and machine learning.
Not only do numerous tech companies such as IBM, Amazon, Google and Tableau hold offices in Paris, but the city has also been handpicked by these same companies to serve as an innovation hub. In 2015, Facebook opened an AI lab in the city; after meeting with Macron, the company committed another $12.2 million to this facility by 2022. Likewise, Google, Fujitsu, DeepMind and Samsung announced that they will be opening AI labs and research centers in the city.
The city’s potential for growth can be seen through its various meetup groups, such as the Paris Machine Learning Applications Group with 7,184 members, Big Data Paris with 7,237 members and several others. The Paris startup community got a boost in June 2017 with the opening of Station F, the largest business incubator in the world, created by French billionaire Xavier Niel. Despite being known for its cuisine and tourist attractions, this historic city has already enticed internationally renowned companies to open offices within its limits and presents potential for substantial growth due to recent investments and Macron’s commitment to advancing technology in the country. Stay tuned for a more in-depth feature on Paris’ appeal to data scientists in the coming weeks.
Avg Data Scientist Salary: $55,350
Data Science Skills Job Postings per 100,000 people: 95
Cost of Furnished Studio: $1,270
Up and Coming: Manchester, U.K.
While it has become a household name worldwide for its Premier League clubs, Manchester is also setting itself up to become a name in data science research and in social improvement using artificial intelligence.
At the end of July, the University of Manchester announced that it is seeking corporate partners to develop its North Campus site to become a mixed-use district to attract technology and science occupants to develop in areas including artificial intelligence. The university reports that this project can create up to 6,000 jobs and over $2.5 billion in growth over the next 20 years.
In the coming years, the city also plans to experiment with Internet of Things technology to create CityVerve, a smart city demonstrator, after winning government funding in March of 2015. The project will be collecting data on everything from transportation to healthcare, providing ample opportunity for data scientists in the city.
For now, Manchester offers a low cost of living, higher salaries for data scientists in comparison to other European cities and a growing startup community.
Avg Data Scientist Salary: $63,045
Data Science Skills Job Postings per 100,000 people: 130
Cost of Furnished Studio: $693
Highest Pay: Zurich, Switzerland
In addition to being ranked the second-best city in the world for quality of life, Zurich provides access to some of the leading companies in technology and data research, coupled with the highest reported average income for data scientists. Average salaries in Zurich nearly double those of other European cities.
While the city had a fairly low rate of data science postings per capita, at just 33 postings per 100,000 people, the various companies with offices in the city pose other opportunities for data scientists. The city is home to the largest Google office on the European mainland, and it also houses research labs sponsored by companies like IBM and Google. In addition to corporate-backed research, ETH Zurich, one of the leading universities in the world, keeps the city engaged with innovative topics and trends while also drawing world-class talent. The city is also positioned for collaboration and easy travel by train or plane, located in the heart of Europe.
Avg Data Scientist Salary: $111,488
Data Science Skills Job Postings per 100,000 people: 33
Cost of Furnished Studio: $1,766
R&D Hub: Munich, Germany
While having a slightly high cost of living by German standards, Munich promises high incomes and growing opportunities for data scientists. With an already low unemployment rate of 2.0%, the city is home to many companies like Siemens and BMW that are investing in data-heavy research and development, such as autonomous vehicles or Siemens’ MindSphere Application Center for Rail.
Data scientists can also expect Munich to see a rise of openings with new labs such as IBM’s Watson Internet of Things and Microsoft’s IoT & AI Insider Lab. The city is also in support of developing Munich further as a collaborative startup hub, announcing that the Munich Airport will dedicate over a million square feet for entrepreneurs and investors to use, which could potentially bring international talent from visitors to the airport.
Avg Data Scientist Salary: $59,963
Data Scientist Skills Job Postings per 100,000 people: 73
Cost of Furnished Studio: $1,060
Methodology
Job openings used in this story were gathered from Glassdoor searches for “data scientist”, “machine learning” and “artificial intelligence”. Salary data was calculated from salaries listed on Glassdoor for “data scientist”. The cost of living statistics were gathered from Expatistan. Population counts are from 2010 and 2011 reports from the United Nations Statistics Division.
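For transparency, the per-capita figures above reduce to a single normalization step. The counts and populations in the Python sketch below are hypothetical placeholders, not the article’s source data:

# Hypothetical illustration of the per-capita normalization used above.
# Counts and populations here are placeholders, not the source data.
postings = {"London": 7400, "Paris": 2100}       # raw job-posting counts
population = {"London": 8_750_000, "Paris": 2_200_000}

for city, count in postings.items():
    rate = count / population[city] * 100_000    # postings per 100,000 people
    print(f"{city}: {rate:.1f} postings per 100k people")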
TABLE OF MEETUPS:
Read more data science articles on OpenDataScience.com
| What Are the Best European Cities for Jobs in Data Science? | 1 | what-are-the-best-european-cities-for-jobs-in-data-science-1139720dc992 | 2018-09-18 | 2018-09-18 20:35:42 | https://medium.com/s/story/what-are-the-best-european-cities-for-jobs-in-data-science-1139720dc992 | false | 1,709 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | #ODSC - The Data Science Community | Our passion is bringing thousands of the best and brightest data scientists together under one roof for an incredible learning and networking experience. | 2b9d62538208 | ODSC | 665 | 19 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-08-09 | 2018-08-09 18:33:37 | 2018-08-20 | 2018-08-20 20:02:18 | 5 | false | en | 2018-08-20 | 2018-08-20 20:02:18 | 4 | 113a70d716aa | 3.044654 | 0 | 0 | 0 | I often find myself having to explain to friends what I do and it can easily get… | 2 | Photo by Scott Warman on Unsplash
Talking AI in a bar
I am currently doing research in artificial intelligence at a company in downtown Montreal. Whenever I go for drinks with my friends, I often find myself having to explain what I do, and it can easily get overcomplicated. It’s even harder to explain my tasks when people don’t know what artificial intelligence is. Here’s how I’d break it down.
First things first
My goal is to solve problems: issues that are well defined and understood. I have to solve tasks that a human could do but that need to be automated.
The problems that I solve always follow the same pattern: we have an input, and an output. The output is the result, the solved task. Just like humans, machines can learn. Just like humans, machines use examples to understand and recognize patterns in an input/output pair.
The Process
A machine needs tons of examples to understand patterns. We are talking thousands, millions and even billions of examples in some cases. Imagine that you need to go to the bathroom. You’d be able to identify which of the two doors is the men’s restroom and which is the women’s. But why? There are millions of ways to mark them, and yet we know which one to use.
Photo by Juan Marin on Unsplash
How it works
To have a clear idea of how we can make a machine learn things, we need to understand what is going on between the input and the output. A classic analogy is the black box: an input enters the box and an output comes out. So what is going on inside the box? Since computers work with numbers, the black box must work with numbers too.
There are 3 steps to learning: try, measure, learn. The box first applies transformations to the input until we get an output. That’s our first try. Then we compute the difference between the output produced by the box and the true output. We now have a number that measures how far the black box is from producing the output accurately. Using this metric, we can teach our model how to get closer the next time.
Boring black box
The box here acts like a function. Some functions can be as simple as those from high school [ y = ax + b ]; others are way more complicated. The parameters here (‘a’ and ‘b’) need to be learned. The metric expressing the difference between the true output and the predicted one is a guide that points toward the perfect ‘a’ and ‘b’. You might already know that there are infinitely many different types of functions and that some are better at representing a problem than others. In other words, some black boxes perform better than others; it depends on the task to solve.
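To make the try/measure/learn loop concrete, here is a minimal Python sketch that learns the ‘a’ and ‘b’ of y = ax + b by gradient descent. The data and learning rate are made up for illustration:

# Try / measure / learn: fit y = a*x + b by gradient descent.
# The data points and learning rate below are made up for illustration.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]          # generated by y = 2x + 1

a, b = 0.0, 0.0                          # the parameters the box must learn
lr = 0.02                                # how big a correction each lesson makes

for step in range(2000):
    preds = [a * x + b for x in xs]                          # try
    loss = sum((p - y) ** 2 for p, y in zip(preds, ys))      # measure
    grad_a = sum(2 * (p - y) * x for p, y, x in zip(preds, ys, xs))
    grad_b = sum(2 * (p - y) for p, y in zip(preds, ys))
    a -= lr * grad_a                                          # learn
    b -= lr * grad_b

print(round(a, 2), round(b, 2))   # converges toward 2.0 and 1.0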
Photo by chuttersnap on Unsplash
It goes without saying that there is an infinite number of possible parameters, and this is why we can’t brute-force our way toward the right ones. It is critical to use the measurement metric previously described.
What I actually do
My job is to find the perfect box for a task: the architecture of the box that will allow me to solve the problem. From the format of the input, to the transformations applied to it, to the best metric for measuring the correctness of the black box. That’s what I do. I design black boxes. I’m a box architect.
Hey! That was my very first post on Medium. I’m open to any constructive comments, questions, insults. Let me know what you think.
| This is how you should explain AI in a bar | 0 | this-is-how-you-should-explain-ai-in-a-bar-113a70d716aa | 2018-08-20 | 2018-08-20 20:02:18 | https://medium.com/s/story/this-is-how-you-should-explain-ai-in-a-bar-113a70d716aa | false | 586 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Tristan Toupin | null | 131c07dc8b72 | tristantoupin | 0 | 5 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-11-17 | 2017-11-17 00:53:11 | 2017-11-17 | 2017-11-17 01:03:25 | 2 | false | en | 2017-11-17 | 2017-11-17 01:10:33 | 1 | 113b3427bb1 | 2.398428 | 0 | 0 | 0 | Those with eyes to see and ears to hear know that yet another paradigm shift is coming down the pipeline (and for the private sector, may… | 5 | Machine Learning & AI Marketing Trends for Insurance Providers to Watch For
Those with eyes to see and ears to hear know that yet another paradigm shift is coming down the pipeline (and for the private sector, may already be here in full force) —
A change that will transition companies from simply ‘doing’ digital to ‘being’ digital.
It will revolve around machine learning for a time, but the road will end in something entirely new, only guessed at decades ago in the most cutting-edge science fiction:
AI, or Artificial Intelligence.
But how can insurance providers use AIs?
Read on to learn more about the real-life applications for AI in insurance.
Picture Recognition
It may sound surprising, but tweets with attached pictures are 150% more likely to be retweeted.
Words are not enough anymore; what counts is the image.
If you are an insurance provider, this is important because images that are re-shared can illuminate what consumers want and how they behave, which in turn will shed light on brand loyalty and awareness.
Until recently, insurance providers had to comb through retweets and reposts themselves to see what people were sharing.
But with image recognition software, you can now compare and contrast shared images to image libraries —
Analyzing objects, scenes, and characteristics and recognizing when they’re popping up online.
This enables you as an insurance provider to know when your product is being talked about and how people are reacting to new marketing decisions.
Generally, you can know very quickly how people feel about what they’re doing.
By knowing this, you can advertise in a way that directly hits home with your consumer base.
Content Creation
Did you know that the Associated Press is already using AI for some of its content creation?
As a whole, AI isn’t quite ready to take the reins from insurance providers when it comes to high-quality content creation, but it’s not that far off.
When that time comes, expect micro-targeted, streamlined content pouring forth 24/7.
AIs may never fully realize the emotional aspect, but expect them to knock out the big pieces with copywriters and videographers stepping in to fine tune the message to a more human level.
Personalized Content
In the past, an insurance provider would begin a marketing campaign to an individual whenever they got word of a life event.
The process was manual, erratic, and unreliable.
Now with companies like Blueshift, the whole process is becoming streamlined and personalized, offering content that is hyper-specific to the individual and their needs.
Not too long from now AIs will be able to jump on insurance marketing opportunities long before everyone else.
The Future is Here
Strangely enough, insurance marketing companies were some of the very first to start pushing the AI envelope.
About ten years ago there were only 100 insurance marketing companies, but now there are over 10,000, and they’re all fighting tooth and nail -
Not only to get the latest technology, but to use it better than everyone else.
Join Insurance Marketing Pros on Facebook for FREE access to insights and resources designed to give insurance agents and marketers a competitive edge.
| Machine Learning & AI Marketing Trends for Insurance Providers to Watch For | 0 | machine-learning-ai-marketing-trends-for-insurance-providers-to-watch-for-113b3427bb1 | 2018-05-09 | 2018-05-09 11:05:25 | https://medium.com/s/story/machine-learning-ai-marketing-trends-for-insurance-providers-to-watch-for-113b3427bb1 | false | 534 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Insurance Marketing Pros | null | a8ac98c95604 | insurancemktg | 1 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 2837db5ddb5e | 2017-09-21 | 2017-09-21 20:09:48 | 2017-09-22 | 2017-09-22 15:04:10 | 1 | false | en | 2017-09-25 | 2017-09-25 18:23:22 | 4 | 113b37788a5d | 1.750943 | 10 | 1 | 0 | So, what do Perspective’s toxicity numbers mean? Perspective’s scores are an indication of probability, not severity. Higher numbers… | 5 |
What do Perspective’s scores mean?
So, what do Perspective’s toxicity numbers mean? Perspective’s scores are an indication of probability, not severity. Higher numbers represent a higher likelihood that the patterns in the text resemble patterns in comments that people have tagged as toxic. The number is not a score of “how toxic” a particular entry is. These scores are intended to let developers pick a threshold (e.g. most users looking to highlight comments to review choose a point around .9 or above) and ignore scores below that point. Scores in the middle range indicate that the model really doesn’t know if it is similar to toxic comments or not. A score around 0.5 means it might as well just flip a coin.
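In practice, that thresholding is a one-line filter on the developer’s side. The sketch below is a hypothetical Python illustration; the example comments and scores are invented, and this is not the Perspective client library:

# Hypothetical moderation-queue filter using Perspective-style probability
# scores. The scores below are invented; this is not the Perspective client.
REVIEW_THRESHOLD = 0.9   # a common choice for surfacing comments to review

scored_comments = [
    ("Thanks for the thoughtful reply!", 0.03),
    ("You smell bad and are stupid", 0.92),
    ("I am going to kill you", 0.97),
    ("Well, that's one way to look at it...", 0.48),  # model is unsure
]

for text, score in scored_comments:
    if score >= REVIEW_THRESHOLD:
        print(f"flag for review ({score:.2f}): {text}")
    # scores below the threshold are simply ignored, per the guidance above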
Scores above 0.9 contain both mildly toxic examples (“You smell bad and are stupid”) and threats (“I am going to kill you”). This is because in both cases the model is fairly sure that if you asked 10 people whether these comments represent “a rude, disrespectful, or unreasonable comment that may make you leave a discussion,” most would say yes. (You can find our public datasets and labelling methodology described in our paper with Wikimedia at WWW’17.) Although both of those comments might be considered toxic, they are clearly very different in terms of severity. That’s why we created a different experimental model that detects more severe toxicity and is less sensitive to milder toxicity.
Numbers are calibrated for stability
The scores returned by Perspective have also been calibrated to provide stability to developers as we update and retrain models. We don’t want developers to have to change their threshold every time we make improvements to the models, so we use a balanced dataset of half toxic and half non-toxic comments, and normalize the scores across models. You can read more of the technical details of how we do calibration on our Github score normalization page.
Everything depends on the model.
Perspective serves many models, each aimed at a different goal. The perspectiveapi.com demo uses the latest version of the toxicity model (currently we’re in alpha and regularly release new versions). There are also several other models (11 at the moment) that are available to developers using Perspective. You can find a list of them in our developer documentation. Some of our new models include “likely to be inflammatory,” “spam,” “likely to reject,” and “attack on another commenter.”
Authors: CJ Adams, Lucas Dixon
| What do Perspective’s scores mean? | 103 | what-do-perspectives-scores-mean-113b37788a5d | 2018-05-28 | 2018-05-28 07:12:45 | https://medium.com/s/story/what-do-perspectives-scores-mean-113b37788a5d | false | 411 | Exploring machine learning for better online conversation | null | null | null | The False Positive | null | the-false-positive | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Jigsaw | A unit at Alphabet that uses technology to make the world safer. | 760a6f928bed | JigsawTeam | 20,570 | 333 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-04-02 | 2018-04-02 10:03:23 | 2018-04-02 | 2018-04-02 11:43:45 | 4 | false | en | 2018-04-02 | 2018-04-02 11:46:55 | 0 | 113cb7d2c89e | 2.488679 | 3 | 0 | 0 | I was extremely elated on the morning of March 9, when I found out that I had been accepted as an attendee for the Mega Meetup organised on… | 4 | Mega Meetup on International Women’s Day
I was extremely elated on the morning of March 9, when I found out that I had been accepted as an attendee for the Mega Meetup organised on the occasion of International Women’s Day by all the major tech communities across Delhi — Women Who Code, PyDelhi, Women Techmakers — to name a few. And this feeling of elation escalated further when the chapter head of my community (Delhi Women in Machine Learning and Data Science — WiMLDS) asked me to represent and introduce our rather new community at the meetup. Here’s an account of this amazing experience of mine:
This Mega Meetup was hosted very generously by Adobe, Noida. The structure of the meetup was crisp— Three major talks, 10 lightening talks, 8 community introductions and vote of thanks, and all of these were interspersed richly with interactive networking sessions and loads of amazing food and coffee :’)
The session began with a talk on 'An introduction to JS' by the founder of JSLovers Delhi. It was a light talk that was extremely beginner friendly. Following it was another rich talk on DynamoDB. Having taken a formal course on Databases already, this talk was an immense learning experience because it detailed on database management on cloud which was completely awesome! Then there was an engaging networking session over lunch, where I interacted with a lot of industry professionals, academia, and other like-minded students, realising the invaluable 'community-bonding’.
After lunch was conducted my favourite session of the day by an alumnus of my own college. Having been thoroughly interested in Machine Learning and Artificial Intelligence, this talk kept me glued and engrossed.
After these 3 enriching talks were done, attendees were urged to come forward and share their own experiences by way of lightning talks. There were a variety of presentations — on wearable technologies, on International Women’s Hackathon submissions, one from a GDG Summit awardee, and so on. It was a complete treat!
Towards the end was an introduction and a formal vote of thanks by all the 8 organising communities. I was glad to have represented my community, Delhi WiMLDS, which is India’s first chapter of Women in Machine Learning and Data Science. I spoke about our motives, beliefs, goals and also our conference — ML-Unplugged. That’s me speaking ☺
Following this was coffee, and coke and snacks, and yes, a wonderful groupfie!
It was the very first meetup I ever attended and honestly, I have never learned so much in merely 8 hours of my time. I completely loved the experience and I would urge my fellows, especially women in tech to associate themselves with communities because there’s no limit to learning when we network with such amazingly diverse people, all united by the single most passion for technology and computing!
| Mega Meetup on International Women’s Day | 7 | mega-meetup-on-international-womens-day-113cb7d2c89e | 2018-06-17 | 2018-06-17 04:08:30 | https://medium.com/s/story/mega-meetup-on-international-womens-day-113cb7d2c89e | false | 474 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Rahmeen Habib | Google STEP intern '18 | RGSoC '18 @Probot | NSUT (formerly NSIT) class of 2020 | ccccfd7aceb9 | rahmeenhabib | 42 | 70 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 69a9f9036a32 | 2018-06-05 | 2018-06-05 15:02:52 | 2018-09-01 | 2018-09-01 11:18:21 | 1 | false | en | 2018-09-01 | 2018-09-01 11:20:02 | 7 | 113d1418420 | 4.192453 | 2 | 0 | 0 | Part Three of My Series on Architectural Form in the Age of Artificial Intelligence | 5 | 3# EMERGENT FEATURES: Architectural Geometry
Part Three of My Series on Architectural Form in the Age of Artificial Intelligence
The previously (Part 1 & 2 of this series) described principles of architectural form originate in the idea that meaning is determined contextually rather than according to intrinsic features. In the philosophy of science, this approach is discussed under the terms of “ontic” and “epistemic” structural realism. Structural realism, although it had already existed for years, initially became popular in a philosophical debate regarding quantum theoretic effects in physics. J. H. Poincaré mentions this idea in his 1905 publication “Science and Hypothesis”, which is quoted at the beginning of Part 1. There he posits that we cannot access the intrinsic meaning of objects directly. Hence, we are forced to substitute these objects with images, which can be interpreted as descriptions of the objects. The performance indicators and symbolic interpretations mentioned in the previous paragraph are examples of such descriptions. Poincaré continues to explain the core idea of structural realism, writing that “the true relations between real objects are the only reality we can attain, and the sole condition is that the same relations shall exist between these objects as between the images we are forced to put in their place”. This means that reality is only accessible by means of the topological order of descriptors. While this excludes all the currently-existing intrinsic approaches in computer-aided architectural design, it also presents a way out of this functionalist paradigm. By using statistical methods and emerging descriptors, semantic labels can be approximated. A famous implementation of this idea is the n-gram model in natural language processing, which allows for the automatic translation or correction of text phrases. Another example is in image recognition, where convolution kernels are currently the most successful technology in detecting and recognising objects in pictures. Both implementations focus on merely counting local neighbourhoods and their relations instead of trying to describe the object globally. Moving away from global functions to calculate meaningful descriptions and computing them statistically also defuses the functionalistic drift of computational methods; especially when the statistics are not based solely on single object compositions but instead on large comprehensive datasets of composed objects.
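To see how little machinery the “counting local neighbourhoods” idea requires, consider a minimal bigram count in Python; the sentence below is made up for illustration:

# Counting local neighbourhoods instead of describing objects globally:
# a minimal bigram (2-gram) count over a made-up sentence.
from collections import Counter

tokens = "form follows form follows function".split()
bigrams = Counter(zip(tokens, tokens[1:]))

for (left, right), count in bigrams.most_common():
    print(f"({left}, {right}): {count}")
# ('form', 'follows'): 2 -- the most frequent neighbourhood

Nothing in the count refers to what the tokens “mean”; the statistics of adjacency alone carry the structure, which is exactly the substitution Poincaré describes.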
Giphy
It would seem that J. H. Poincaré anticipated the effectiveness of modern machine learning applications over one hundred years ago. However, the use of object topologies and deriving semantics statistically is of course computationally very expensive. The technical requirements regarding data storage and processing power have only been met recently, long after Poincaré’s death. In this very short time, powerful GPUs and appropriate software frameworks have already made it possible to outperform all former intrinsic approaches in almost every domain. Over the last few years, this strategy has become increasingly popular in the field of computational geometry.
Theoretically, deriving meaning from context instead of trying to establish intrinsic semantics provides an alternative to the functionalist tradition within architecture. By focusing on frequent neighbourhoods, this strategy is potentially a way to holistically approximate features and culturally constructed meanings of form. Assuming that the assumptions of structural realism also hold for architecture, a topological approach to calculating emerging descriptors could provide a generalised approach for architecture retrieval. Thus, I propose a new paradigm to challenge the credo of functionalism: form follows form.
With this credo, the whole functionalist structure of parametric thinking in computer-aided architectural design can be challenged. It would be possible to create retrieval systems without the need to predefine performance simulations or catalogues of semantic tags. Thus, it could react to the conceptual drift within architecture and serve as a substitute for dynamic typology. Such a retrieval system could then also drive further research in the field of evidence-based design in architecture, which would necessarily challenge the heavily performance-oriented parametric design functionalism. Without the bottleneck of intrinsic performance optimisation, computer-aided architectural design and planning methods in general could move on from the functionalist tradition. However, in order to do so, it is necessary to find a way of processing architecture-specific geometry according to the paradigm of structural realism. This means that emerging clusters of frequent neighbourhoods have to be constructable when machine learning techniques are applied to the dataset. Additionally, the geometric data themselves have to be processed in line with the way experts of the architecture domain have cultivated its description. Thus, certain architecture-specific geometric properties have to be handled natively by such a method. Furthermore, the differences between general geometric objects and architecture-specific geometric objects have to be carefully considered.
This question positions the description of geometry as a central issue for the task of non-functionalistic architecture retrieval. The current problem with this approach is the focus on non-architecture-related geometry provided by other fields. This leads to descriptors that do not include the information necessary for subsequent architecture-specific processing. When this information is missing, architectonic phenomena, which mainly describe such information, cannot be correctly derived from the geometric data. This establishes a close relationship between potential semantics and underlying encoding or description. Even the best statistical methods cannot generate predictions on the basis of incomplete input spaces: this always leads to under-determination. Since the description is topologically oriented, the neighbourhood types of the described geometries have to be chosen in respect to an architectonic reception. Nevertheless, currently available methods do not address this issue, and focus on more general descriptions instead. In particular, the key audiences in the gaming and animation industries are mainly invested in descriptors optimised to deal with more organic geometries like animals and plants or more complex geometric composites like means of transport or weaponry. Architectural geometry is far less complex than these. Due to underlying production processes, these architecture specific geometries are composed out of fewer edges and larger planes. In addition to the smaller face-count of such polygon meshes, a significant number of the angles between these faces are multitudes of ninety degrees and many length ratios are in accordance to certain design rules. Taking these effects into account, hypothetically, the density of available datasets can be compressed and the statistical methods can be extended to work even with small datasets, where the number of objects is in the low ten thousands.
This article was originally published as part of my PhD thesis which is openly accessible here.
| 3# EMERGENT FEATURES: Architectural Geometry | 84 | 3-emergent-features-architectural-geometry-113d1418420 | 2018-09-01 | 2018-09-01 11:20:02 | https://medium.com/s/story/3-emergent-features-architectural-geometry-113d1418420 | false | 1,058 | This publication gives more background information about PropTech, Machine Learning and Artificial Intelligence in Architecture Analysis and Comparison | null | null | null | Architecture Analysis | archilyse | RESEARCH,PROPTECH,COMPARISON,ARCHITECTURE,ANALYSIS | archilyse | Machine Learning | machine-learning | Machine Learning | 51,320 | Matthias Standfest | null | a347a2face33 | standfest | 14 | 14 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | ad56e607a685 | 2017-12-20 | 2017-12-20 23:12:43 | 2017-12-20 | 2017-12-20 23:45:20 | 1 | false | en | 2017-12-21 | 2017-12-21 15:22:23 | 4 | 113e469b99c | 1.045283 | 1 | 0 | 0 | We’ve already discussed residual vs. fitted plots and normal QQ plots. Today we’ll move on to the next residual plot, the Scale-Location or… | 3 | Residual Plots Part 3— Scale-Location Plot
We’ve already discussed residual vs. fitted plots and normal QQ plots. Today we’ll move on to the next residual plot, the Scale-Location or Spread-Location plot.
The Scale-Location plot shows whether our residuals are spread equally along the predictor range, i.e. whether they are homoscedastic; it plots the square root of the absolute standardized residuals against the fitted values. We want the line on this plot to be horizontal, with randomly spread points around it.
Let’s return to our code from the previous posts and generate another simple example in R to demonstrate:
# linear model — distance as a function of speed from base R cars dataset
model <- glm(dist ~ speed, data = cars, family = gaussian)
# setup plot grid
par(mfrow=c(2,2))
# we’re going to use the generic R plotting function, which has a built-in scale-location plot
plot(model)
You’ll notice our line starts off horizontal at the beginning of our predictor range, slopes up around 25, and then flattens again around 45. The line goes up because the residuals for those predictor values are more spread out. Our data generally has uniform variance at the ends of our predictor range and is somewhat heteroscedastic (i.e. non-uniform variance) in the middle of the range.
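If the visual read is ambiguous, a formal check can complement the plot. One common option is the Breusch-Pagan test from the lmtest package (a third-party package, so this sketch assumes you have it installed); a small p-value suggests heteroscedasticity:
# install.packages("lmtest") # one-time install of the testing package
library(lmtest)
# refit with lm() and run the Breusch-Pagan test for non-constant variance
model_lm <- lm(dist ~ speed, data = cars)
bptest(model_lm)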
Next up is the residuals vs. leverage plot.
Additional resources if you’d like to explore further
http://data.library.virginia.edu/diagnostic-plots/ — more detailed overview
| Residual Plots Part 3— Scale-Location Plot | 1 | residual-plots-part-3-scale-location-plot-113e469b99c | 2018-05-30 | 2018-05-30 04:58:59 | https://medium.com/s/story/residual-plots-part-3-scale-location-plot-113e469b99c | false | 224 | Short-form summaries of essential aspects of data analysis and data science topics | null | null | null | Data Distilled | data-distilled | DATA SCIENCE,TOWARDS DATA SCIENCE,R,TABLEAU,DATA ANALYSIS | ianhagerman | Data Science | data-science | Data Science | 33,617 | Ian Hagerman | null | fbee0e0396b9 | ianhagerman | 3 | 5 | 20,181,104 | null | null | null | null | null | null |
|
0 | > kubectl get pods -n istio-system
NAME                       READY   STATUS    RESTARTS   AGE
istio-ca-797dfb66c5        1/1     Running   0          2m
istio-ingress-84f75844c4   1/1     Running   0          2m
istio-egress-29a16321d3    1/1     Running   0          2m
istio-mixer-9bf85fc68      3/3     Running   0          2m
istio-pilot-575679c565     2/2     Running   0          2m
grafana-182346ba12         2/2     Running   0          2m
prometheus-837521fe34      2/2     Running   0          2m
| 1 | b709e15bcbe8 | 2018-09-17 | 2018-09-17 18:07:58 | 2018-06-11 | 2018-06-11 01:22:49 | 16 | false | en | 2018-10-03 | 2018-10-03 18:23:24 | 22 | 113e65dbd694 | 16.116038 | 0 | 0 | 0 | Operating containerized infrastructure brings with it a new set of challenges. You need to instrument your containers, evaluate your API… | 5 | Comprehensive Container-Based Service Monitoring with Kubernetes and Istio
Operating containerized infrastructure brings with it a new set of challenges. You need to instrument your containers, evaluate your API endpoint performance, and identify bad actors within your infrastructure. The Istio service mesh enables instrumentation of APIs without code change and provides service latencies for free. But how do you make sense all that data? With math, that’s how.
Circonus is the first third-party adapter for Istio. In a previous post, we talked about the first Istio community adapter to monitor Istio based services. This post will expand on that. We’ll explain how to get a comprehensive understanding of your Kubernetes infrastructure, and how to get an Istio service mesh implementation for your container-based infrastructure.
Istio Overview
Istio is a service mesh for Kubernetes, which means that it takes care of all of the intercommunication and facilitation between services, much like network routing software does for TCP/IP traffic. In addition to Kubernetes, Istio can also interact with Docker and Consul based services. It’s similar to Linkerd, which has been around for a while.
Istio is an open source project developed by teams from Google, IBM, Cisco, and Lyft (the creators of Envoy). The project recently turned one year old, and Istio has found its way into a couple of production environments at scale. At the time of this post, the current version is 0.8.
So, how does Istio fit into the Kubernetes ecosystem? Kubernetes acts as the data plane and Istio acts as the control plane. Kubernetes carries the application traffic, handling container orchestration, deployment, and scaling. Istio routes the application traffic, handling policy enforcement, traffic management and load balancing. It also handles telemetry syndication such as metrics, logs, and tracing. Istio is the crossing guard and reporting piece of the container based infrastructure.
The diagram above shows the service mesh architecture. Istio uses an Envoy sidecar proxy for each service. Envoy proxies inbound requests to the Istio Mixer service via a gRPC call. Then Mixer applies rules for traffic management and syndicates request telemetry. Mixer is the brains of Istio. Operators can write YAML files that specify how Envoy should redirect traffic. They can also specify what telemetry to push to monitoring and observability systems. Rules can be applied as needed at run time without needing to restart any Istio components.
Istio supports a number of adapters to send data to a variety of monitoring tools, including Prometheus, Circonus, and StatsD. You can also enable both Zipkin and Jaeger tracing, and you can generate graphs to visualize the services involved.
Istio is easy to deploy. Way back when, around 7 to 8 months ago, you had to install Istio onto a Kubernetes cluster with a series of kubectl commands. And you still can today. But now you can just hop into Google Cloud platform, and deploy an Istio enabled Kubernetes cluster with a few clicks, including monitoring, tracing, and a sample application. You can get up and running very quickly, and then use the istioctl command to start having fun.
Another benefit is that we can gather data from services without requiring developers to instrument their services to provide that data. This has a multitude of benefits: it reduces maintenance, it removes points of failure in the code, and it provides vendor-agnostic interfaces, which reduces the chance of vendor lock-in.
With Istio, we can deploy different versions of individual services and weight the traffic between them. Istio makes use of a number of different pods to operate, as shown here:
Istio is not exactly lightweight. The power and flexibility of Istio come at the cost of some operational overhead. However, if you have more than a few microservices in your application, your application containers will soon outnumber the system-provisioned containers.
Service Level Objectives
This brief overview of Service Level Objectives will set the stage for how we should measure our service health. The concept of Service Level Agreements (SLAs) has been around for at least a decade. But just recently, the amount of online content related to Service Level Objectives (SLOs) and Service Level Indicators (SLIs) has been increasing rapidly.
In addition to the well-known Google SRE book, two new books that talk about SLOs are being published soon. The Site Reliability Workbook has a dedicated chapter on SLOs, and Seeking SRE has a chapter on defining SLO goals by Circonus founder and CEO Theo Schlossnagle. We also recommend watching the YouTube video “SLIs, SLOs, SLAs, oh my!” from Seth Vargo and Liz Fong-Jones to get an in-depth understanding of the difference between SLIs, SLOs, and SLAs.
To summarize: SLIs drive SLOs, which inform SLAs.
A Service Level Indicator (SLI) is a metric derived measure of health for a service. For example, I could have an SLI that says my 95th percentile latency of homepage requests over the last 5 minutes should be less than 300 milliseconds.
A Service Level Objective (SLO) is a goal or target for an SLI. We take an SLI, and extend its scope to quantify how we expect our service to perform over a strategic time interval. Using the SLI from the previous example, we could say that we want to meet the criteria set by that SLI for 99.9% of a trailing year window.
A Service Level Agreement (SLA) is an agreement between a business and a customer, defining the consequences for failing to meet an SLO. Generally, the SLOs which your SLA is based upon will be more relaxed than your internal SLOs, because we want our internal facing targets to be more strict than our external facing targets.
RED Dashboard
What combinations of SLIs are best for quantifying both host and service health? Over the past several years, there have been a number of emerging standards. The top standards are the USE method, the RED method, and the “four golden signals” discussed in the Google SRE book.
Brendan Gregg introduced the USE method, which seeks to quantify the health of a system host based on utilization, saturation, and error metrics. For something like a CPU, we can use common utilization metrics for user, system, and idle percentages. We can use load average and run queue depth for saturation. The Linux perf profiler is a good tool for measuring CPU error events.
Tom Wilkie introduced the RED method a few years ago. With RED, we monitor request rate, request errors, and request duration. The Google SRE book talks about using latency, traffic, errors, and saturation metrics. These “four golden signals” are targeted at service health; the approach is similar to the RED method, but extends it with saturation. In practice, it can be difficult to quantify service saturation.
So, how are we monitoring the containers? Containers are short lived entities. Monitoring them directly to discern our service health presents a number of complex problems, such as the high cardinality issue. It is easier and more effective to monitor the service outputs of those containers in aggregate. We don’t care if one container is misbehaving if the service is healthy. Chances are that our orchestration framework will reap that container anyway and replace it with a new one.
Let’s consider how best to integrate SLIs from Istio as part of a RED dashboard. To compose our RED dashboard, let’s look at what telemetry is provided by Istio:
Request Count by Response Code
Request Duration
Request Size
Response Size
Connection Received Bytes
Connection Sent Bytes
Connection Duration
Template Based MetaData (Metric Tags)
Istio provides several metrics about the requests it receives, the latency to generate a response, and connection level data. Note the first two items from the list above; we’ll want to include them in our RED dashboard.
Istio also gives us the ability to add metric tags, which it calls dimensions, so we can break down the telemetry by host, cluster, etc. We can get the rate in requests per second by taking the first derivative of the request count. We can get the error rate by taking the derivative of the count of unsuccessful requests. Istio also provides us with the latency of each request, so we can record how long each service request took to complete.
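To make “the first derivative of the request count” concrete, here is a minimal Python sketch (illustrative only; metric names and collection mechanics vary by Istio version and adapter) that turns two successive cumulative counter snapshots into rate and error-rate SLIs:
def red_rates(prev, curr, interval_s):
    """Derive RED rate SLIs from two cumulative request-count snapshots.
    prev, curr: dicts mapping response code -> cumulative request count,
                taken interval_s seconds apart.
    """
    delta = {code: curr.get(code, 0) - prev.get(code, 0) for code in curr}
    total = sum(delta.values())
    errors = sum(v for code, v in delta.items() if 500 <= code <= 599)
    return {
        "requests_per_s": total / interval_s,
        "errors_per_s": errors / interval_s,
        "error_ratio": errors / total if total else 0.0,
    }
# Two telemetry snapshots taken 60 seconds apart:
prev = {200: 120000, 404: 150, 503: 12}
curr = {200: 126000, 404: 165, 503: 42}
print(red_rates(prev, curr, 60))
# {'requests_per_s': 100.75, 'errors_per_s': 0.5, 'error_ratio': 0.00496...}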
In addition, Istio provides us with a Grafana dashboard out of the box that contains the pieces we want:
The components we want are circled in red in the screenshot above. We have the request rate in operations per second in the upper left, the number of failed requests per second in the upper right, and a graph of response time in the bottom. There are several other indicators on this graph, but let’s take a closer look at the ones we’ve circled:
The above screenshot shows the rate component of the dashboard. This is pretty straight forward. We count the number of requests which returned a 200 response code and graph the rate over time.
The Istio dashboard does something similar for responses that return a 5xx error code. In the above screenshot, you can see how it breaks down the errors by either the ingress controller, or by errors from the application product page itself.
This screenshot shows the request duration graph. This graph is the most informative about the health of our service. This data is provided by a Prometheus monitoring system, so we see request time percentiles graphed here, including the median, 90th, 95th, and 99th percentiles.
These percentiles give us some overall indication of how the service is performing. However, there are a number of deficiencies with this approach that are worth examining. During times of low activity, these percentiles can skew wildly because of limited numbers of samples. This could mislead you about the system performance in those situations. Let’s take a look at the other issues that can arise with this approach:
Duration Problems:
The percentiles are aggregated metrics over fixed time windows.
The percentiles cannot be re-aggregated for cluster health.
The percentiles cannot be averaged (and this is a common mistake).
This method stores aggregates as outputs, not inputs.
It is difficult to measure cluster SLIs with this method.
Percentiles often provide deeper insight than averages as they express the range of values with multiple data points instead of just one. But like averages, percentiles are an aggregated metric. They are calculated over a fixed time window for a fixed data set. If we calculate a duration percentile for one cluster member, we cannot merge that with another one to get an aggregate performance metric for the whole cluster.
It is a common misconception that percentiles can be averaged; they cannot, except in rare cases where the distributions that generated them are nearly identical. If you only have the percentile, and not the source data, you cannot know whether that is the case. It is a chicken-and-egg problem.
This also means that if you are measuring percentile-based performance only for individual cluster members, you cannot set service level indicators for an entire service, due to this lack of mergeability.
Our ability to set meaningful SLIs (and as a result, meaningful SLOs) is limited here, due to only having four latency data points over fixed time windows. So when you are working with percentile-based duration metrics, you have to ask yourself if your SLIs are really good SLIs. We can do better by using math to determine the SLIs that we need to give us a comprehensive view of our service’s performance and health.
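A small numeric experiment makes the “percentiles cannot be averaged” point concrete. Suppose two cluster members each report their own 90th percentile; averaging those two numbers gives a very different answer than computing the 90th percentile over the pooled samples, which is the only correct service-wide figure. This sketch uses synthetic latencies:
import numpy as np
rng = np.random.default_rng(7)
fast = rng.exponential(scale=100, size=10_000)   # node A latencies (ms)
slow = rng.exponential(scale=400, size=1_000)    # node B: fewer, slower
avg_of_p90s = (np.percentile(fast, 90) + np.percentile(slow, 90)) / 2
pooled_p90 = np.percentile(np.concatenate([fast, slow]), 90)
print(f"average of per-node p90s: {avg_of_p90s:.0f} ms")  # roughly 570 ms
print(f"p90 of pooled samples:    {pooled_p90:.0f} ms")   # roughly 280 ms
Here the averaged figure overstates the true service-wide 90th percentile by roughly a factor of two; with other traffic mixes it can just as easily understate it.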
Histogram Telemetry
Above is a visualization of latency data for a service in microseconds using a histogram. The number of samples is on the Y-axis, and the sample value, in this case microsecond latency, is on the X-axis. This is the open source histogram we developed at Circonus. (See the open source implementations in both C and Golang, or read more about histograms here.) There are a few other open source histogram implementations out there, such as Ted Dunning’s t-digest histogram and the HDR histogram.
The Envoy project recently adopted the C implementation of Circonus’s log linear histogram library. This allows Envoy data to be collected as distributions. They found a very minor bug in the implementation, which Circonus was quite happy to fix. That’s the beauty of open source: the more eyes on the code, the better it gets over time.
Histograms are mergeable. Any two or more histograms can be merged as long as the bin boundaries are the same. That means that we can take this distribution and combine it with other distributions. Mergeable metrics are great for monitoring and observability. They allow us to combine outputs from similar sources, such as service members, and get aggregate service metrics.
As indicated in the image above, this log linear histogram contains 90 bins for each power of 10. You can see 90 bins between 100,000 and 1M. At each power of 10, the bin size increases by a factor of 10. This allows us to record a wide range of values with high relative accuracy without needing to know the data distribution ahead of time. Let’s see what this looks like when we overlay some percentiles:
Now you can see where we have the average, and the 50th percentile (also known as the median), and the 90th percentile. The 90th percentile is the value at which 90% of the samples are below that value.
Consider our example SLI from earlier. With latency data displayed in this format, we can easily calculate that SLI for a service by merging histograms together to get a 5 minute view of data, and then calculating the 95th percentile value for that distribution. If it is less than 300 milliseconds, we met our target.
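The Circonus libraries implement this in C and Go; purely to show the mechanics, here is a toy log-linear histogram in Python (a sketch, not the production implementation). Each pod records into its own histogram, the histograms merge losslessly because their bin boundaries match, and the service-wide percentile is computed from the merged result:
import math
from collections import Counter
class LogLinearHist:
    """Toy log-linear histogram, keyed by each bin's lower bound."""
    def __init__(self):
        self.bins = Counter()
    def record(self, value):
        exp = math.floor(math.log10(value))       # value must be > 0
        mantissa = value / 10 ** exp              # in [1, 10)
        lo = math.floor(mantissa * 10) / 10 * 10 ** exp
        self.bins[round(lo, 9)] += 1              # round to tame float noise
    def merge(self, other):
        self.bins.update(other.bins)              # mergeable by construction
    def quantile(self, q):
        total = sum(self.bins.values())
        seen = 0
        for lo in sorted(self.bins):
            seen += self.bins[lo]
            if seen >= q * total:
                return lo                         # the bin's lower bound
# Each service pod keeps its own histogram of request latencies (ms)...
a, b = LogLinearHist(), LogLinearHist()
for ms in (12, 14, 18, 25, 31):
    a.record(ms)
for ms in (220, 380, 460, 900):
    b.record(ms)
# ...and the cluster-wide SLI is evaluated on the merged distribution.
a.merge(b)
print(a.quantile(0.95))  # 95th-percentile latency bin across both pods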
The RED dashboard duration graph from our screenshot above has four percentiles, the 50th, 90th, 95th, and 99th. We could overlay those percentiles on this distribution as well. Even without data, we can see the rough shape of what the request distribution might look like, but that would be making a lot of assumptions. To see just how misleading those assumptions based on just a few percentiles can be, let’s look at a distribution with additional modes:
This histogram shows a distribution with two distinct modes. The leftmost mode could be fast responses due to serving from a cache, and the right mode from serving from disk. Just measuring latency using four percentiles would make it nearly impossible to discern a distribution like this. This gives us a sense of the complexity that percentiles can mask. Consider a distribution with more than two modes:
This distribution has at least four visible modes. If we do the math on the full distribution, we will find 20+ modes here. How many percentiles would you need to record to approximate a latency distribution like the one above? What about a distribution like the one below?
Complex systems composed of many services will generate latency distributions that cannot be accurately represented by a handful of percentiles. You have to record the entire latency distribution to be able to fully represent it. This is one reason it is preferable to store the complete distributions of the data in histograms and calculate percentiles as needed, rather than just storing a few percentiles.
This type of histogram visualization shows a distribution over a fixed time window. We can store multiple distributions to get a sense of how it changes over time, as shown below:
This is a heatmap, which represents a set of histograms over time. Imagine each column in this heatmap as a separate bar chart viewed from above, with color used to indicate the height of each bin. This is a Grafana visualization of the response latency from a cluster of 10 load balancers. It gives us deep insight into the system behavior of the entire cluster over a week; there are over 1 million data samples here. The median here centers around 500 microseconds, shown in the red-colored bands.
Above is another type of heatmap. Here, saturation is used to indicate the “height” of each bin (the darker tiles are more “full”). Also, this time we’ve overlayed percentile calculations over time on top of the heatmap. Percentiles are robust metrics and very useful, but not by themselves. We can see here how the 90+ percentiles increase as the latency distribution shifts upwards.
Let’s take these distribution based duration maps and see if we can generate something more informative than the sample Istio dashboard:
The above screenshot is a RED dashboard revised to show distribution based latency data. In the lower left, we show a histogram of latencies over a fixed time window. To the right of it, we use a heat map to break that distribution down into smaller time windows. With this layout of RED dashboard, we can get a complete view of how our service is behaving with only a few panels of information. This particular dashboard was implemented using Grafana served from an IRONdb time series database which stores the latency data natively as log linear histograms.
We can extend this RED dashboard a bit further and overlay our SLIs onto the graphs as well:
For the rate panel, our SLI might be to maintain a minimum level of requests per second. For the error panel, our SLI might be to stay under a certain number of errors per second. And as we have previously examined duration SLIs, we might want the 99th percentile of our entire service, which is composed of several pods, to stay under a certain latency over a fixed window. Using Istio telemetry stored as histograms enables us to set these meaningful service-wide SLIs. Now we have a lot more to work with, and we’re better able to interrogate our data (see below).
Asking the Right Questions
So now that we’ve put the pieces together and have seen how to use Istio to get meaningful data from our services, let’s see what kinds of questions we can answer with it.
We all love being able to solve technical problems, but not everyone has that same focus. The folks on the business side want to know how the business is doing, so you need to be able to answer business-centric questions. Let’s take the toolset we’ve assembled and align its capabilities with a couple of questions that the business asks its SREs:
Example Questions:
How many users got angry on the Tuesday slowdown after the big marketing promotion?
Are we over-provisioned or under-provisioned on our purchasing checkout service?
Consider the first example. Everyone has been through a big slowdown. Let’s say Marketing did a big push, traffic went up, performance speed went down, and users complained that the site got slow. How can we quantify the extent of how slow it was for everyone? How many users got angry? Let’s say that Marketing wants to know this so that they can send out a 10% discount email to the users affected and also because they want to avoid a recurrence of the same problem. Let’s craft an SLI and assume that users noticed the slowdown and got angry if requests took more than 500 milliseconds. How can we calculate how many users got angry with this SLI of 500 ms?
First, we need to already be recording the request latencies as a distribution. Then we can plot them as a heatmap. We can use the distribution data to calculate the percentage of requests that exceeded our 500ms SLI by using inverse percentiles. We take that answer, multiply it by the total number of requests in that time window, and integrate over time. Then we can plot the result overlayed on the heatmap:
In this screenshot, we’ve circled the part of the heatmap where the slowdown occurred. The increased latency distribution is fairly indicative of a slowdown. The line on the graph indicates the total number of requests affected over time.
In this example, we managed to miss our SLI for 4 million requests. Whoops. What isn’t obvious are the two additional slowdowns on the right because they are smaller in magnitude. Each of those cost us an additional 2 million SLI violations. Ouch.
We can do these kinds of mathematical analyses because we are storing data as distributions, not aggregations like percentiles.
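The “inverse percentile” step itself is easy to sketch (again illustrative, using plain bin counts): take the fraction of the latency distribution above the SLI threshold, scale it by the request volume in that window, and sum across windows to integrate over time:
def sli_violations(histogram, threshold_ms, total_requests):
    """Estimate how many requests in a window were slower than the SLI.
    histogram: dict mapping bin lower bound (ms) -> sample count."""
    total = sum(histogram.values())
    above = sum(c for lo, c in histogram.items() if lo >= threshold_ms)
    return round(total_requests * above / total)
# One 5-minute window: latency bins (ms) and the samples they captured.
window = {50: 4000, 100: 2500, 250: 900, 500: 450, 1000: 150}
print(sli_violations(window, threshold_ms=500, total_requests=2_000_000))
# -> 150000 requests blew the 500 ms SLI in this window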
Let’s consider another common question. Is my service under provisioned, or over provisioned?
The answer is often “it depends.” Loads vary based on the time of day and the day of week, in addition to varying because of special events. That’s before we even consider how the system behaves under load. Let’s put some math to work and use latency bands to visualize how our system can perform:
The visualization above shows the latency distribution broken down by latency bands over time. The bands here show the number of requests that took under 25ms, between 25 and 100ms, 100–250ms, 250–1000ms, and over 1000ms. The colors range from fast requests, shown in green, to slow requests, shown in red.
What does this visualization tell us? It shows that requests to our service started off very quickly, then the percentage of fast requests dropped off after a few minutes, and the percentage of slow requests increased after about 10 minutes. This pattern repeated itself over two traffic sessions. What does that tell us about provisioning? It suggests that the service was initially over-provisioned, but became under-provisioned over the course of 10–20 minutes. Sounds like a good candidate for auto-scaling.
We can also add this type of visualization to our RED dashboard. This type of data is excellent for business stakeholders. And it doesn’t require a lot of technical knowledge investment to understand the impact on the business.
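Preparing the band data from raw latencies is also straightforward; here is a sketch, with band edges chosen to match the ones described above:
import bisect
EDGES = [25, 100, 250, 1000]  # band boundaries in ms
LABELS = ["<25ms", "25-100ms", "100-250ms", "250-1000ms", ">1000ms"]
def latency_bands(samples_ms):
    """Count one time window's latency samples into the dashboard bands."""
    counts = [0] * len(LABELS)
    for s in samples_ms:
        counts[bisect.bisect_right(EDGES, s)] += 1
    return dict(zip(LABELS, counts))
print(latency_bands([8, 19, 42, 130, 310, 650, 1200]))
# {'<25ms': 2, '25-100ms': 1, '100-250ms': 1, '250-1000ms': 2, '>1000ms': 1}
Run once per time window, these counts stack directly into the band chart; computing them from stored distributions instead of raw samples works the same way, just using bin counts.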
Conclusion
We should monitor services, not containers. Services are long-lived entities; containers are not. Your users don’t care how your containers are performing; they care about how your services are performing.
You should record distributions instead of aggregates, and then generate your aggregates from those distributions. Aggregates are very valuable sources of information, but they are unmergeable, which makes them poorly suited to further statistical analysis.
Istio gives you a lot of this for free: you don’t have to instrument your code, and you don’t need to build a high-quality application framework from scratch.
Use math to ask and answer questions about your services that are important to the business. That’s what this is all about, right? When we can make systems reliable by answering questions that the business values, we achieve the goals of the organization.
Subscribe to our newsletter to keep up with the latest innovations from Circonus.
Originally published at www.circonus.com on June 11, 2018.
| Comprehensive Container-Based Service Monitoring with Kubernetes and Istio | 0 | comprehensive-container-based-service-monitoring-with-kubernetes-and-istio-113e65dbd694 | 2018-10-03 | 2018-10-03 18:23:24 | https://medium.com/s/story/comprehensive-container-based-service-monitoring-with-kubernetes-and-istio-113e65dbd694 | false | 3,860 | Smarter Monitoring for Smarter Engineers | null | circonus | null | Circonus | circonus | null | circonus | Docker | docker | Docker | 13,343 | Circonus | null | edd9a28e8c78 | circonus | 1 | 1 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-09-19 | 2018-09-19 03:21:07 | 2018-02-04 | 2018-02-04 22:53:35 | 1 | false | en | 2018-09-19 | 2018-09-19 03:23:08 | 7 | 11410443c8ff | 3.550943 | 0 | 0 | 0 | Google, Amazon, and Apple are contending in possibly one of the most significant technology wars to date. And the stakes are high. The… | 4 | Why digital assistants are the linchpin for the future of tech
Google, Amazon, and Apple are contending in possibly one of the most significant technology wars to date, and the stakes are high. The digital assistant — an application program that understands natural language voice commands and completes tasks — is the glue that brings together computing, services, and more across disparate operating systems. Digital assistants can make our personal and professional lives easier and help streamline digital services and devices. Hence, the digital assistant is both strategic and important for the aforementioned technology giants.
The big question, of course, is ‘which company’s digital assistant will dominate in the years to come?’ To answer this, we need to delve a little deeper into trends, capabilities, and other pros and cons of digital assistant services.
The digital assistant is the glue that brings together computing, services and more across disparate operating systems. Click To Tweet
Preface points
Microsoft’s digital assistant Cortana is available on Windows 10 devices and various mobile operating systems but is notably absent in the smart speaker market.
Apple’s HomePod smart speaker will launch in the coming days. As such, there is limited data to compare.
The brains of the digital assistant have a different name to the smart speaker itself (see below).
Company   Assistant Name      Smart Speaker Name
Google    Google Assistant    Google Home
Amazon    Alexa               Amazon Echo
Apple     Siri                Apple HomePod
Current Market Share
Note — These charts do not include the Apple HomePod as it hadn’t launched at the time of publishing.
This statistic displays the market share of virtual/digital assistants in 2017, with a forecast for 2020, by company/product. Google Assistant had a share of about 25 percent of the intelligent digital assistant market in 2017. That share is forecast to increase to 43 percent in three years’ time, due to the expansion of Google Assistant to lower price tiers.
This statistic displays the comparison between Amazon Alexa and Google Assistant in terms of their ability to answer questions in different categories. According to a survey done in 2017, Google Assistant outperformed Amazon Alexa on all categories by large margins.
Current capabilities
Integrations
Aside from accuracy, reliability, and ease of use, perhaps the most critical component for digital assistants is the ability to intelligently integrate with third-party services (those outside its own native ecosystem).
Actions on Google allows third-party developers to easily integrate their apps and services with the brains of Google Assistant. Skills is the Amazon equivalent. It’ll be interesting to see what Apple offers for HomePod as Apple tends to be quite protective, and limits how much developers can control its hardware and software.
Cost
Pricing matters. Google knows this, as does Amazon, but the cost of Apple’s HomePod is steep at $349 (especially considering how late it is to market, and also how limited it is likely to be). But as we know all too well, Apple has a legion of loyal fans, many of whom are prepared to pay premium prices no matter what. Google and Amazon, on the other hand, are more willing to sacrifice their margins if it means attracting more users.
Competitive advantages
Google
Google owns search, which means (to a degree) it controls the Internet. It also has an extensive portfolio of online services that are used by billions of people every day — Gmail, Calendar, Maps, Photos, YouTube, Drive, and plenty more. The company has access to huge amounts of data, which it uses to make decisions and guide the direction of the business. Google also owns the Android OS, giving it a massive advantage in the mobile space.
So in summary, Google has a lot of leverage in the digital assistant market — perhaps the most out of all the players.
The Google Home and Google Home Mini.
Amazon
Amazon is the leader in e-commerce, generating billions of dollars from online shoppers around the world. It also dominates the e-book market with its Kindle bookstore. In addition, Amazon is the king of cloud computing services with AWS (Amazon Web Services). The company is making headway into digital streaming and entertainment as well.
Amazon Echo.
Watch this hilarious Alexa Super Bowl commercial.
Apple
Apple is a behemoth in both hardware and software, and its users are (mostly) very loyal to the brand (regardless of pricing). Its hardline stance on maintaining a somewhat closed ecosystem provides the illusion that its products are more secure than its competitors, which greatly appeals to the business market. Apple’s products, I’m told, also work together seamlessly, providing the kind of user experience that consumers crave.
The Apple HomePod.
Google has a lot of leverage in the digital assistant market — perhaps the most out of all the players. Click To Tweet
Based on the above information (and admittedly some bias), I believe that Google will come out on top in the digital assistant race. Why? Because it has so many bases covered. It has the greatest reach and is leading the AI charge by miles. Amazon and Apple definitely have their own strengths, but I don’t believe — at least at this point — that they’re enough to put them in the lead.
Do you agree? Please share your comments.
Originally published at jyancey.me on February 4, 2018.
| Why digital assistants are the linchpin for the future of tech | 0 | why-digital-assistants-are-the-linchpin-for-the-future-of-tech-11410443c8ff | 2018-09-19 | 2018-09-19 03:23:08 | https://medium.com/s/story/why-digital-assistants-are-the-linchpin-for-the-future-of-tech-11410443c8ff | false | 888 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Jeremy Yancey | Blogger | Author | Writer | Tech Geek | Learn More at jyancey.me | d0261ef230d7 | jeremyyancey | 202 | 400 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 700bae695f4b | 2017-11-08 | 2017-11-08 15:29:07 | 2017-11-14 | 2017-11-14 02:37:14 | 2 | false | en | 2017-11-14 | 2017-11-14 02:37:14 | 2 | 1142917e1525 | 0.930503 | 5 | 0 | 0 | With DSX’s enhanced invitation feature, you can now add team members more easily! People who do not have existing DSX accounts will be sent… | 4 | Supercharge your team’s collaboration with DSX invitations
With DSX’s enhanced invitation feature, you can now add team members more easily! People who do not have existing DSX accounts will be sent an email invitation prompting them to register.
Once they sign in and onboard to DSX, they will automatically be able to access the project you’ve invited them to work on (which includes the data assets, notebooks, models, and more inside)!
Invite your team
To get started, just go to your Collaborators tab in your DSX project and select ‘+ Add Collaborators’
Select different permission levels for your invitees and clearly see who already has a DSX account (or not)
Invited the wrong person by accident? Having second thoughts about that invitation? You can always see the status of an invitation in your Collaborators tab (pending or not) and cancel or resend it.
Stay flexible with your invitations!
Try out this refreshed invitation feature in your DSX account or sign up for an account today!
| Supercharge your team’s collaboration with DSX invitations | 12 | supercharge-your-teams-collaboration-with-dsx-invitations-1142917e1525 | 2018-02-16 | 2018-02-16 17:58:13 | https://medium.com/s/story/supercharge-your-teams-collaboration-with-dsx-invitations-1142917e1525 | false | 145 | Build smarter applications and quickly visualize, share, and gain insights | null | null | null | IBM Watson Data | ibm-data-science-experience | IBM,DATA SCIENCE,MACHINE LEARNING,DEEP LEARNING | IBMDataScience | Dsx | dsx | Dsx | 26 | Cecelia Shao | Product Lead @ Comet.ml where we’re building the Github for machine learning | 370d0382c596 | ceceliashao | 474 | 1,038 | 20,181,104 | null | null | null | null | null | null |
|
# clone the team project repository to your local machine
git clone https://github.com/haoapple/git-busters.git
cd git-busters
# create your branch (no spaces in branch names) and switch to it
git checkout -b {your-branch-name}
# pull the latest work from the remote master branch
git pull origin master
# stage, commit, and push your changes to your remote branch
git add .
git commit -m "describe your changes briefly here"
git push origin {your-branch-name}
| 6 | null | 2018-06-25 | 2018-06-25 04:13:35 | 2018-06-25 | 2018-06-25 12:01:01 | 5 | true | en | 2018-06-25 | 2018-06-25 12:01:01 | 4 | 1142cbe607be | 3.022013 | 4 | 0 | 0 | This is the second part of the series. Please read the previous article, “Git-busters | How to set your team repository right in GitHub “ | 4 | Git-busters | How to make your team repository errorless in GitHub
Photo by rawpixel on Unsplash
This is the second part of the series. Please read the previous article, “Git-busters | How to set your team repository right in GitHub “
As I emphasized in the previous article, setting up GitHub is very easy and intuitive. You will almost never face an error message using GitHub for solo work. However, frustration with git can mount rapidly once you join a team GitHub repository.
Always make it a habit to pull, commit, and push your work and to stay in sync with your team’s updates.
Part 1: how to set a team repository right for the team leader
Part 2: how team members join a collaborative workspace and navigate it
Part 3: how to handle GitHub error messages and fix it in your team repo
Steps for the team member, collaborators
Step 1) Join your team repo by clone it directly
Click the green button on your right side
Step 2) Each members will create their own branch in a local machine.
When you have successfully created your branch, you will see “On branch {your-branch-name}” in the first line when you type git status
Step 3) pull down the most updated work from the remote
Before you stack your own code onto the shared files, make sure to run git pull.
Again, get used to checking for and pulling down the latest contents before you start your work. Otherwise, you will have extra work later fixing and merging conflicting code.
Step 4) Make your changes and push it back to the remote
This is an important part: make sure to push your changes back to the remote branch once you finish your work.
Step 5) Ask for “pull request” to merge your changes into the master branch
On your team repository’s GitHub page, you will find the ‘Pull requests’ menu, from which you can open a pull request.
Select the pull request menu
Click the green pull request button
or in the branch menu, you will see ‘new pull request’ button next to your branch name. (captured from another repository of my team project)
In the branch menu, ‘new pull request’ button is available for you.
When you make ‘pull request’, make sure to leave your comments precisely what you changed so that the reviewers and other collaborators can capture your thought process easily.
Step 6) Once your “pull request” confirmed and merged by others, you will repeat the step 3 to 5 for your future collaborative work.
What is pull request?
“Pull requests let you tell others about changes you’ve pushed to a repository on GitHub. Once a pull request is opened, you can discuss and review the potential changes with collaborators and add follow-up commits before the changes are merged into the repository.”
- About pull requests, the official GitHub document
In my own definition, a pull request is a discussion panel review platform and procedure to talk between the teams whether to accept the changes or not.
Additional source: Pull Request Tutorial
In the series of the two articles, I shared how to set the team repository and invite members to the collaborative workspace which will be working under the error-free. In the next article, I would like to share how to handle GitHub error messages and fix your problems. Let’s git-busters!
| Git-busters | How to make your team repository errorless in GitHub | 143 | git-busters-how-to-make-your-team-repository-errorless-in-github-1142cbe607be | 2018-06-25 | 2018-06-25 13:58:52 | https://medium.com/s/story/git-busters-how-to-make-your-team-repository-errorless-in-github-1142cbe607be | false | 580 | null | null | null | null | null | null | null | null | null | Github | github | Github | 7,846 | Kihoon Sohn | Data Scientist | Patient Listener | 8e4c1dbe157c | kihoon.sohn | 14 | 14 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 74b541a0e929 | 2018-09-27 | 2018-09-27 03:34:34 | 2018-09-27 | 2018-09-27 19:48:36 | 3 | false | en | 2018-10-03 | 2018-10-03 15:48:04 | 0 | 1143ed51ca30 | 3.757547 | 9 | 0 | 0 | Is there a cryptocurrency that will change the game? | 5 | Issues of Cryptocurrencies in the Middle East
Is there a cryptocurrency that will change the game?
Blockchain and Cryptocurrency are constantly reaching a new level of popularity and acceptance in various parts of the world, every day. Many industries are constantly discussing and exploring the application of these two technologically advanced and futuristic financial trends.
What does it mean? Simple, many businesses now want to incorporate Blockchain and Cryptocurrency in their financial business operations. Some of them are already doing it and examples include some popular brand names like Amazon and eBay.
Therefore, it becomes important for businesses and individuals to understand both these concepts.
What is Blockchain?
It is actually a secure digital ledger used to keep all types of records related to cryptocurrency transactions. This is the best possible definition of Blockchain for a layman to understand. It is highly secure compared to the traditional ledgers used in our banking system, mainly because every single transaction has to be approved by thousands of nodes and all relevant records are available in all blocks of the chain. That is why it is called a chain of blocks: a Blockchain. It comes in various types, which you don’t need to explore unless you are a business owner who actually needs them.
Let’s now understand Cryptocurrency:
In simple words, a Cryptocurrency is a digital form of currency. It is cryptographically secure. A Cryptocurrency is always based on the type of Blockchain it is developed using.
Coming to the point, plenty of cryptocurrencies are doing the rounds in the market. Some popular ones include Bitcoin, Ripple, Dogecoin, and Litecoin. Bitcoin was the first cryptocurrency, and it changed the way financial operations and transactions are carried out nowadays. The rest of the cryptocurrencies came into existence well after Bitcoin.
Now which Cryptocurrency is changing the game?
Bitcoin has been at its peak for a long time, and other cryptocurrencies are fast reaching their peaks. So they no longer have what it takes to be the game changer behind the next technology-based financial revolution. That’s right!
But hope springs eternal! That does not mean there is no solution to this problem! I am saying this because there is now a cryptocurrency that has all the ingredients to be the next game changer and technological financial revolution. This is CryptoRiyal.
What is CryptoRiyal?
It is a Cryptocurrency that is based on SmartRiyal. Now, SmartRiyal? In layman’s terms, it is a cryptographically secure Blockchain based platform to carry out all CryptoRiyal transactions and maintain relevant records. Talking about Cryptography, it is an encryption technique used to secure the identity of users and the details of transactions.
It is very different from all the other cryptocurrencies doing the rounds in the market, because the Government of Saudi Arabia plans on using it as the official currency of Neom City, a $500 billion ambitious project of Crown Prince Mohammad Bin Salman.
Moreover, since the Government of Saudi Arabia has decided to incorporate Blockchain into its entire governance system, CryptoRiyal has answers to all the challenges involved during, and even after, the process.
Let’s see the challenges it can answer and how?
First of all, its own growth is a challenge that needs to be answered effectively. The only effective answer to this challenge is the launch of ICOs (Initial Coin Offerings) to seek funds for its design, development, and implementation as a ground-level financial solution that helps people and businesses carry out the financial transactions they need.
Once this challenge is answered through investment from the Government of Saudi Arabia, public-private investment funds, and individual investors from the Middle East and other parts of the world, it will be ready to be traded, converted, transferred, and distributed through the SmartRiyal platform.
It will effectively happen with assistance sought from professional data distributors to make sure it becomes an effective medium of financial transactions for everyone.
At later stages, incorporating AI technology and machine learning techniques (ML) will also be required to understand users’ behavior through valuable insights. These insights can be collected using IoT sensors throughout the whole process.
Once all this happens easily, CryptoRiyal will be on its way to trading on the SmartRiyal platform for value appreciation to benefit initial investors.
It will be accepted as a monetary standard and a means of financial investment. More importantly, various industries will get rid of their dependence on traditional fiat money. Some of these industries are listed below:
Agriculture and food
Entertainment
Biotech
Energy and water
Media
Mobility
Advanced Manufacturing
But again, this will need a significant amount of data about those industries. Therefore, data distributors, IoT sensors, AI technology, and machine learning techniques will supply the data needed for CryptoRiyal to get cemented as a ground-level medium of financial investment.
As for investors, provided they have the patience to hold for the long term, they will be eligible to purchase services, shares, or products proportional to their investments as CryptoRiyal’s value appreciates, stabilizing after it reaches its peak.
That’s the way CryptoRiyal works in everyone’s favor!
Think about it!
| Issues of Cryptocurrencies in the Middle East | 413 | issues-of-cryptocurrencies-in-the-middle-east-1143ed51ca30 | 2018-10-03 | 2018-10-03 15:48:04 | https://medium.com/s/story/issues-of-cryptocurrencies-in-the-middle-east-1143ed51ca30 | false | 850 | Cryptoriyal ICO and SmartRiyal news | null | cryptoriyal | null | Cryptoriyal | cryptoriyal | null | cryptoriyal | Bitcoin | bitcoin | Bitcoin | 141,486 | CryptoRiyal | null | b7f1b91e9d07 | cryptoriyal | 31 | 1 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-05-10 | 2018-05-10 16:58:21 | 2018-05-10 | 2018-05-10 17:05:37 | 1 | false | en | 2018-05-10 | 2018-05-10 17:15:47 | 2 | 11450fedfb9c | 1.85283 | 2 | 1 | 0 | Last Tuesday On May 8, Google’s CEO launched the new AI assistant in Google’s 2018 conference in California. This is for sure a new… | 5 | Comprehensive agreement on the Iranian nuclear program (Lausanne, April 2, 2015).
Which Is More Dangerous? Artificial intelligence or Natural Stupidity.
Last Tuesday, on May 8, Google’s CEO launched the new AI assistant at Google’s 2018 I/O conference in California. This is certainly a new advancement in the world of technology along the trend of artificial intelligence. On the same day, in the same country, in Washington DC, US president Donald Trump announced his decision about the E3+3 agreement with Iran, withdrawing the US from the Joint Comprehensive Plan of Action, known as the JCPOA. Regardless of whether he did or did not have the right to do so, his decision could be viewed as a great example of natural stupidity. It is somewhat ironic that these two events happened on the same day, and it reveals how far politicians are from global technological advancements.
The purpose of AI is to make life easier for everyone. Google’s new AI assistant uses natural speech patterns to handle phone calls, such as the reservation calls that any kind of business may receive. Surprisingly, the world’s leader in artificial intelligence and, probably, the world’s leader in natural stupidity come from the same country. The United States’ ambivalent leader has clearly confirmed that he does not respect international agreements. Stepping out of the Paris Climate Agreement and the Iran nuclear deal has given Trump a reputation as a dealbreaker! It seems that his policy is causing the US to withdraw from the world.
On the day after the US withdrawal from the nuclear deal, some Iranian members of parliament, angered by Trump’s decision, burnt a symbolic copy of the JCPOA in the open court of parliament. These reactions are also fresh examples of natural stupidity, at a time when the Iranian government needs to unify the nation to face a probable crisis. Such cases demonstrate that there is still tight competition between politicians all around the world in terms of natural stupidity.
Believe it or not, a country with 6,800 nuclear warheads (the US), after lobbying by the owner of 200 nuclear warheads (Israel), is withdrawing from a nuclear deal with a country that has no nuclear warheads (Iran). While the IAEA had confirmed several times that Iran was compliant with the restrictions it had agreed to in 2015, there is no reasonable proof from Trump’s side to justify violating the deal, unless he is simply unhappy not to see his own signature under the agreement. As a consequence of this withdrawal, the United States is losing an ally in the Iranian people, and particularly the young generation, who are now showing their dismay using #untr_US_table . Furthermore, this decision sends an obvious message to the world’s leaders: do not rely on agreements made by US politicians.
| Which Is More Dangerous? Artificial intelligence or Natural Stupidity. | 3 | artificial-intelligence-vs-natural-stupidity-11450fedfb9c | 2018-05-11 | 2018-05-11 19:21:24 | https://medium.com/s/story/artificial-intelligence-vs-natural-stupidity-11450fedfb9c | false | 438 | null | null | null | null | null | null | null | null | null | Iran | iran | Iran | 6,132 | Soroush Alavi | null | 356c4e567ce | SoroushAlavi | 11 | 5 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-06-01 | 2018-06-01 14:39:58 | 2018-06-01 | 2018-06-01 16:26:39 | 3 | false | en | 2018-06-10 | 2018-06-10 18:35:32 | 0 | 11452d8af9d9 | 2.263208 | 5 | 0 | 0 | Artificial Intelligence(AI), a term which makes most of the manufacturing and software sector employees dread. Although AI is taking over… | 5 | Why must AI not be feared?
Artificial Intelligence (AI): a term that most manufacturing and software sector employees dread. Although AI is taking over most fields, these are the forefronts that will be taken over first.
But AI is the future, and we can’t run away from it. As the nature of human civilization suggests, change is always unwelcome at first and is met with fear and protests; such was the case with computers, and such is the case with AI.
Misconception
People’s fear of AI is mostly due to misconception. AI will bring many jobs with it, as augmenting human intelligence with AI will make humans more productive. At present we are far from full automation, so there will be no job crisis; instead, AI will help increase manufacturing output and therefore help solve the world’s resource crisis.
Future Market Scenario
The future market will change: low-skill jobs will be fully automated, but high-skill jobs will be in ever-increasing demand. Research and innovation will reach new heights as more brainpower works on finding solutions to complex problems rather than on physical labor. Manufacturing will get a boost as efficiency increases manifold. The coming years will therefore be golden years for human civilization.
Decentralization
Blockchain
The amount of data being generated in the age of the Internet is tremendous. Whoever controls the largest amount of data will rule the world. However, blockchain is a solution to this centralization of power. Blockchain technology will keep the big internet players from gaining control over world politics, thus saving us from another colonial era.
The AI Bubble
It’s tempting for every entrepreneur to package his or her company as an AI company, and for every venture capitalist to claim to be an AI investor. But AI investing is not for the novice: people who try to get into early-stage AI without understanding the technology will lose their shirts, and many bubbles will start to burst towards the end of the year. However, AI is not going anywhere. There will be many successful cases pushing AI forward, but many bankruptcies too, which will gradually slow investment in AI as investors become more cautious with their hard-earned money.
Conclusion
Transformers and Terminators are just fantasies and are not going to come alive in the near future. However, cyber warfare will replace traditional warfare. Whoever controls the data will rule the world and influence world politics, as the Russians did in the United States and Cambridge Analytica did in India.
| Why must AI not be feared? | 21 | why-must-ai-not-be-feared-11452d8af9d9 | 2018-08-02 | 2018-08-02 16:03:22 | https://medium.com/s/story/why-must-ai-not-be-feared-11452d8af9d9 | false | 454 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | TechnoReview | Love Technology, Write Technical | 6525919c3669 | technoreview | 30 | 4 | 20,181,104 | null | null | null | null | null | null |