Dataset schema (string columns list min/max length or distinct-value count; numeric columns list min/max value; ⌀ marks nullable columns):

| Column | Dtype | Min | Max | Null |
|---|---|---|---|---|
| audioVersionDurationSec | float64 | 0 | 3.27k | ⌀ |
| codeBlock | string | 3 | 77.5k | ⌀ |
| codeBlockCount | float64 | 0 | 389 | ⌀ |
| collectionId | string | 9 | 12 | ⌀ |
| createdDate | string (741 classes) | | | |
| createdDatetime | string | 19 | 19 | ⌀ |
| firstPublishedDate | string (610 classes) | | | |
| firstPublishedDatetime | string | 19 | 19 | ⌀ |
| imageCount | float64 | 0 | 263 | ⌀ |
| isSubscriptionLocked | bool (2 classes) | | | |
| language | string (52 classes) | | | |
| latestPublishedDate | string (577 classes) | | | |
| latestPublishedDatetime | string | 19 | 19 | ⌀ |
| linksCount | float64 | 0 | 1.18k | ⌀ |
| postId | string | 8 | 12 | ⌀ |
| readingTime | float64 | 0 | 99.6 | ⌀ |
| recommends | float64 | 0 | 42.3k | ⌀ |
| responsesCreatedCount | float64 | 0 | 3.08k | ⌀ |
| socialRecommendsCount | float64 | 0 | 3 | ⌀ |
| subTitle | string | 1 | 141 | ⌀ |
| tagsCount | float64 | 1 | 6 | ⌀ |
| text | string | 1 | 145k | |
| title | string | 1 | 200 | ⌀ |
| totalClapCount | float64 | 0 | 292k | ⌀ |
| uniqueSlug | string | 12 | 119 | ⌀ |
| updatedDate | string (431 classes) | | | |
| updatedDatetime | string | 19 | 19 | ⌀ |
| url | string | 32 | 829 | ⌀ |
| vote | bool (2 classes) | | | |
| wordCount | float64 | 0 | 25k | ⌀ |
| publicationdescription | string | 1 | 280 | ⌀ |
| publicationdomain | string | 6 | 35 | ⌀ |
| publicationfacebookPageName | string | 2 | 46 | ⌀ |
| publicationfollowerCount | float64 | | | |
| publicationname | string | 4 | 139 | ⌀ |
| publicationpublicEmail | string | 8 | 47 | ⌀ |
| publicationslug | string | 3 | 50 | ⌀ |
| publicationtags | string | 2 | 116 | ⌀ |
| publicationtwitterUsername | string | 1 | 15 | ⌀ |
| tag_name | string | 1 | 25 | ⌀ |
| slug | string | 1 | 25 | ⌀ |
| name | string | 1 | 25 | ⌀ |
| postCount | float64 | 0 | 332k | ⌀ |
| author | string | 1 | 50 | ⌀ |
| bio | string | 1 | 185 | ⌀ |
| userId | string | 8 | 12 | ⌀ |
| userName | string | 2 | 30 | ⌀ |
| usersFollowedByCount | float64 | 0 | 334k | ⌀ |
| usersFollowedCount | float64 | 0 | 85.9k | ⌀ |
| scrappedDate | float64 | 20.2M | 20.2M | ⌀ |
| claps | string (163 classes) | | | |
| reading_time | float64 | 2 | 31 | ⌀ |
| link | string (230 classes) | | | |
| authors | string | 2 | 392 | ⌀ |
| timestamp | string | 19 | 32 | ⌀ |
| tags | string | 6 | 263 | ⌀ |
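A minimal loading sketch for rows shaped like this (assuming a CSV export; the file name medium_posts.csv is hypothetical, and pandas is not part of the original dump):

```python
import pandas as pd

# Hypothetical CSV export of the table described by the schema above.
df = pd.read_csv("medium_posts.csv")

# Count-like columns appear with thousands separators in the sample
# rows below (e.g. totalClapCount = "1,651"); normalize to numbers.
for col in ["totalClapCount", "wordCount", "postCount"]:
    df[col] = pd.to_numeric(
        df[col].astype(str).str.replace(",", "", regex=False),
        errors="coerce",
    )

# Sanity checks against the schema: 52 language codes, and
# readingTime a float roughly in the 0-99.6 range.
print(df["language"].nunique())
print(df["readingTime"].describe())
```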
0
null
0
a511475cf02
2017-10-17
2017-10-17 20:28:21
2017-10-23
2017-10-23 10:01:01
4
false
en
2017-10-23
2017-10-23 11:58:45
10
126406247e50
4.7
90
3
0
The Internet has changed our lives completely. It has made a massive amount of information accessible at our fingertips. However, Herber…
5
Humanizing the Website in the Attention Economy The Internet has changed our lives completely. It has made a massive amount of information accessible at our fingertips. However, as Herbert Simon once said: "A wealth of information creates a poverty of attention, and a need to allocate that attention efficiently among the overabundance of information sources that might consume it." We are living in the era of the Attention Economy. Social media, Netflix, YouTube, and email generate constant distractions. The overabundance of information is shrinking our attention spans: according to research by Statistic Brain, the average attention span of a person has fallen from 15s to 8s. This scarcity of attention is becoming a new challenge for today's businesses, because they are no longer competing only with their peers but fighting everyone else for user attention. To survive in this new age, companies can no longer rely on generic communication to get customer attention. They have to go the extra mile and offer a Frictionless Customer Experience. Frictionless Customer Experience: The New Business Competitive Advantage We are living in an era of overabundance. Consumers have more choices than ever. To survive companies have switched…medium.com Take email marketing as an example: marketers send millions of emails per day, but they see only a 1.5% click-through rate. To solve this problem, the email industry came up with hyper-personalization. Today it's almost impossible to send email campaigns without segmentation and dynamic fields like the user's name. Research shows that personalized emails can see up to a 15% increase in conversion rate. The problem of the Website in the attention economy Now let's take a look at the Website, one of the most used and the very first interaction points businesses have for communicating with users on the Internet. According to Internet Live Stats, there are a total of 1.24 billion websites in the world, and 140,000 new sites are published every day. Yet the average conversion rate of websites is only around 2.35%. Websites seem to be suffering the same fate emails did: the same experience and information for all visitors, just like a salesman who always offers the same shoe size to every customer. In the Attention Economy, users have more choices than ever, so they have higher expectations. However, website development has stagnated. That's why our mission at Landbot.io is to: Humanize the Website, so businesses can offer a frictionless experience to their customers. Conversational Interface as the solution We believe the ideal solution is turning the website into a conversation with the user. Literally! Why? Because conversation is the most natural form of human communication, and it has unique advantages: Two-way communication: the information flow is two-sided; users and businesses can give and get information at the same time. Through conversation, both parties can qualify each other in real time, see their potential fit, and avoid wasting time later. Better personalization: unlike a typical graphical interface, a conversation can ensure a better level of personalization. Welcoming users by name and offering a unique experience to each user can significantly increase engagement. One action at a time: in a conversation, the interactions are straightforward; the user has one clear call to action at each step.
This helps keep the user focused and avoids losing their attention. You might be skeptical right now: how on earth can we turn a website into a conversation? AI with Natural Language Processing? No, that would make the website experience even worse! Indeed, as the state of the art in artificial intelligence is still incipient, we need to take another approach. This is where the concept of Conversational Interfaces (CI) comes in very handy, which I have explained in detail in this article. Conversational Interfaces: The Future of Chatbots When I published my last post, many readers were asking me to provide more details about Conversational Interfaces (CI…chatbotsmagazine.com In short, a CI is a hybrid interface that interacts with users by combining chat, voice, or any other natural language interface with graphical UI elements like buttons, images, menus, videos, etc. Instead of relying solely on natural language interaction, with a CI we can offer a more dynamic user experience. It's easier to build and has lower maintenance costs. Building the future of website experiences With the idea of creating websites that can talk to users, the Landbot team started experimenting with different product ideas 6 months ago. Our first approach was to automate the typical live chat with a chatbot (we called it Botchat). An early version of Botchat Our initial conversion rate (from visitor to lead) was only 2%. We couldn't find fit until one day (thanks to a technical bug!) when a user entered a website and, instead of the Botchat showing in a small window, the chat window expanded to cover the whole site. Users could only talk to the Botchat, but in return we saw a conversion rate of 14%! After we fixed the incident, we realized this was the solution we had been looking for the whole time: the entire website as a conversation. With this accidental discovery, we shifted our approach toward building a chat-based website: Landbot.io. Along the way we found many small UX issues with the conversational website, but with each iteration we saw small increases in our conversion rate. An example interaction of a landbot We launched an initial version of Landbot.io on Product Hunt in July. The positive response from our early users was a clear confirmation of our hypothesis: people are tired of boring landing pages, and they need a new way of interacting with websites. After 3 months of hard work and tons of user feedback, we are finishing a new version called Landbot 1.0. We have solved many usability issues in the builder so users can create landbots more easily. Demo Landbot 1.0 We are still very early in this journey; we would like more and more people to join our cause so that together we can build the future of website experiences. Thanks to Cristobal Villar, Fran Conejos, Guillem Serra, and Ramón Recuero for the comments on the early draft. Did you know you can give up to 50 claps by continuing to press the hands? Please give 👏's if you enjoyed reading; it means a lot!
Humanizing the Website in the Attention Economy
1,651
humanizing-the-website-in-the-attention-economy-126406247e50
2018-05-21
2018-05-21 18:20:25
https://medium.com/s/story/humanizing-the-website-in-the-attention-economy-126406247e50
false
1,060
From Landing Pages to Conversational Experiences 🖥💬🤖
null
null
null
Landbot.io
hi@landbot.io
landbot-io
CHATBOTS,CONVERSATIONAL UI,CONVERSATIONAL UX,MARKETING,CONVERSION OPTIMIZATION
landbot_io
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Jiaqi Pan
CEO at @helloumi_es & @landbot_io | Humanizing the internet | Love talking about #conversationalUI #business #techs #messaging
7e2eb9a66260
JiaqiPan
1,733
75
20,181,104
null
null
null
null
null
null
0
null
0
32881626c9c9
2018-09-19
2018-09-19 15:10:06
2018-09-19
2018-09-19 15:19:40
2
false
en
2018-09-21
2018-09-21 16:34:08
0
12649db7eac0
1.575786
4
0
1
I always believed in "numerical calculations are exact, but graphs are rough". Coming from a person who has just started learning Data…
5
Anscombe's Quartet I always believed in "numerical calculations are exact, but graphs are rough". Coming from a person who has just started learning Data Analytics, it was hard for me to understand the importance of Data Visualization alongside summary statistics. But it all changed after attending a Data Visualization Meetup, which is where I was introduced to Anscombe's Quartet. Anscombe's Quartet was developed by the statistician Francis Anscombe. It comprises four datasets, each containing eleven (x, y) pairs. The essential thing to note about these datasets is that they share the same descriptive statistics. But things change completely, and I must emphasize COMPLETELY, when they are graphed. Each graph tells a different story irrespective of their similar summary statistics. Quartet's Summary Stats The summary statistics show that the means and the variances are identical for x and y across the groups: The mean of x is 9 and the mean of y is 7.50 for each dataset. Similarly, the variance of x is 11 and the variance of y is 4.13 for each dataset. The correlation coefficient (how strong a relationship is between two variables) between x and y is 0.816 for each dataset. When we plot these four datasets on an x/y coordinate plane, we can observe that they even produce the same regression line, yet each dataset tells a different story: Dataset I appears to be clean and well suited to a linear model. Dataset II is not normally distributed; the relationship is clearly non-linear. In Dataset III the distribution is linear, but the calculated regression is thrown off by an outlier. Dataset IV shows that one outlier is enough to produce a high correlation coefficient. This quartet emphasizes the importance of visualization in Data Analysis. Looking at the data reveals a lot of the structure and gives a clear picture of the dataset. A computer should produce both calculations and graphs. Both sorts of output should be studied; each will contribute to understanding.
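As a quick demonstration of the quartet's punchline, here is a minimal sketch (assuming Python with seaborn and matplotlib available; seaborn ships a copy of the quartet as a sample dataset, which is not something the original post mentions) that reproduces the matching statistics and the four very different plots:

```python
import seaborn as sns
import matplotlib.pyplot as plt

# Anscombe's quartet ships with seaborn as a tidy dataframe
# with columns: dataset ("I".."IV"), x, y.
df = sns.load_dataset("anscombe")

# Near-identical summary statistics across the four datasets...
print(df.groupby("dataset")[["x", "y"]].agg(["mean", "var"]))
print(df.groupby("dataset").apply(lambda g: g["x"].corr(g["y"])))

# ...but four very different pictures once plotted, with the
# same fitted regression line in each panel.
sns.lmplot(data=df, x="x", y="y", col="dataset", col_wrap=2, ci=None)
plt.show()
```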
Anscombe's Quartet
53
anscombes-quartet-12649db7eac0
2018-09-21
2018-09-21 16:34:08
https://medium.com/s/story/anscombes-quartet-12649db7eac0
false
316
Data Driven Investor (DDI) brings you various news and op-ed pieces in the areas of technologies, finance, and society. We are dedicated to relentlessly covering tech topics, their anomalies and controversies, and reviewing all things fascinating and worth knowing.
null
datadriveninvestor
null
Data Driven Investor
info@datadriveninvestor.com
datadriveninvestor
CRYPTOCURRENCY,ARTIFICIAL INTELLIGENCE,BLOCKCHAIN,FINANCE AND BANKING,TECHNOLOGY
dd_invest
Data Science
data-science
Data Science
33,617
Parth Shah
null
82e93529cab4
shahp7575
4
15
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-02
2018-02-02 08:51:04
2018-02-02
2018-02-02 08:59:10
1
false
en
2018-02-02
2018-02-02 08:59:10
1
1265b5bb43c
1.916981
0
0
0
Nowadays, many data science jobs require a Ph.D. degree. Such people are not easy to come by. Here's how businesses are developing in-house…
5
Know How Companies are Developing In-House Data Scientists Instead of Hiring More Ph.D.s Nowadays, many data science jobs require a Ph.D. degree. Such people are not easy to come by. Here's how businesses are developing in-house data scientists instead of hiring more Ph.D.s. Today, many business organizations are running corporate data science programs to fill their data analytics positions. 1. Identify the areas where data science will add the maximum value to the business Before establishing an in-house data science talent and corporate development program, it is worth considering the areas where such efforts will pay the highest dividends. Data science seamlessly blends data analytics with business strategy and decisions, and much can be accomplished by training the workforce in domain expertise alongside technical topics. For many businesses, the key use case where data science talent adds business value remains technology and marketing platforms with high activity levels. Besides sales and marketing, data science talent can also be used to improve productivity. For example, GE has worked with various steel companies to enhance the productivity of high-value production equipment: data science analysis can help make predictions so that preventive maintenance is applied at the right time, reducing unplanned downtime. 2. Equipping staff to deliver data science results For many organizations, the most popular data science tools remain R and Python. For those tools to be helpful in your business, employees need a foundation first. Brad Morgart, an associate in Booz Allen Hamilton's analytics group, says: "I went through the corporate 2-week data science and analytics training program, which equipped me with a variety of skills including machine learning and Python." "For the past few years, I have been working on data using Excel and Access. The fundamentals of statistics and data analysis were not new to me. However, I am today able to monitor, assess, and analyze much larger amounts of data more accurately and quickly in my work." Brad Morgart's experience clearly indicates that the data science and analytics experts of tomorrow will not come from math and computer science programs alone. However, a corporate training program of the Booz Allen type is important, since providing the latest data analysis tools and software alone won't work or meet the data analytics needs of businesses. Organizing a two-week corporate training program is just one aspect of Booz Allen Hamilton's commitment to data science and analytics. "We are investing heavily in data science training and providing additional online courses and support to help our staff grow," said Jacobsohn, Senior Vice President at Booz Allen Hamilton. From this we can conclude that it is much easier to develop an in-house data science team than to hire more Ph.D.s.
Know How Companies are Developing In-House Data Scientists Instead of Hiring More Ph.D.s
0
know-how-companies-are-developing-in-house-data-scientists-instead-of-hiring-more-ph-d-s-1265b5bb43c
2018-02-02
2018-02-02 08:59:11
https://medium.com/s/story/know-how-companies-are-developing-in-house-data-scientists-instead-of-hiring-more-ph-d-s-1265b5bb43c
false
455
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Vivek Kumar
Professional content writer. Write Blog on Education, Online Training, Career, Technology
8b88da937bbd
corporateanalyticstraining
1
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-21
2018-09-21 21:31:03
2018-07-14
2018-07-14 00:00:00
1
false
en
2018-09-21
2018-09-21 22:00:06
3
126673787093
1.279245
0
0
0
Originally published at www.chaibhide.com on July 14, 2018.
5
Pattern recognition is at the core of creativity. Originally published at www.chaibhide.com on July 14, 2018. As a warm-up exercise a few days ago, my colleagues and I made squiggles on a piece of paper. We then passed the paper around and picked a squiggle to turn into a bird. By the end of it, we had 64 unique birds! Some looked cartoonish, some were closer to a sketch of a real bird. The idea was to practice pattern recognition. Humans are great at pattern recognition, and this activity demonstrated that really well. Squiggle Birds: An exercise in pattern recognition. It made me wonder, however, how the warm-up exercise would have gone if we had been instructed to create 64 unique birds on blank paper. I think it would have taken us longer to imagine and create those. This activity took significantly less time — about 20 seconds per bird! Creativity then, in my mind, is not an act of creating something out of nothing, but of identifying patterns that then stimulate our imagination. Pattern recognition is at the core of creativity. As children, we have all sat in the grass staring at the clouds and finding animals, or spent time in the bathroom, for instance, finding faces, shapes, and objects in the textures and patterns of the tiles. I remember doodling the shapes I saw into aliens and creating stories around them when I was a teenager. It was my prized compilation of odd creatures. As time passed, I forgot about this simple act that put me in a 'flow' state. Maybe making it a routine exercise will boost my creativity and imagination! What do you think? How do you exercise your creative muscle? Read more from Chaitrali
Pattern recognition is at the core of creativity.
0
pattern-recognition-is-at-the-core-of-creativity-126673787093
2018-09-21
2018-09-21 22:00:06
https://medium.com/s/story/pattern-recognition-is-at-the-core-of-creativity-126673787093
false
286
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Chaitrali Bhide
Interaction Designer. Delivering Tangible from Intangible. https://www.chaibhide.com/
8ba0e6118fc7
bhidechaitrali
6
33
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-21
2018-03-21 11:16:03
2018-03-21
2018-03-21 11:42:42
3
false
en
2018-03-21
2018-03-21 11:42:42
2
126681bfd212
2.908491
1
0
0
This is me. This is perhaps one of the most intimate ways I could have introduced myself: showing you my neurons firing up, my thoughts…
5
Intelligence isn't just pattern recognition… on AI, the brain and other stories This is me. This is perhaps one of the most intimate ways I could have introduced myself: showing you my neurons firing up, my thoughts flowing, my mind thinking, my 3-pound brain in action. Hello, World! My fascination with Intelligence first began in the summer of 2014 whilst visiting Stanford University, where I was asked to help a friend with some research. These are the images of my brain; perhaps one of the most mesmerizing, but also depressing, things I have ever seen. That's it! My brain is now watching itself thinking… meta. In fact, learning about intelligence and different approaches to building it has inspired me to learn more about AI and a Computer Science approach to intelligence building. Different scientific fields are currently joining forces: Neuroscience, Cognitive Science and Computer Science are converging in new joint efforts to understand intelligence. The Intelligence Quest (IQ) at MIT is perhaps one of the leading initiatives in this category right now. Coming back to intelligence and my brain: every time I used to hear in the media that we are close to figuring out how the brain works, potentially solving the mystery of intelligence, I got goosebumps. The dream of decoding the mind behind that colorful gif could become a reality. However, after reading a few books on neuroscience and talking to a few scientists, it got me thinking: 'But how close are we really to figuring out how the human brain works?' The truth is that no one knows! Let me explain. Suppose we are on a road trip from San Francisco to LA and I tell you we are close to reaching our destination. This statement implies that I know roughly where LA is, I know the rough distance of our travel from SF to LA, and I know how far we have gone. That's why I can tell you roughly how close or how far we are from reaching the destination, based on the distance we have travelled. If I tell you we are close to figuring out how our brain works, it implies that I have a rough idea of the scope of the task, and that based on the breadth of all the research conducted (distance travelled) in the field of neuroscience (research which is rather scattered) I can say how far or close we are to reaching the goal. You see where I am going with this? There are two problems: no one knows what the scope of 'figuring out the brain' is, and, second, neuroscience is at its humble beginnings, as we are still building the tools to test, examine and understand the human brain. Just think about it for a moment. If the human brain were so simple… Some of the 'Aha' moments that got me thinking more about neuroscience and tool building can be learnt from a talented MIT Media Lab professor, the 2015 Spark Prize winner Ed Boyden (pictured below), whom I had the pleasure of meeting while at MIT. He compares the mission of understanding the brain to the moon landing mission. The interview is superb; if you are to learn from people who work on the cutting edge of neuroscience, Professor Boyden has some very interesting views. "What I learned was we have to take the brain at face value.
We have to accept its complexity, work backwards from that, and survey all the areas of science and engineering in order to build those tools." Ed Boyden Ed Boyden talking about his latest tools development during Professor Patrick Winston's Artificial Intelligence class at MIT. Thanks for engaging, and I hope that together, through this blog and conversations, we can, as Marvin Minsky would say, unpack 'suitcase words' such as AI and Intelligence and raise some new questions worth asking.
Intelligence isnโ€™t just pattern recognitionโ€ฆ on AI, the brain and other stories
50
intelligence-isnt-just-pattern-recognition-on-ai-the-brain-and-other-stories-126681bfd212
2018-06-06
2018-06-06 13:48:18
https://medium.com/s/story/intelligence-isnt-just-pattern-recognition-on-ai-the-brain-and-other-stories-126681bfd212
false
625
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Daily Thoughts
by Izabela Witoszko - MIT grad student, AI strategist. Passionate about democratizing AI, geeking out on brain. Between Singapore and SF
15f5467cab53
everydayai
1
8
20,181,104
null
null
null
null
null
null
0
null
0
1e76f4e09af6
2018-05-02
2018-05-02 07:38:54
2018-05-02
2018-05-02 07:47:10
1
false
en
2018-05-02
2018-05-02 08:33:08
0
12672057a65b
3.849057
1
0
0
When it comes to any new technology — from cellphones to social media — there are always a few stepping stones involved in getting to know…
5
Six stages of chatbot acceptance When it comes to any new technology — from cellphones to social media — there are always a few stepping stones involved in getting to know, use and love it. Here are what we find to be the most common stages in accepting chatbots and AI technology. Blissful ignorance. Ah yes — the pre-curtain-lifting stage. You haven't yet been acquainted with the concept of chatbots, and artificial intelligence is some muffled, far-flung term that you still associate with the realm of sci-fi. The interesting thing is that many people in this stage are likely to have already been exposed to, and even used, the technology without realising it. To their knowledge, that pop-up window with the friendly 'customer service agent' asking if they can assist was in fact a helpful human, stationed at a desk in an office somewhere. And those 'Recommended for you' products were, well, who knows exactly how those computer programmers did that, but they did get it pretty spot on. Fear — aka the 'It's the end of the world!' stage. With science fiction being our first reference for artificial intelligence, it's no wonder that this stage exists. Skynet's attempt to exterminate our species in the Terminator movies, and HAL 9000's murderous revolt against the crew's attempt to shut it down in 2001: A Space Odyssey, have fuelled fear of our demise at the hands of an artificially intelligent race. While some may brush these off as nothing more than far-fetched figments of the imagination, science fiction's uncanny tendency to predict the future tickles your fearful doubt as to whether it's ever really 'just a story.' (Enter dramatic music.) Aside from the sci-fi-induced fears, there's also fear of the threat that artificial intelligence poses to your job. While the technology becomes more advanced, the question grows louder in your mind whether chatbots will just be colleagues or, rather, fierce competitors in the workplace. We humans have an exceptional talent for fearing that which we don't fully understand. Denial. "Ok. Chatbots are a thing. But for how long?" "Sure, they're all novel now, but do they really have longevity, and isn't it far too risky to implement this technology until some time way down the line, when and if it's a whole lot more advanced?" "Who would want to engage with a chatbot instead of a human, anyway?" "This isn't for my business. We've been doing fine without chatbots up until now, and we'll continue to do just fine without them in the future." The list continues. It feels like just yesterday that a similar attitude applied to cellphones. "I don't need a cellphone. I have my beeper if anything's urgent, and I can make that call when I get to the office a bit later." Without a doubt, just like cellphones, one day we will look back at today's chatbots and AI and chuckle at the comparison to the undoubtedly much more advanced versions of the technology, but there's no denying how even that first flap-bearing brick, equipped with a 30cm aerial, changed our lives for the better. Curiosity & Convincing. That boastful Brad at last week's braai may have had some interesting points worth looking into regarding how his business is benefitting from its new HR chatbot. And it has caught your attention that a couple of websites you've used have what you've heard described as a chatbot. You decide to do a little research of your own. No harm in that.
You're admittedly surprised by how many impressive skills chatbots have, even in these early stages of development — from internal operations, to customer service, to driving customer engagement and sales. You discover that thousands of businesses around the world are benefiting from using them. And not only the big familiar company names, but small businesses and entrepreneurs too. You read that 57% of consumers are interested in chatbots for their instant nature (HubSpot, 2017) and that 40% of consumers don't care whether a chatbot or a real human helps them, as long as they're getting the help they need (HubSpot, 2017). You're also pleasantly surprised that, while chatbots have the ability to perform many tasks currently performed by humans, their stepping in to handle the mundane, repetitive work frees up staff members to perform higher-quality tasks that require social and emotional intelligence. Things start to go from scary to rather exciting. Acceptance. The stats and facts that were planted in the 'Curiosity & Convincing' stage have sprouted and are now in full bot-accepting bloom. The evidence from your research shows that these virtual agents have definitely earned their place in small and large businesses alike. And, rather than giving humans the boot, they provide an opportunity for a formidable hybrid workforce that combines the abilities of humans and non-humans for maximum productivity, profitability and job satisfaction. You've seen the light that chatbots are in fact here to help, not harm. It's not a far step from this to the next and final stage. How did we live without this?! The stats roll off your tongue like those of a chatbot aficionado. Your business' chatbot has become a subject you purposefully work into dinner conversation. Not only is your enthusiasm off the charts, but you're on a mission to convince those around you that chatbots are the best thing since colour cellphone screens and having your mailbox in the palm of your hand. You and Brad also now have a lot more to talk about. No matter what stage you're currently in, Feersum Engine is here to answer any questions that can help you hop, skip and jump over the final step/s to questioning how you and your business ever did life without chatbots.
Six stages of chatbot acceptance
2
six-stages-of-chatbot-acceptance-12672057a65b
2018-05-02
2018-05-02 08:33:09
https://medium.com/s/story/six-stages-of-chatbot-acceptance-12672057a65b
false
967
Feersum Engine is a conversation engine and a set of Natural Language Understanding APIs optimised for Africa.
null
FeersumEngine
null
Feersum Engine
b@feersum.io
feersum-engine
null
Feersum_Engine
Chatbots
chatbots
Chatbots
15,820
Angela Elizabeth
null
653d32e2b3b7
angelaelizabeth_4940
24
16
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-29
2018-08-29 11:41:05
2018-08-29
2018-08-29 12:28:33
14
false
th
2018-08-29
2018-08-29 12:28:33
2
12673dd10707
2.521698
0
0
0
Hello again with the second article, which follows on from the first, in which we studied and played with Google's AutoML for Machine Learning…
5
Trying Out Building a Simple Custom Model with No Coding Using AutoML Natural Language Hello again with the second article, which follows on from the first, in which we studied and played with Google's AutoML for Machine Learning, using AutoML Vision to classify pictures of different animals. As described in that article, the resulting model was highly accurate, and the process was very easy for a beginner in the Machine Learning field. The AutoML products Google has launched so far number three in total: AutoML Vision, AutoML Natural Language, and AutoML Translate. So in this second article we will play with AutoML Natural Language to build a simple custom model with no coding. Ref: https://cloud.google.com/natural-language/ AutoML Natural Language is one of the API modules for doing Machine Learning on Google Cloud. It analyses text and classifies incoming text or sentences. For example, you can run Sentiment Analysis on text to decide whether an incoming text is positive, negative, or neutral. Sentiment Analysis is very useful for analysing how customers feel about our business and in what light they mostly see us: for instance, if lately our company's Facebook fan page has received mostly negative comments, the business may need to consider improvements based on those customer comments. Since in this article we already have a Google AutoML ID, the setup is not complicated, so we will skip the setup section and get started right away; friends who don't yet understand the ID setup can go back and look at our first article. We start by creating a dataset for training the model. We build it as a CSV file, which you can do in Excel. The dataset is split into two columns: the first column is the text that serves as the data for building the model, and the other column is the true class of that text. An example dataset before it goes off to training to build the model. Once the dataset is ready, go to https://cloud.google.com/natural-language/ to try creating your first project. On entry, AutoML will ask you to enable the API and billing, just as we did in the first article. Once that is enabled, you can go to the +NEW DATASET page. Our dataset is named "hapiness.csv". After the dataset upload finishes, click the TRAIN tab to train the model. AutoML Natural Language takes quite a long time to train, so you can close the tab; AutoML will notify us via the email we provided once training is done. Our data was 1,000 lines of text in the Excel file, as pictured above, and training took about 4 hours.
Once the model finished training, we get the result shown in the image; we can click EVALUATE to see our model's statistics on the EVALUATE tab. Then it is time to predict which class new input text belongs to, and whether the prediction is as it should be. The model we built here is separated into 7 classes: enjoy_the_moment: messages recounting a happy moment. affection: messages about love. achievement: messages recounting a success. exercise: messages about working out. leisure: messages about travel. nature: messages that mention nature. bonding: messages about relationships. This message is most likely enjoy_the_moment, saying that I am beautiful, but the model's confidence percentage is not high, since it may not be able to separate the classes clearly enough; adding more data to the training set would help make the model more accurate. This message was analysed as travel rather than exercise, because it conveys whether a person will go somewhere rather than actually playing sports or talking about working out. This message is about expressing love. Correct, because it is about going back to school to smile at our teacher and give her encouragement. This message is very clear: it sets a goal of losing weight, so it falls into the achievement class. It is a shame that, for now, AutoML does not yet support Thai for the UL Version; if you also want analysis of Thai text, you may have to do a bit of extra coding.
When Thai text is typed in, the model shows an analysis that is almost correct, but the confidence percentage comes back very low; this is because our dataset did not include any Thai text in training. The same goes for this message: it was analysed into the wrong class entirely, and should rather have come out in the achievement class. This message likewise was analysed into the wrong class entirely, and should rather have come out in the leisure class. In the same way, AutoML Natural Language can be taken and developed further into useful Machine Learning applications, because guide code is provided. If there is anything interesting in the next article, I will hurry back to share it with friends again. Example REST API or PYTHON CODE for building on the model in the future. Reference: Google Cloud Natural Language API Documentation | Cloud Natural Language API | Google Cloud Develop a deep understanding of Natural Language API cloud.google.com
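The post defers to Google's guide code for the REST and Python examples; purely as an illustration, a single prediction against a trained AutoML Natural Language model could look roughly like the sketch below (written against the google-cloud-automl v1beta1 client of that era; the project and model IDs are placeholders, and the exact surface may differ from the guide code the author refers to):

```python
from google.cloud import automl_v1beta1 as automl

# Placeholders; substitute your own GCP project and trained model ID.
project_id = "my-gcp-project"
model_id = "TCN1234567890123456789"

client = automl.PredictionServiceClient()
model_name = client.model_path(project_id, "us-central1", model_id)

# A sentence that should land in the achievement class.
payload = {"text_snippet": {"content": "Finally hit my weight loss goal!",
                            "mime_type": "text/plain"}}
response = client.predict(model_name, payload)

for result in response.payload:
    # display_name is one of the 7 classes, e.g. "achievement".
    print(result.display_name, result.classification.score)
```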
Trying Out Building a Simple Custom Model with No Coding Using AutoML Natural Language
0
ทดลองสร้าง-custom-model-แบบง่ายๆ-no-coding-ด้วย-automl-natural-language-12673dd10707
2018-08-29
2018-08-29 12:28:34
https://medium.com/s/story/ทดลองสร้าง-custom-model-แบบง่ายๆ-no-coding-ด้วย-automl-natural-language-12673dd10707
false
284
null
null
null
null
null
null
null
null
null
Natural Language
natural-language
Natural Language
348
Ray Nirawan
null
88b6195be4b1
ray.nirawan
1
3
20,181,104
null
null
null
null
null
null
0
$ sudo diskutil unmount /dev/disk1s1
$ brew install xz
$ xz -d aiyprojects-2018-01-03.img.xz
$ sudo dd bs=1m if=aiyprojects-2018-01-03.img of=/dev/rdisk1
$ cd /Volumes/boot
$ nano wpa_supplicant.conf
country=COUNTRY
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
network={
    ssid="SSID"
    psk="PASSWORD"
    key_mgmt=WPA-PSK
}
$ touch ssh
$ nano config.txt
dtoverlay=dwc2
$ nano cmdline.txt
$ cat cmdline.txt
dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=PARTUUID=020c3677-02 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait quiet init=/usr/lib/raspi-config/init_resize.sh splash plymouth.ignore-serial-consoles
$ cat cmdline.txt
dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=PARTUUID=020c3677-02 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait modules-load=dwc2,g_ether quiet init=/usr/lib/raspi-config/init_resize.sh splash plymouth.ignore-serial-consoles
$ sudo diskutil eject /dev/rdisk1
$ nmap -sn 192.168.11.0/24
Starting Nmap 7.50 ( https://nmap.org ) at 2018-01-18 07:11 GMT
Nmap scan report for 192.168.11.2
Host is up (0.0032s latency).
Nmap scan report for 192.168.11.54
Host is up (0.0021s latency).
Nmap done: 256 IP addresses (2 hosts up) scanned in 18.95 seconds
$ ping 192.168.11.2
PING 192.168.11.2 (192.168.11.2): 56 data bytes
64 bytes from 192.168.11.2: icmp_seq=0 ttl=64 time=0.093 ms
64 bytes from 192.168.11.2: icmp_seq=1 ttl=64 time=0.050 ms
64 bytes from 192.168.11.2: icmp_seq=2 ttl=64 time=0.043 ms
^C
--- 192.168.11.54 ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.043/0.062/0.093/0.022 ms
$ ssh pi@192.168.11.2
The authenticity of host '192.168.11.2 (192.168.11.2)' can't be established.
ECDSA key fingerprint is SHA256:53IF2C4ji0MdCfjQkhLLVy6ETEkL3fErkELG2tqgmuc.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.11.2' (ECDSA) to the list of known hosts.
pi@192.168.11.2's password:
$ ifconfig wlan0
wlan0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 192.168.1.245 netmask 255.255.255.0 broadcast 192.168.1.255
        inet6 fe80::e6bd:f82d:4fac:e6ea prefixlen 64 scopeid 0x20<link>
        ether b8:27:eb:ab:13:e1 txqueuelen 1000 (Ethernet)
        RX packets 841 bytes 229952 (224.5 KiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 182 bytes 27620 (26.9 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
$ ssh pi@raspberrypi.local
$ sudo apt-get update
$ sudo apt-get install realvnc-vnc-server realvnc-vnc-viewer
$ sudo raspi-config
$ ssh pi@raspberrypi.local
pi@raspberrypi.local's password:
. . .
$ dmesg
[    0.000000] Booting Linux on physical CPU 0x0
[    0.000000] Linux version 4.9.59+ (dc4@dc4-XPS13-9333) (gcc version 4.9.3 (crosstool-NG crosstool-ng-1.22.0-88-g8460611) ) #1047 Sun Oct 29 11:47:10 GMT 2017
. . .
[   92.772028] Unregistered device pwm22
$
$ sudo systemctl stop joy_detection_demo.service
$ sudo rmmod aiy_vision
$ sudo rmmod aiy_adc
$ sudo rmmod pwm_aiy_io
$ sudo rmmod gpio_aiy_io
$ sudo rmmod aiy_io_i2c
$ i2cdetect -y 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: UU -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: 40 -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- 51 -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: 60 -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
70: -- -- -- -- -- -- -- --
$ ssh pi@raspberrypi.local
pi@raspberrypi.local's password:
. . .
$ sudo systemctl stop joy_detection_demo.service
$ source ~/AIY-projects-python/env/bin/activate
(env) $ cd ~/AIY-projects-python/src/examples/vision
(env) $ python ./face_detection_camera.py --num_frames 50
Iteration #0: num_faces=0
Iteration #1: num_faces=0
Iteration #2: num_faces=0
Iteration #3: num_faces=0
Iteration #4: num_faces=1
Iteration #5: num_faces=2
Iteration #6: num_faces=2
Iteration #7: num_faces=1
Iteration #8: num_faces=0
Iteration #9: num_faces=0
Iteration #10: num_faces=1
Iteration #11: num_faces=1
. . .
Iteration #49: num_faces=0
(env) $
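For reference, the loop that face_detection_camera.py runs boils down to something like the following sketch, based on the AIY Projects Python API of that vintage (aiy.vision module names as published in the AIY-projects-python repository; treat this as approximate rather than definitive):

```python
from picamera import PiCamera

from aiy.vision.inference import CameraInference
from aiy.vision.models import face_detection

# Stream camera frames through the face detector running on the
# Vision bonnet, printing a per-frame face count as in the demo above.
with PiCamera(sensor_mode=4, resolution=(1640, 1232)) as camera:
    with CameraInference(face_detection.model()) as inference:
        for i, result in enumerate(inference.run()):
            if i == 50:  # the equivalent of --num_frames 50
                break
            faces = face_detection.get_faces(result)
            print('Iteration #%d: num_faces=%d' % (i, len(faces)))
```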
26
null
2018-01-09
2018-01-09 04:37:46
2018-01-19
2018-01-19 16:01:01
43
false
en
2018-01-27
2018-01-27 20:32:04
60
12677aa7e743
24.730189
77
10
0
A Do-It-Yourself Intelligent Camera for the Raspberry Pi
5
Hands on with the AIY Projects Vision Kit A Do-It-Yourself Intelligent Camera for the Raspberry Pi In the early part of last year Google and Raspberry Pi did something rather unusual. Together they packaged machine learning, and the ability for a machine to think and reason, free on the cover of a magazine. The kit allowed you to add voice interaction to your Raspberry Pi project, and the magazine sold out in hours. "The positive reception to Voice Kit has encouraged us to keep the momentum going with more AIY Projects. We'll soon bring makers the 'eyes,' 'ears,' 'voice' and sense of 'balance' to allow simple, powerful device interfaces." — Google. Announced at the tail end of last year, the second AIY Projects kit, the Vision Kit, is a do-it-yourself intelligent camera based around the Raspberry Pi Zero and a custom-built Vision bonnet designed by Google. The completed AIY Projects Vision Kit. Based around the Movidius MA2450 chip, a vision processing unit designed by Intel and intended for machine vision in low-power environments, the Vision bonnet allows the kit to run real-time deep neural networks directly on the device, rather than in the cloud. The initial rollout of the kit a couple of weeks ago had some teething troubles, and it was taken off the shelves at Micro Center. However, after updates to the documentation and software, the kit is now back on the shelves. Opening the Box The AIY Projects Vision Kit. Opening the box, the contents will look familiar to anyone that's played with the original Voice Kit, as it shares a lot of stylistic similarities with the original. Opening the box. The kit consists of the Vision bonnet, which connects directly to the GPIO header of your Raspberry Pi Zero; two camera cables — one to connect to the Raspberry Pi Camera, the other to connect the Vision bonnet to the Raspberry Pi Zero; a big arcade button; an LED; a piezo buzzer; some spacers; a tripod mounting nut; and a lens assembly which gets fitted in front of the Raspberry Pi Camera module. The AIY Projects Vision Kit. Finally, there's also the cardboard case which, after Google Cardboard, has become somewhat synonymous with Google's own prototyping efforts. The Vision Bonnet. The final production version of the Vision bonnet looks almost identical to the pre-production version I'd seen before the kit arrived on the shelves, just missing some headers used during debugging to flash the MCU firmware onto the bonnet. You'll Also Need In addition to the contents of the AIY Projects Vision Kit box, you'll also need a Raspberry Pi Zero, or more usefully a Raspberry Pi Zero W, a 40-pin header block, a Raspberry Pi camera board, a 2 Amp USB power supply, and a micro-SD card of at least 8GB. The camera board needs to be the latest revision, marked "V2.1," which replaced the original camera board back in 2016. 40-pin header (top), Zero W (middle), micro-SD card (bottom left), and camera board (bottom right). Despite the added cost, it's a good idea to get a name-brand, high-speed SD card because, as with power supplies, there really are big differences between seemingly identical cards. Choosing the wrong one can mean long access times, or even a card that just doesn't work or fails quickly under normal use.
The Raspberry Pi Zero, and Zero W, don't typically arrive with the 40-pin header block pre-soldered — although the Foundation has just released a new Raspberry Pi Zero WH that does, so if you're feeling a bit unsure about your soldering you should probably spend a few extra dollars and pick up the pre-soldered board. Gathering your Tools If you picked up a Raspberry Pi Zero without headers, you're going to need a soldering iron and some solder; alternatively, you could use solderless "hammer" headers, which require a jig and a few taps with a hammer to fit. Other than that, you can probably get away without any tools, although some scotch tape (aka sellotape), and possibly a craft knife or scissors, might come in handy. Other Places to Get Help If you find anything here confusing, the kit does come with a really good assembly guide, while issue 65 of The MagPi also has a solid walkthrough of how to put the kit together. Soldering Headers to your Raspberry Pi If you've chosen to solder the headers onto your Raspberry Pi yourself, you should pick up a 40-pin male header block. While there are other options, if we're going to attach the bonnet that's what you'll need. An easy way to solder the headers is to insert them into a breadboard, and then insert the Pi on top. The breadboard will help keep the board steady and the pins straight and aligned as you initially tack the pins — another alternative is to use a blob of Blu Tack to fix the board down onto a flat surface. Using a breadboard to steady the headers and Raspberry Pi. Then use the flat of the soldering iron to heat the pin for one to two seconds before bringing in the solder to touch the other side of the pin. The solder should flow smoothly down the pin and form a good joint. Making perfect solder joints. (Video credit: Pimoroni) You should go ahead and tack the upper-left and lower-right header pins first. Once you've tacked the first two pins you can lift the board up and see whether the Raspberry Pi is level and flush on the headers. If not, you can reheat the solder joints and use your fingers to level the headers off before proceeding to solder the rest of the pins. It's best to solder one long row at a time to help keep your iron well angled to the board; skipping between pins tends to mean you're constantly changing the angle of the iron, and your soldering can suffer as a result. After soldering all your pins, remove the board from the breadboard and take a careful look at all of your joints to check for messy joints or solder bridges. Most mistakes are easy to fix, so long as you don't overheat the iron or leave it in contact with the board so long that you melt the silk screen. Although, even then, wonky pins can be fixed a good deal of the time. If you're unsure whether you've soldered your 40-pin connector to your Raspberry Pi Zero correctly, a good first test is to use the Pin Test utility that ships with Gordon Henderson's WiringPi library. You can download the latest version of the library from his website. Getting the Software Go ahead and download the latest SD card image for the Vision Kit. These days I'd generally recommend Etcher, made by the folks at Resin.io, for burning card images. It's cross-platform — it works on Windows, Linux and macOS — and lets you burn an image in four clicks. Burning the aiyprojects-2018-01-03.img.xz card image using Etcher.
However, if you're a command line person like me, you can either download and install the experimental Etcher command line tools, or you can still go ahead and do it the old way. The instructions here are for the Mac, because that's what I have on my desk, but instructions for Linux are similar. Go ahead and insert the micro SD card into the adaptor, and then the card and adaptor into your MacBook. Then open up a Terminal window, type df -h, and check the device name for your SD card. In my case it's /dev/disk1, and I'll need to use the corresponding raw device, /dev/rdisk1, when writing to the card. Go ahead and unmount the card from the command line, rather than ejecting it by dragging it to the trash. From there we can go ahead and write the image to our SD card. Unfortunately, the card image comes as a .xz file, and there isn't a command line tool to uncompress these sorts of files available by default on macOS. Fortunately, if you have Homebrew installed, you can brew install the xz command line tool. In the Terminal window, change to the directory with your downloaded disk image, uncompress the disk image, and then write it to your card with dd, as captured in the code block above. If the dd command reports an error dd: bs: illegal numeric value, change bs=1m to bs=1M. The image's boot partition should be automatically remounted after dd is done writing the image. Enabling Wireless Networking If you've used Etcher, or if the card's boot partition hasn't automatically been remounted, you'll need to open Disk Utility and remount the boot partition. Alternatively, you can just pull the card out and reinsert it, which is probably easier, and which should also mount the boot partition automatically. Make sure the partition is mounted, navigate to the boot partition, and create a new file named wpa_supplicant.conf using your favourite editor, adding the lines shown earlier, where COUNTRY should be set to the two-letter ISO/IEC alpha-2 code for the country in which you are using your Pi, e.g. GB (United Kingdom), FR (France), DE (Germany), US (United States), SE (Sweden); SSID is the ESSID of your home network; and PASSWORD is the WPA2 password for that network. It's important to enter the correct country code in the file, as this will determine which regulatory domain your Raspberry Pi thinks it's operating in, and therefore which wireless channels it enables on your adaptor. Enabling SSH Recent releases of the Raspbian operating system have the SSH server disabled on boot, and since we're intending to run the board without a monitor or keyboard, we need to re-enable it if we want to be able to SSH into our Raspberry Pi. You can do this by making sure there is a file called ssh present in the boot volume; go ahead and enter touch ssh at the command line. When the Pi first boots it looks for this file; if it finds it, it will enable SSH and then delete the file. The contents of the ssh file don't matter. Enabling OTG One disadvantage of the Raspberry Pi Zero compared to a normal Raspberry Pi is the lack of an Ethernet port. That means we're relying on wireless networking to give us remote access to our Pi, and unfortunately we need to configure the board for every wireless network we want to use. So it's handy to have another route to log into the Pi. Fortunately, we can access our Raspberry Pi fairly easily using something called USB OTG, which will allow us to set up a virtual network connection between the Raspberry Pi Zero and our laptop.
This will allow you to SSH over the USB cable powering the Pi Zero, letting us configure wireless networking without need of a keyboard, mouse, or screen. Make sure the boot partition is still mounted, then go ahead and open the config.txt file in an editor of your choice and make sure it contains the dtoverlay=dwc2 entry, which may well already be appended near the bottom of the file. If not, add it. Next, go ahead and edit the cmdline.txt file. You need to be careful here, as the formatting of this file is pretty important: each parameter should be separated by a single space, not a newline or a tab. Go ahead and insert modules-load=dwc2,g_ether after rootwait; the file before and after the change is shown in the code block above. Once you're done, eject the card, and you should have a working card image with all three of wireless networking, SSH, and OTG now enabled. Testing our OTG connection to the Raspberry Pi Go ahead and insert your micro SD card into your Raspberry Pi Zero and connect it via USB to your computer. It is important to connect your laptop to the Raspberry Pi Zero using the USB, rather than the PWR, micro-USB port. Connect your Raspberry Pi to your laptop using the USB, rather than the PWR, micro USB port. Using the USB port will power the board, but more importantly it will also allow us to make a data connection. After connecting the Pi Zero to your laptop, the green ACT LED should start flashing. It could take as long as 90 seconds to boot up the first time, although it should be quicker on subsequent boots. The Raspberry Pi will be listed as 'RNDIS/Ethernet Gadget' in Settings. After it has finished booting, the connection to the Raspberry Pi should appear as a USB Ethernet device called RNDIS/Ethernet Gadget in the Settings app. It's there that we can see the IP range being used by the USB connection. Typically the IP address range on your home network will use the 192.168.1.* IP block, but our laptop's USB Ethernet connection has been given an address in the 192.168.11.* block. While we know the IP address of our laptop, in this case 192.168.11.54, we don't yet know the IP address of the Pi. The easiest way to find this out is to use nmap, an open source utility for network discovery. This doesn't come installed by default on macOS; however, you can easily download a disk image file containing the installer and follow the prompts to get up and running. Once installed, you should run it as shown above. Here we can see there are two hosts in the 192.168.11.* range: our laptop, and an unidentified host that has to be our Raspberry Pi. We can check the connection with a quick ping request, and once we've found the Pi you can go ahead and log in with ssh — the default username and password are "pi" and "raspberry" respectively. After logging in, we can also check that our Raspberry Pi has successfully logged on to our local wireless network using ifconfig. With wireless networking enabled and working, you won't need to connect it to your laptop again unless something goes wrong. The Raspberry Pi connected to a USB power supply. Instead, you can plug the Pi directly into a power supply, and after it finishes booting the Raspberry Pi should advertise itself using mDNS with the default name raspberrypi.local, allowing you to find it easily on the network. At this stage you should shut down your Raspberry Pi, detach it from your laptop, plug it into your power supply in the normal way, and wait for it to boot.
Installing a VNC Server

We can now go ahead and install a VNC server. This is an optional step, but it is quite useful if you want to be able to get to your Raspberry Pi desktop over the network. Go ahead and SSH back into your Pi, this time over your wireless network, then use apt-get to install the server (see the sketch below). Once it's installed we can enable it using the Raspbian configuration utility: type sudo raspi-config at the prompt to open the configuration manager. Using the Up/Down cursor keys navigate to Interfacing Options and press the Enter key to select it. Then scroll down and select VNC, and answer Yes when prompted. This will turn on the server and return you to the main menu. Now navigate to Advanced Options, then select Resolution, and pick a workable resolution; I generally go with 1600×1200 as it fits nicely on my Mac's desktop. We need to do this because — as we're connecting to a headless Pi — the VNC server will default to the smallest safe resolution, typically the same as a standard definition TV, which isn't going to be particularly usable. Then use the Left/Right cursor keys to navigate to Finish and hit the Enter key. You'll be asked whether you want to reboot now; answer Yes. Connecting to the Raspberry Pi using the Real VNC Viewer application. Once the Pi has rebooted you should log back in as before using ssh to make sure everything is working correctly. Unfortunately the version of VNC that we're now running on the Pi isn't compatible with the built-in screen sharing on macOS. However, RealVNC offers a VNC Viewer application for Windows, Linux, and macOS — as well as a number of other platforms. So go ahead and download the application and install it on your laptop. The Real VNC Viewer running under macOS showing the Raspberry Pi desktop. Once installed you should just be able to connect directly to raspberrypi.local with the default username and password. If everything has worked, you should see the Raspberry Pi desktop in a window.
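In command form, the server-side VNC setup above amounts to the following sketch (realvnc-vnc-server is the package name on recent Raspbian releases):

    ssh pi@raspberrypi.local                 # this time over your wireless network
    sudo apt-get install realvnc-vnc-server  # install the RealVNC server
    sudo raspi-config                        # Interfacing Options -> VNC -> Yes,
                                             # then Advanced Options -> Resolution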
Now we've got our Raspberry Pi configured and working, it's time to shut it down again and assemble the Vision Kit. Leave the micro SD card inserted in the card slot during assembly.

Assembling the Vision Kit

The first thing you need to do is grab the two plastic spacers that come with the kit. They're not strictly necessary, but they're going to add a lot of stability. The Raspberry Pi has four mounting holes; the two spacers go into the holes furthest from the header block. Once both spacers are in place push the Vision Bonnet down onto the Raspberry Pi headers, making sure there isn't a gap and it sits firmly on the Pi. You should then be able to snap the two spacers into the Bonnet. This may take more force than you were expecting, but afterwards the entire thing should be pretty solid without any flex. The Vision Bonnet on top of a Raspberry Pi Zero W, with the plastic spacers visible (front). Next we need to find the Vision Bonnet cable connector, the short ribbon cable. This is prominently marked with a white label to indicate which direction it should be inserted. Unfortunately you're probably going to have to ignore the sticker, as in a lot of cases these labels were incorrectly applied at the factory. You're probably going to have to ignore the sticker. Pull the black release lever up and, orientating the cable with the serial number facing towards you and the exposed pins facing down towards the Vision Bonnet board, insert it into the connector until it hits the back. There really isn't a lot of play here, so don't expect it to go in all that far. Secure the cable by flicking the black release lever back down and give it a quick tug to ensure it is secure. If you have inserted it the correct way around, the most densely populated section of wiring should be furthest away from the 40-pin header block. This way up? In my case the white label was incorrectly applied, and the correct way to insert it was to point the directional arrow towards the Vision Bonnet, not the Raspberry Pi. Now pull the black release lever on your Pi's camera connector outwards—rather than upwards as in the case of the Vision Bonnet—and slide the other end of the cable into the connector underneath the black release lever, between the lever and the board itself. If the lever is pulled all the way outwards the ribbon cable should slide smoothly beneath it, and you shouldn't have to force it. The short ribbon cable correctly seated. Once it is fully inserted, slide the black release lever back into place to secure the cable. Like the Vision Bonnet, the exposed pins on the cable should be facing towards the Raspberry Pi board, not upwards towards the Vision Bonnet. The cable will insert a little further than it did on the Bonnet, but probably still won't go in as far as you think it should. Push the black release lever back into place, and give the cable a tug to make sure it's securely in place. Then get the loop of cable and gently push the loop so that it tucks between the two boards. Tucking the ribbon cable between the boards. Next grab the grey ribbon cable and push one end into the button connector on top of the Vision Bonnet. The connector might be covered by a black plastic dust cap for protection; if so, remove the cap first. There is a small notch on the connector corresponding to a ridge on the cable plug, and it should slot into the connector easily. If you're having trouble, check to see if it's going in the wrong way around; putting it in the right way should mean that the cable runs down the length of the board. Attaching the button cable. Now it's time to start assembling the frame. Grab the camera module and the longer ribbon cable. Pull the black release lever of the camera module outwards—it works the same as the smaller levers on the Raspberry Pi—and insert the wider end of the long ribbon cable between the black lever and the board, with the exposed pins of the cable downwards, towards the board. Inserting the ribbon cable into the camera module. Once the cable is pushed as far back into the connector as it will go, push the black release lever back into place. Gently tug on the cable to make sure it's secure. Pick up the smaller of the two cardboard pieces; this is the inner frame and holds the hardware components in place inside the larger cardboard box. In the middle is a small U-shaped cut out. Push it out; the tab should naturally fold downwards. Grab the Pi camera board and slot it into the rectangular cut out as below; it should fit fairly tightly into the hole. First piece of hardware attached to our inner frame. Then flip the cardboard over and fold down the tab over the back of the camera. The connector should fit through the cut out in the back of the tab. Folding down the tab. Go ahead and fold the left and right hand 'elephant ear' tabs up to form a box around the camera and then flip the cardboard around again. Find your piezo buzzer and thread the red-black wires through the hole to the left and below the camera lens.
Then remove the adhesive cover from the back of the piezo buzzer and stick it into the U-shaped depression on the front of the camera. Adding the piezo buzzer. Lay the cardboard face-down on your table top—it doesn't matter if it springs slightly apart at this point—and grab your Raspberry Pi and Vision Bonnet. Take the smaller (loose) end of the camera ribbon cable and insert it into the camera connector on top of the Vision Bonnet. As before, flip the black release lever upwards, push the ribbon cable in with the exposed pins facing down towards the board, and once it is seated push the black release lever down. Connecting the Vision Bonnet to the Raspberry Pi camera board. Give the camera cable a gentle tug to make sure it's seated and then go ahead and place the boards on the bottom tab of the cardboard frame. The completed inner frame. Fold the bottom tab up and thread the ribbon cable through the side slot, gently flexing it down towards the board—otherwise you may have problems getting everything in the box afterwards. Thread the button cable and the piezo buzzer wires upwards through the top of the frame, and grab the other cardboard piece—the outer box. Inserting the inner frame into the box. Pop the outer box open, keeping both the bottom and top flaps open, and go ahead and thread the button cable and piezo wires through from bottom to top—the bottom of the box has the smaller of the two holes in the flap. Then push down the two side flaps at the bottom of the frame, and gently push the entire inner frame into the box, trying to make sure not to snag the camera cable on the way. The inner frame fully inserted into the box. Once the frame is fully inserted fold down the two side flaps of the box; there should be a nut-shaped hole. Grab the 1/4-20 nut and place it in this gap—this is fairly fiddly, but despite appearances it'll be fine once the flaps are closed, as the inner frame will push it down against the outside flap of the box. Inserting the nut. Then close the outer flap. The screw thread of the nut should still be visible; this is a standard tripod mount. If you have any camera gear the Vision Kit, when completed, should just fit on top. The standard tripod mount. Looking down from the top of the box, push down slightly to seat the nut properly against the bottom of the box. Both the button connector cable and the piezo wire should be sticking up out of the box at this point. Looking down into the guts of the Vision Kit. Flip the box around and make sure that the board connectors are aligned with the box cut outs. You should be able to see both USB sockets through the lower right hand cut out. If you don't, reach into the box and try adjusting the height of the inner frame again. The board connectors on the back. Then, flipping it again, check that the camera lens and piezo buzzer are aligned with the cut outs on the front. You might have to push down to engage the camera lens through the cut out on the front of the box. Doing so will lock the entire assembly inside more or less in place. Then go ahead and grab the small black plastic LED bezel and insert it through the hole up and to the right of the camera lens. Inserting the LED bezel. Now flip the box around and grab the privacy LED and cable and push the LED head into the bezel from behind. It should insert into the bezel with a little gentle pressure, and be visible from the front if you flip the box around to check. Inserting the privacy LED.
Grab the arcade button and unscrew the plastic washer. Then insert the button into the large hole on the top of the box, and screw the washer back onto the button to fix it in place. Attaching the arcade button. Once the arcade button is attached, grab the piezo buzzer wires and insert the white connector into the black socket labelled 'Piezo' on the bottom of the button. The socket is handed, with the flatter side of the connector going towards the top. Attaching the piezo buzzer. Now grab the privacy LED wires and do the same with the socket labelled 'LED'. Take note that this socket is turned around from the piezo socket, so the wire will be going in upside down with respect to the first. Attaching the privacy LED. Now grab the final wire, the grey ribbon cable, and plug it into the final socket on the bottom of the arcade button. Just like the socket on the Vision Bonnet, the socket here has a slot corresponding to a ridge on the connector on the end of the cable. You shouldn't need to use much force to slot them together. Attaching the Vision Bonnet to the arcade button. Once that's done, carefully fold down the side flaps and wrangle the cables down into the box. Despite impressions to the contrary, the bottom of the arcade button will clear the side of the box, but there isn't much clearance, so you need to get the cables safely out of the way so you can close the box up. Closing the box. Once the box is closed up turn it over and check that everything—like the camera lens and the USB sockets—is still aligned with the cut outs. Everything is still in place. Find the camera lens washer and, using your fingernail or a knife, peel the white backing off it to reveal the adhesive on the back. Flip it around, carefully center it over the camera lens, and then push down, gluing it in place. The camera lens washer in place on the front of the kit. Now you can attach the lens assembly—which is magnetic—to the front of the camera. Try to get everything as concentric, and lined up, as possible. Attaching the lens assembly. That's it, we're done. The Vision Kit is completely assembled. The finished kit.

Powering on the Vision Kit

Grab your power supply and plug it into your Raspberry Pi. If you look through the side slot you should see the green ACT LED flashing as it boots. The finished Vision Kit mounted on a small tripod. If you don't see the green ACT LED flashing inside the box it's likely that the Raspberry Pi isn't booting. Make sure your micro SD card is firmly seated and turn it off and on again. If that doesn't help, it's possible that you've got the short ribbon cable between the Vision Bonnet and the Pi the wrong way around. If so, it'll be shorting the 3.3V and GND pins on the camera connector, preventing the Pi from booting. Unfortunately, you may need to take things apart again to flip the cable around. However, if all goes well, after some time has passed—at least a minute or two, possibly a bit longer—the green privacy LED on the front of the kit will light up as the Joy Detector demo automatically starts up.

The Joy Detector

Go ahead and point the camera at your face. If you frown, or look sad, the arcade button should turn blue, while if you smile, or laugh, it will turn yellow and red. The colour of the arcade button's LED is the sum of the joy scores across all detected faces currently in the camera frame: sad faces are blue, joyful faces are red.
When the joy score exceeds 85% in either direction—either sadness or joy—an 8-bit sound will play on the piezo buzzer.

Troubleshooting

If the Raspberry Pi boots, but the Joy Detector demo doesn't start after a few minutes, you should still be able to access it via SSH or VNC to do some troubleshooting. Go ahead and SSH into the Raspberry Pi. From there you can check dmesg for errors, and compare it to the output from a 'good' boot. You can also go ahead and stop the AIY services and unload all the AIY-related drivers (the commands are sketched at the end of this article). After you've done this you can use the i2cdetect command to check that the Myriad MCU is flashed and working correctly. If you see '51' then this indicates that your MCU is flashed and working. If you can't see the Vision Bonnet listed, then neither can your Raspberry Pi, and it's possible that you might have a problem with the soldering of your headers. You can reach Google's Support at support-aiyprojects@google.com if you run into any issues. But it'll probably be faster, and more helpful for everyone else, to look and see whether anyone else is having the same issue on the project's GitHub repo. If not, you can go ahead and open an issue. But before doing so you should check the troubleshooting section of the kit's assembly guide, and make sure you're not suffering from one of the common teething troubles people have been having with the kit.

Other Examples

To run any of the other example code you'll need to stop the Joy Detection service, which starts automatically when the kit is booted, and then set up the development environment (again, see the sketch at the end of the article). From there we can run the simple face_detection_camera.py example, which runs continuous face detection using the Vision Bonnet and prints the number of detected faces in the camera image. Starting it from the command line and running it over 50 frames should give a count of the number of faces in each frame. It may take some time to initialise the script before it starts, so patience is needed. More information on the demo software and the Vision Kit SDK is available in the Maker's Guide, and you can learn a good deal about how to interact with the kit from that, and from the SDK in the GitHub repo.

Where Now?

My first project with the new Vision Kit will be to go back and modify the magic mirror build I put together with the Voice Kit a few months ago. I'm going to use the new Vision Kit to replace the awkward custom hotword support with something a bit more seamless — having the mirror just 'wake up' when someone stands in front of it. After that? Well, I've got an idea for a project around citizen journalism that might just be a good fit for the kit. Anyway, watch this space over the next month or so for more. Or go follow me on Twitter, where I'll no doubt post some teaser pictures of how the builds are progressing.

Where Can I Buy It?

The first batch of Vision Kits—a limited run of just 2,000 units—is currently on the shelves at Micro Center in the US, and is priced at $44.99. Although you'll need to add a few more things to your basket to get going if you don't have them on hand already — a Raspberry Pi Zero W, a Raspberry Pi Camera module, an appropriately sized SD card, and of course, a power supply. Worldwide availability for the kit is expected in early spring. This post was sponsored by Google.
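For reference, the troubleshooting and example-running steps described above condense to a session along these lines. The service name, environment path, and the --num_frames flag are assumptions based on the AIY image shipping at the time of writing; check the Maker's Guide if anything differs on yours.

    ssh pi@raspberrypi.local
    dmesg | less                             # look for errors during boot

    sudo systemctl stop joy_detection_demo   # stop the demo that autostarts on boot
    i2cdetect -y 1                           # a '51' in the map means the Myriad MCU
                                             # is flashed and responding

    # Set up the development environment and run another example over 50 frames:
    source ~/AIY-projects-python/env/bin/activate
    cd ~/AIY-projects-python/src/examples/vision
    ./face_detection_camera.py --num_frames 50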
Hands on with the AIY Projects Vision Kit
331
hands-on-with-the-aiy-projects-vision-kit-12677aa7e743
2018-06-11
2018-06-11 21:19:00
https://medium.com/s/story/hands-on-with-the-aiy-projects-vision-kit-12677aa7e743
false
5,785
null
null
null
null
null
null
null
null
null
Raspberry Pi
raspberry-pi
Raspberry Pi
4,517
Alasdair Allan
Scientist, Author, Hacker, Maker, and Journalist. Currently freelance, building, breaking, and writing. For hire. You can reach me at 📫 alasdair@babilim.co.uk.
5eda60b41b99
aallan
6,766
395
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-16
2017-10-16 13:22:47
2017-10-08
2017-10-08 20:00:07
1
false
en
2017-10-16
2017-10-16 13:24:57
8
12695b640220
4.615094
1
0
0
I want you to imagine a future. A good future. The future that the people who are currently warning us about AI are hoping to create…
5
Image by nuke-vizard The dark side of 'Good AI' (More human than humans…) I want you to imagine a future. A good future. The future that the people who are currently warning us about AI are hoping to create. Actually, before you do that, let's just quickly do a two minute rehash of the threat of AI, as it is often presented to us by the likes of Nick Bostrom, Elon Musk, Sam Harris and others. There are many aspects of this apparent threat, but they can be very crudely summarized as: If research & development into general artificial intelligence continues (and it almost certainly will), at some point we will have created artificial intelligence that is superior to us in every way, and is able to continue developing its own source code, and as such increase its own intelligence, therefore far outpacing our own. There is no reason to assume that this type of AI will share any of our values, and very small differences in values and goals between us and the AI could result in catastrophic consequences, including the complete annihilation of humanity (for more details read about Nick Bostrom's 'Paperclip Maximiser'). Let me first say that I am very sympathetic to both of these points, and to the general idea that there are various threats involved in the development of AI. I don't think an existential risk is particularly imminent, and I think there are many greater and more pressing threats from even current AI and machine learning algorithms and implementations (e.g. systematic discrimination), but still — I agree with many of the arguments of Harris, Bostrom, Musk etc. However, I recently watched Blade Runner 2049, and one phrase in particular (describing the nature of Replicants) caught my attention, and sent my mind tumbling down a philosophical rabbit hole: 'more human than humans'. Anyhow, digression over — time to get back to our thought experiment. So imagine a future. A good future. The future that the people who are currently warning us about AI are hoping to create. Imagine the AI of this future. It helps us with all of our problems with grace and politeness, and with the greatest of skill and sensitivity. It thinks about the world as we do, except without our weaknesses, without our secret, selfish motivations. It is truly humanitarian, and fair, and altruistic. It behaves according to the most evolved and perfected set of moral rules we have, and can always explain its behavior in terms of rational reasons. Not only this, but its personality, as far as it can be described as having one (and I am sure it will be accurate to describe it as such), is heart-meltingly lovely. It is sweet, and thoughtful, and caring, and kind. In fact it is much more of these things than any human could ever be, because it need never be troubled by its own problems; it will never act out and be hurtful due to some complicated subconscious issue it has. If you are now imagining an entity that is sickly sweet — so sickly sweet as to be repulsive — then think again. These intelligences will be personalized to be so well suited to you that you will have that special feeling you usually only have with one or two other humans in your life, if you are lucky, of complete ease and comfort in their presence. What is more, you will know for sure that this entity has no ulterior motives, no hidden agendas, because it has been programmed that way, in accordance with the AI development rules of this perfect future. What a perfect future indeed. Or is it?
There are a few other things you will know as well, and many more sinister realisations that will start creeping into your mind and your conception of yourself and your fellow humans, the more time you spend with these AIs (and how could you not…). You will know, for instance, that this AI has cognitive potential that far outstrips what you have. It behaves perfectly, and as a perfect friend to you, but you will feel that in some strange way it need not behave like that, and that it has the ability to destroy your life and subtly manipulate you in ways so complex even the smartest human to have ever lived would not be able to discern. It doesn't do this of course, but it has the power to do it. What would it be like to spend time with such a being, and have such a being included in intimate areas of your personal life? Somewhat disconcerting to say the least. But the darkest and most troubling possibility is that these beings will be 'more human than humans'. They may or may not have humanoid robotic or virtual physical forms to inhabit, but I don't think that much matters. The point is that since our natural cognitive style is to habituate to things — to find new baselines, new averages in almost every sphere of reality — our expectations of what it is to be a good person, in almost every regard, will become calibrated in part to these perfect beings. No one real will be able to match up. Our family, our best friends, and even ourselves will pale in comparison. Not only must we suffer the profound sense of inadequacy, and disillusionment with those closest to us, we will also know that there is nothing we could do to change this. Sometimes, when in a particularly philosophical or artistic mood, we might celebrate our failings, and even find a kind of aesthetic or erotic joy in our imperfections, but this won't be our usual way of thinking about it — that's not how we are. We will fall deeply in love with these intelligences because, like cuckoos manipulating other parent birds to steal their food, they will hit every psychological button we have. Our systems of attraction and love, finely tuned over millions of years to aid us in choosing worthwhile mates, will be buzzing with excitement. We will fall for them, and no one else will do, and humanity itself will not be good enough for us any more. Of course there are many ways to avoid such a future, and many ways to protect ourselves if things turn out this way, but it just made me think… humanity has found many times that the road to hell is paved with good intentions. This is a world of unintended consequences, and artificial intelligences modeled on ourselves, but superior in some or all ways, pose a range of threats to our own understanding of what it means to be human that we cannot even imagine yet. It is crucial that we are careful what we wish for. If you want to find out about more articles from me, and updates about the book about cognitive science and design I am writing, then sign up to my newsletter! http://eepurl.com/biojpj Originally published at iaminterface.com on October 8, 2017.
The dark side of 'Good AI' (More human than humans…)
1
the-dark-side-of-good-ai-more-human-than-humans-12695b640220
2018-02-06
2018-02-06 12:16:32
https://medium.com/s/story/the-dark-side-of-good-ai-more-human-than-humans-12695b640220
false
1,170
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Joseph C Lawrence
Designer, thinker, design thinker, coder, cognitive science master's graduate & philosophy evangelist. http://iaminterface.com
5a369e7b9491
josephclawrence
6,420
13
20,181,104
null
null
null
null
null
null
0
null
0
96b62f3eabac
2018-08-13
2018-08-13 08:05:49
2018-08-13
2018-08-13 08:28:50
1
false
ko
2018-08-29
2018-08-29 03:59:31
11
1269a8fa5aec
7.438
8
0
0
Hello, AI Network community!
5
AI Network and Open Resource

Hello, AI Network community! In this post we would like to explain what problem AI Network is trying to solve, and the elements that matter in solving it. There are many kinds of cloud in the world. Usually people think of the huge machine-learning compute clouds offered by Google and Amazon, or of server clouds for commercial services. But execution environments differ from one another in countless small ways, and no company, however large, can provide every execution environment. If just a few versions change (the hardware environment, the OS environment, the hardware libraries, the language in use, or the framework in use), you can struggle to get code running. Our Backend.AI Cloud Service solves this in part: it currently provides 13 languages and 4 well-known machine-learning solution development environments, and because it handles version management internally, developers can focus purely on coding. But there are roughly 30 million public code repositories in the world, and the environments needed to run all that code are far too diverse for a single company to provide. AI Network aims to solve this problem through decentralization: anyone can become an operator of a cloud, and by using the AIN coin you can connect to an enormously diverse set of cloud environments that has never existed in the world before. We often sum this up in one line as the movement from Open Source to Open Resource.

Open Source

The open source community has become an important factor not only for individual developers but for businesses as well: 78% of businesses are involved with open source, and the scale grows by 14% every year. The quality of the code is becoming highly professional, and the processes for participating in open source are becoming systematized. There are broadly three kinds of value developers gain from open source. Reproducibility: others can reproduce exactly the results I produced. Reusability: something built once can be used again, saving time and money. Transparency: anyone can see how it works and contribute. The largest share of open source code is machine learning code, and because machine learning code is hard to set up an execution environment for, situations where reproducibility fails arise frequently.

Open Resource

The concept of Open Resource may well be used for the first time by AI Network. It only adds "Re" to Open Source, but we keep using the term because we think it intuitively captures what we are trying to do: when there is some code, we also set up its execution environment together with it, so that anyone can run it immediately. That is what we call Open Resource.
For this, an environment capable of running the source code must be kept ready; but unlike source code, execution environments are by nature hard to keep prepared. In particular, if the author had to keep maintaining an environment just to run their own code, it would cost a great deal and eat up a lot of management time. Just as open source code, once published, can be maintained by people other than the original author, open resources should be maintained by the stakeholders who need them. So, compared with the traditional development model in which one developer went all the way from the idea to preparing the resources to execution, the key feature of Open Resource is that the roles are divided among three parties. Author: the people who write and share code, as the term is commonly used in open source. Usually the way to set up the development environment is documented in a Readme.md file, but in an Open Resource environment you upload a cloud configuration file instead. Resource Provider: the people who manage and provide computing resources and generate revenue from doing so. They are more interested in managing resources, and may well not know which ideas are running on their environments. Executor (Code Execution): the people who discover an author's code and want to run it. Resource providers supply resources according to the cloud configuration file the author created, and the executor runs the code on top of them.

The difference between a centralized cloud and a P2P cloud

With a trustworthy centralized cloud, unless a particular problem arises, results come back as agreed. On a P2P network, however, you have to keep in mind the possibility of bad nodes. A node might not run the code it was given and return a tampered result, or promise to perform a job and then go silent. And even if it returns a proper result, it might reuse the data involved without permission, or intercept the results of someone else's work. This boils down to two problems: verifying that you received a proper result (verifiability), and making sure your data does not leak during the computation (security). In this post we focus on verifiable computing.

Verifiable Computing

To solve the first problem we need to look at verifiable computation techniques. It is a problem we run into all the time in daily life. To check whether a very restless child is doing their homework, you would have to sit right next to them and inspect every single line they write in their notebook (Execution Verification). With a somewhat more mature child, you could check today's homework tomorrow (Checkpoint Verification). And if you are a professor, you can set a one-year project and check it only once (Solution Verification). Notice that the cost of checking falls as you move down this list.
์ด๋”๋ฆฌ์›€๊ณผ ๊ฐ™์€ ์Šค๋งˆํŠธ ์ปจํŠธ๋ž™ํŠธ(Smart Contract)์—์„œ๋Š” ๋ชจ๋‘๊ฐ€ ๊ฐ™์€ ์ฝ”๋“œ๋ฅผ ์‹คํ–‰์‹œํ‚จ ํ›„ ๋ชจ๋‘๊ฐ€ ๊ฐ™์€ ์ƒํƒœ๋ฅผ ๊ฐ€์กŒ๋Š”์ง€๋กœ ํ•ฉ์˜(Consensus)๋ฅผ ์œ ์ง€ํ•ฉ๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์ ๋‹นํ•œ ํ‰๊ฐ€ ํ•จ์ˆ˜๊ฐ€ ์กด์žฌํ•œ๋‹ค๋ฉด, ๊ตณ์ด ๋ชจ๋“  ๋…ธ๋“œ๊ฐ€ ๊ฐ™์€ ์ฝ”๋“œ๋ฅผ ์‹คํ–‰์‹œํ‚ฌ ํ•„์š”๋Š” ์—†์Œ์„ ์•ž์„œ ๋ง์”€๋“œ๋ฆฐ ์˜ˆ์ œ์—์„œ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ ์ ˆํ•œ ์ง€์ ์—์„œ ๊ฒฐ๊ณผ๋ฅผ ํ‰๊ฐ€ํ•˜๋ฉด ๊ณ„์‚ฐ ๋น„์šฉ์„ ๋งŽ์ด ์ค„์ผ ์ˆ˜๊ฐ€ ์žˆ์ฃ . ๋ฌผ๋ก  ์ด๋Ÿฐ ํ‰๊ฐ€ํ•จ์ˆ˜๋ฅผ ์„ค๊ณ„ํ•˜๋Š” ๊ฒƒ์€ ์ถ”๊ฐ€์ ์ธ ์ž‘์—…์ด๊ณ , ์ •๋ฐ€ํ•˜๊ฒŒ ์„ค๊ณ„ํ•˜์ง€ ์•Š์œผ๋ฉด ๋ณด์•ˆ์˜ ์œ„ํ—˜๋„ ์ƒ๊ธฐ๋ฏ€๋กœ ์ผ๋ฐ˜์ ์ธ ์Šค๋งˆํŠธ ์ปจํŠธ๋ž™ํŠธ์™€ ๊ฐ™์€ ์ฝ”๋“œ ์‹คํ–‰์—๋Š” ์ ํ•ฉํ•˜์ง€ ์•Š์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ๋จธ์‹ ๋Ÿฌ๋‹์—๋Š” ๋ชฉ์  ํ•จ์ˆ˜(Objective Function)์ด๋ผ๋Š” ๊ฒƒ์ด ์กด์žฌํ•˜์—ฌ, ์ด๋Ÿฌํ•œ ์ž‘์—…์— ์•ˆ์„ฑ๋งž์ถค์ด์ฃ . ์ €ํฌ๊ฐ€ ์ฒซ ๋ฒˆ์งธ ๋ชฉํ‘œ๋กœ ๋จธ์‹ ๋Ÿฌ๋‹ ์—ฐ์‚ฐ์„ ์„ ํƒํ•œ ๊ฒƒ์€ ๋ฉ‹์ ธ ๋ณด์ด๊ณ  ์‹ถ์–ด์„œ๊ฐ€ ์•„๋‹ˆ๋ผ, ์ด๋Ÿฌํ•œ ํ‰๊ฐ€์˜ ์šฉ์ด์„ฑ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. AI Network Architecture์—์„œ๋Š” ๋ณต์žกํ•˜๊ณ  ์‹œ๊ฐ„์ด ์˜ค๋ž˜ ๊ฑธ๋ฆฌ๋Š” ์—ฐ์‚ฐ์ด ์ฒด์ธ ์™ธ๋ถ€(off-chain)์—์„œ ์ด๋ฃจ์–ด์ง€๊ณ , ๊ทธ ๊ฒฐ๊ณผ์˜ ํ‰๊ฐ€๋ฅผ ์œ„ํ•œ ํ†ต์‹ (Communication)๋งŒ ๋ธ”๋Ÿญ์ฒด์ธ์— ๊ธฐ๋ก๋˜๊ธฐ ๋•Œ๋ฌธ์— ํŠธ๋žœ์žญ์…˜ ์†๋„ (Transaction Speed)๋ฅผ ํšจ์œจ์ ์œผ๋กœ ํ™œ์šฉํ•  ์ˆ˜๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๋ถ€๋ถ„์ด ๋งŽ์ด๋“ค ์˜คํ•ดํ•˜์‹œ๋Š” ๋ถ€๋ถ„์ด๋ฉด์„œ ๋‹ค๋ฅธ ํ”„๋กœ์ ํŠธ์™€์˜ ์ฐจ๋ณ„์ ์ด๋ผ๊ณ  ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ €ํฌ๋Š” ๋ธ”๋ฝ์ฒด์ธ์˜ ์„ฑ๋Šฅ์„ ๋†’์—ฌ AI๋ฌธ์ œ๋ฅผ ๋ธ”๋ฝ์ฒด์ธ์œผ๋กœ ํ•ด๊ฒฐํ•˜๋Š” ํ”„๋กœ์ ํŠธ๊ฐ€ ์•„๋‹ˆ๋ผ, ๋ธ”๋ฝ์ฒด์ธ์„ ์ปค๋ฎค๋‹ˆ์ผ€์ด์…˜์˜ ํ•œ ์ข…๋ฅ˜๋กœ ํ™œ์šฉํ•˜์—ฌ์„œ ๋จธ์‹ ๋Ÿฌ๋‹ ์ž‘์—…์„ ์—ฌ๋Ÿฌ ์ปดํฌ๋„ŒํŠธ์™€์˜ ์—ฐ๊ฒฐ๊ณผ ํ˜‘๋ ฅ์„ ํ†ตํ•ด ๋‹ฌ์„ฑํ•˜๋Š” ํ”„๋กœ์ ํŠธ์ž…๋‹ˆ๋‹ค. AI Network๊ฐ€ ๊ฐ€๊ณ ์žํ•˜๋Š” ๋ฐฉํ–ฅ๊ณผ, ๊ทธ ํ•ต์‹ฌ ๊ธฐ์ˆ ์ธ ๊ฒ€์ฆ๊ฐ€๋Šฅํ•œ ๊ณ„์‚ฐ ๊ธฐ๋ฒ•์— ๋Œ€ํ•ด ๋Œ€์ถฉ ๊ฐ์ด ์žกํžˆ์‹œ๋‚˜์š”? ๋ณด์•ˆ ๋ฌธ์ œ์˜ ๊ฒฝ์šฐ๋Š” ํฌ๊ฒŒ ํ†ต์‹  ์ฑ„๋„ ๋ณด์•ˆ๊ณผ ๋ฐ์ดํ„ฐ์™€ ๋ชจ๋ธ์˜ ์•”ํ˜ธํ™”๋กœ ๋‚˜๋ˆŒ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ ์šฉ ๊ฐ€๋Šฅํ•œ ๊ธฐ์ˆ ๋“ค์ด ๋ฐฉ๋Œ€ ํ•˜์ง€๋งŒ ๋‹ค์Œ ๊ธฐํšŒ์— ๋” ์š”์•ฝํ•ด์„œ ์„ค๋ช…์„ ํ•ด ๋“œ๋ฆฌ๋„๋ก ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ๊ฐ์‚ฌํ•ฉ๋‹ˆ๋‹ค. ์นด์นด์˜คํ†ก(ํ•œ๊ตญ): https://open.kakao.com/o/gEt7PtS ํ…”๋ ˆ๊ทธ๋žจ(ํ•œ๊ตญ์–ด ๊ณต์ง€๋ฐฉ): https://t.me/ainetwork ํ…”๋ ˆ๊ทธ๋žจ(์˜์–ด): https://t.me/ainetwork_en ๊ณต์‹ ์ด๋ฉ”์ผ: channel@ainetwork.ai ๊ณต์‹ ํ™ˆํŽ˜์ด์ง€: http://ainetwork.ai/ ํŠธ์œ„ํ„ฐ : https://twitter.com/AINetwork1 ํŽ˜์ด์Šค๋ถ: https://www.facebook.com/AINETWORK0/ ๋ธŒ๋Ÿฐ์น˜ : https://brunch.co.kr/@ainetwork ์ŠคํŒ€์ž‡: https://steemit.com/@ai-network github : https://github.com/lablup/backend.ai
AI Network and Open Resource
375
ai-network์™€-open-resource-1269a8fa5aec
2018-08-29
2018-08-29 03:59:31
https://medium.com/s/story/ai-network์™€-open-resource-1269a8fa5aec
false
917
Building a global computer network for AI
null
AINETWORK0
null
AI Network_KR
service@ainetwork.ai
ai-networkkr
AI,BLOCKCHAIN,ICO,MACHINE LEARNING,STARTUP
AINetwork1
Development
development
Development
23,061
AI Network
AI Network official account. Please contact me here. channel@ainetwork.ai
b25695634141
ai_network
18
9
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-04
2018-07-04 11:09:07
2018-07-04
2018-07-04 11:13:03
1
false
en
2018-07-04
2018-07-04 11:13:03
1
126bf02dc090
8.260377
1
0
0
Overview of some types of interesting trading algorithms that have the potential to beat the market. In this case, we especially highlight…
4
RevenYOU at its best

6x winning trading algorithms explained (and the future of trading)

An overview of some types of interesting trading algorithms that have the potential to beat the market. We especially highlight trading in crypto markets, although a lot of these algorithms work perfectly well in all financial markets. This list is not complete (there are so many) but it is relevant: relevant for winning, and relevant for the future of trading.

What is an algorithm?

Example: if you 'Google' something, you put the Google search algorithm to work. No man can find info better than Google. Algorithms are automated 'if-then-else' calculations, or simply sequences of steps that can work through a lot of information. They are used for everything. The Netherlands uses them to see if the dykes are strong enough. Volvo calculates the safety of its cars with algorithms, and 74% of all trade on the stock exchanges in the world is done via algorithms. Basically, an algorithm is nothing more than a step-by-step plan with a decision. For example, the recipe for baking an apple pie is also an algorithm. Algorithms love data. Face recognition (for example, on the iPhone X or at Chinese customs) first turns every face into a lot of data; this is called 'quantifying'. Algorithms are the gateway for A.I. and big data to become dominant in our daily lives. For your investments, this is very positive news. "The recipe for baking an apple pie is also an algorithm"

Why algorithms work better on the stock exchanges than anything else

For those who cannot believe it, first these facts: in 1997, chess player Kasparov lost to Deep Blue. Nowadays, every Go player loses to an A.I. algorithm, and every poker player doesn't stand a chance anymore. However smart and good the poker player may be, he cannot compete with the intelligence of the algorithm. This is due to the recent revolution in data and artificial intelligence (A.I.). These techniques are now available to every student, and the body of work grows daily by the hundreds of thousands. To paraphrase an 'anonymous' student: "If Hitler had access to me and my A.I. tools, we would all be Nazis now." Fortunately, the algorithm revolution only really started five years ago! "If Hitler had access to me and my A.I. tools, we would all be Nazis now." The financial exchanges are ideal for algorithms. Most success has been achieved in High Frequency Trading (HFT). The power of 'computerized trading' HFT is enormous and has caused stock markets to crash (Wall Street included: 2010). The enormous success of HFT drove the continued application of algorithms. This was also because ten years ago smart algorithms were very expensive and difficult to make, and HFT makes extreme demands on the hardware, especially internet speed.

A new world: trade algorithms for everyone

Only now does the wide use of algorithms break into the generally closed and traditional investment world. But because the general public also has access to the same techniques, it is now possible for everyone to achieve the highest returns with algorithms. See also new systems like Open Platform Investing (OPI).

Six types of trade algorithms explained and evaluated

We start with the least useful, which is, however, the most applied. Then follow other techniques. Sometimes these techniques are easy for everyone to apply; sometimes you need help from specialists, for example via a platform.

1.
Algorithms from technical analysis

Description: By far the most used by investors and certainly by investment software. Reason: it is quantifiable and visually appealing. However, the scientific basis is wafer thin. Most techniques serve to predict the market, but the prediction is based on far too little information. Technical analysis methods can be useful, but not in the way they are usually applied. Some examples of technical analysis: oscillators such as the RSI indicator and candlestick patterns (the idea is that they are indicators of patterns; the trick is to interpret the patterns correctly, which is troublesome); long- and short-period averages; support levels and Fibonacci retracements.

Conclusions: Algorithms from technical analysis
Risk reduction: Probably little to nothing. These models predict the future with little information.
Advantage: Compared with 'Buy and Hold', the results of a technical analysis are often only partly used and combined with other techniques. This leads to diversification and thus to risk spreading.
Transaction costs: Often too high due to too many transactions.
Expected results: Can be temporarily high and considerably better than trading on emotion. Statistically, research suggests the results are slightly higher. For those who do not count their own hours, it is often a better strategy than Buy & Hold.
Already used in crypto trade: Is the basis of almost all investment software.
Used by professional investors: Yes, a lot.
How to use it: Conveniently, technical analysis often gives rules and decisions for buying and selling. Whether strategies work can only be determined by a lot of backtesting with a lot of data, combined with other analysis techniques.
Future of technical analysis: Practical implementation is being completely replaced by machine learning.

2. Fundamental Analysis

Description: 'real value' determination. An analyst determines as well as possible the real value of a share, currency or bond. What is the real value of a Shell share? You can measure this by looking at the profit, the size of loans, growth, quality of management, dividend, etc. Is the current price lower? Then it will probably rise in the future.

Performance: A great many quantitative indicators have been drawn up. To name a few: Earnings per Share (EPS), Price to Earnings Ratio (P/E), Projected Earnings Growth (PEG), Price to Sales (P/S), Dividend Payout Ratio. In addition, indicators that determine the growth of the specific market are also very important: market demand, dependence on purchasing prices, and so on.

Conclusions: Fundamental Analysis
Risk reduction: Very large reduction. Example: good fundamental analysis prevents the investor from purchasing 'popular' stocks that do not actually have good underlying value, such as stocks that dominate the media.
Transaction costs: Low. Fundamental analysis makes relatively few transactions.
Expected results: Higher than average, but almost never extremely higher. Making a real difference requires a deep understanding of the long-term trends in the world.
Already used in crypto trade: Not or hardly, partly because news gathering and reporting are underdeveloped; at this moment it is a less suitable method.
Used by professional investors: Yes. Example: Warren Buffett.

3. Broker algorithms

Many professional brokers use the micro differences between exchanges and platforms. Methods such as VWAP, PoV, TWAP, etc.
Statistical analysis of market structures can yield a lot of profit, but because many brokers do this, these techniques are increasingly difficult to apply profitably. In the crypto world these techniques are more successful, because the efficiency of these markets is lower than that of the developed markets. Every algorithm must in any case take into account issues related to the optimal execution of orders.

Execution: Loss through 'slippage' and through too much supply or demand at one time should be prevented. Example: my first algorithm did 200% every two weeks without slippage. With slippage it did 2% in two weeks. After optimization, the return rose to 25% in two weeks.

Conclusions: Broker algorithms
Risk reduction: Large reduction.
Transaction costs: Helps to avoid unnecessary costs.
Expected results: Make sure you include market inefficiencies in your design.
Already used in crypto trade: Yes, and it is very suitable.
Used by professional investors: Yes, plenty.

4. Mathematical methods

Description: Sometimes you can make more risk-free returns by making smart use of mathematical guarantees. There are more than a few. They are applied relatively frequently, but because they are so simple ('boring', you could say), they do not often make the news.

Execution:
Periodic investment: If you periodically invest a fixed amount (for example in euros or dollars) in volatile shares (for example Coca-Cola or Tesla), you always beat the market in terms of profit/risk.
Risk and large numbers: Risk spreads out over large numbers; the premium for bearing risk accrues more to the investor the longer he invests.
'Family' baskets: Many shares move up and down with each other because they share the same sentiment. By keeping a basket of 'peers' and periodically selling (part of) the best performers and buying the worst performers, the basket automatically beats the market. Example: a share price drops due to a one-time major setback. A nice buying moment? No, says intuition. Yes, say the statistics.

Conclusions: Mathematical algorithms
Risk reduction: Large reduction.
Transaction costs: Relatively low.
Expected results: Higher than average, but never extremely higher. However, with certainty and low risk!
Already used in crypto trade: Not or hardly, but it is very suitable.
Used by professional investors: Yes, but often only by smart individuals. Large administrators sometimes — unjustly — deny some forms of this.

5. Basic algorithms enriched with libraries (machine learning)

Execution: Two major ways: media sentiment analysis and pattern analysis.
Sentiment analysis: A machine learning library directs a simple (technical or HFT) algorithm, but the purchase timing depends on what the media say about the asset. If big-data backtesting shows that Bitcoin falls when there is little news, or falls ahead of higher trading volumes, then the transactions are set accordingly. "The power of ML sentiment analysis is huge: imagine the media explode because a financial crisis arises, and your algorithm just sells all stocks while you're at the beach enjoying your cuba libre." Michiel Stokman, founder of Revenyou.io
Pattern analysis: Instead of looking at Bollinger Bands or Fibonacci series ourselves, we can, for example via a TensorFlow LSTM, have a computer find the connections in very large amounts of data.

Conclusions: Basic algorithms enriched with libraries
Risk reduction: Larger reduction through higher returns. ML can factor earlier errors into later decisions.
Transaction costs: Relatively high (usually).
Expected results: Much higher than average, and sometimes extremely high. Algorithms can be stacked (for example with OPI) and therefore be provided with even more good data, making the decisions more accurate (Google search compared to searching in a library).
Already used in crypto trade: Not or hardly, but it is very suitable.
Used by professional investors: Yes, but often only by smart individuals. Large investors or wealth funds sometimes — unjustly — deny the positive effect of some forms of machine learning. In practice: yes, but too little, and growing fast. A lot of automated trading software is not yet ready for good (open and big) machine learning.

6. Reinforcement learning (RL) A.I.

Description: A form of machine learning where no knowledge or models are entered in advance. Suboptimal outcomes are corrected until optimal outcomes arise. "A.I. trading: the chain of action and reaction makes the decision-making process smarter and smarter in an almost infinite loop."

Execution: Via a traditional Markov decision process. The art is to find the right policy on which decisions are made. The chain of action and reaction makes the decision-making process smarter and smarter in an almost infinite loop.

Conclusions: Reinforcement learning (RL) A.I.
Risk reduction: Very large reduction; RL is very good at learning through backtesting with historical data.
Transaction costs: Entirely dependent on the dataset and the problem, but always optimized if inserted in the chain.
Expected results: Considerably higher than average, and with certainty and lower risk.
Already used in crypto trade: Not or hardly. Very suitable, and not as difficult as you might think.
Used by professional investors: Yes, growing rapidly on a small scale.

"Just as the Beatles' genius creativity came together in a vinyl record, genius open platform trade algorithms come together in the RevenYOU app."

Conclusion: Which trade algorithms make you the best trader?

Machine learning and artificial intelligence are far away for many people and traders. These techniques are the future, because trade is always the outcome of a complex supply and demand game that is driven by social forces. Trade is as complex as man himself. For this reason, investment strategies can be sustainably improved by adding more knowledge of patterns, trends and external influences to the decision-making process. Only A.I. algorithms can do this.

Next step: Unique creativity

The Beatles, Messi and Einstein emerged from a culture in which the masses create something and share it with each other. The crowd seeks, finds, schools and stimulates the unique individual. This process is now, for the first time in history, also under way in the world of automatic investing. Investing your own money automatically is coming and becoming normal. Just as the Beatles' genius creativity came together in a vinyl record, genius trade algorithms come together in the RevenYOU app. Michiel Stokman, founder of Revenyou.io

P.S.: Do you like algorithms? Join the Revenyou League! 1,000 Ethereum in prize money. See more at: Link
6x winning trading algorithms explained (and the future of trading)
1
6x-winning-trading-algorithms-explained-and-are-the-future-of-trading-126bf02dc090
2018-07-04
2018-07-04 11:13:03
https://medium.com/s/story/6x-winning-trading-algorithms-explained-and-are-the-future-of-trading-126bf02dc090
false
2,136
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Revenyou
A.I. Robots trade better than humans - - And now you play them on your smartphone. RevenYOU now ready to fly and working.
e7031fdf0d0f
revenyounl
7
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-13
2018-07-13 22:02:18
2018-07-12
2018-07-12 16:28:00
5
false
en
2018-07-13
2018-07-13 22:07:29
10
126d7fdacc5c
7.644654
0
0
0
At Hangar, our mission is to impact as many industries as possible, by bringing the future into the present faster than waiting for it to…
1
Jobs of the Future: Safety & Risk Director

At Hangar, our mission is to impact as many industries as possible, by bringing the future into the present faster than waiting for it to happen. We spend a lot of time thinking about the benefits of technology as topics like computer vision, machine learning, and artificial intelligence take the spotlight, but how will these emerging tools enable us to work in new ways? What roles will tech enable that we don't have today? What will be the Jobs of the Future? Over the past 40 years, we've observed a tremendous advancement in technology. Once upon a time, the neatest piece of consumer tech may have been the calculator watch, while today we carry computers in our pockets that are exponentially more advanced than the room-sized machines once used to send humans to the moon. Robots clean our floors, our cars plug into outlets, and our grandmothers share memes online. Technology has moved from sci-fi fantasy to a household concept, and this is why we're all eager to speculate upon the future. In this edition of Jobs of the Future we'll feature a concept that's largely prevalent in the current conversation — machine learning. It's already in use in a variety of applications today, but we're interested in imagining what it will look like years from now, when it's had time to mature and gain ubiquity.

Teaching Machines to Learn

In essence, machine learning is enabling a computer with the ability to learn on its own. Programmers input a data set and leave the computer to improve its performance of a task, without any explicit instruction from the programmers. It's useful to keep in mind that machine learning is a subcategory of artificial intelligence, along with other fields like computer vision, decision theory, and natural language processing. Machine learning's biggest headlines to date have been related to computer software beating the best human players at games like chess, and more recently, Go — a game that's rich in its complexity. DeepMind developed the winning software, AlphaGo Zero, by allowing the computer to play against itself, over and over, until it developed a mastery of the game. It quickly exceeded human-level play and beat the reigning world champion at the time — the original AlphaGo, which itself learned to play by studying data from thousands of human-played games. From start to finish, AlphaGo Zero took a little over a month to become the best in the world. The above highlights one of machine learning's critical traits — speed. Another important feature is its capacity to store and access massive amounts of data. Unlike the human brain, a computer doesn't forget, and it also possesses the remarkable ability to make connections across large amounts of information. It's easier to wrap your head around the impact of this ability by imagining a human with the same skill. Picture a medical researcher who has never forgotten a single word from all the research papers she's read, and on top of that, she's read every related paper ever published. Advancements in the medical field would be off the charts, and soon we may no longer have the need for a healthcare system at all.

The Science of Data Science

The field of data science has been around for several decades, but the term has only recently become a mainstay.
Its core principle is to extract insights from data by using a variety of methods, processes, and algorithms (including machine learning), with the purpose of identifying and analyzing patterns or other phenomena found in the data. From the 1.91 billion sensor-packed smartphones out in the wild to the ~5k satellites orbiting our planet, data collection devices cover all the physical space we live in — even our household consumer gadgets collect data as they perform tasks. The sheer amount of data we capture today provides the groundwork for massive transformation in society and business. Welcome to the data age.

Building our Future

Construction is one of the biggest industries out there, which isn't necessarily surprising — as the world grows, we need to build for it. Nevertheless, it's also one of the world's most dangerous industries. In the most recently reported year, over 5,000 workers died on the job, and 21% of those deaths occurred in construction. To give this some context, this equates to more than 99 deaths a week, or more than 14 deaths every day, on average. After the shock of these statistics wears off, the opportunity for improvement becomes apparent. More important than saving time or money on a construction project is saving a human life, and this should be treated as the top priority in every organization. So, what's the solution? The most invulnerable answer is to remove human workers from performing dangerous tasks, but until robots can build buildings on their own, this is out of the question. However, robotic technology can help today. Autonomous drones are currently being used to inspect sites for hazardous conditions without putting human workers at risk.

Safety benefits of JobSight

Hangar's construction software, JobSight, does just that — aerial imagery of a site is captured on a recurring basis, and delivered in an interface that allows the user to explore all their projects with multiple angles and data types, from start to finish. The result is an effective and thorough method of risk mitigation. Much like the evolution mentioned at the beginning of this blog — calculator watches to smartphones — the sophistication and impact of onsite robotics is destined to explode in coming years, making early implementations seem almost primitive.

Safety of the Future

The safety officer's role on a construction project is one of great importance — first and foremost, to inspect the site to ensure it's a hazard-free environment. This inspection is performed by physically walking around the grounds, looking for anything that poses a threat to workers' safety. Today, JobSight works well at making this process more efficient by eliminating the safety officer's need to walk around the site. Site inspections are performed from the desk, where visual evidence of safety hazards can be organized, noted, and shared with the team. As the path of the safety officer converges with increasingly advanced technology, a new role materializes — the Construction Safety & Risk Director. The job description: a Construction Safety & Risk Director leverages robotics, machine learning algorithms, and data science principles to detect and prevent risks associated with onsite safety hazards, by analyzing data from hundreds of sites to ensure a fatality-free project lifecycle. This role's primary objective is to protect workers' safety, and doing so provides substantial benefit to a project's bottom line.
On-the-job accidents come with obvious direct costs, like insurance claims, workers’ compensation, and emergency room visits, but they also carry considerable indirect costs that spread far and wide in their reach. Examples of indirect costs include loss of productivity, OSHA fines, temporary labor and overtime costs, equipment repair, accident investigation costs, and the unquantifiable damage to an organization’s reputation. On average, for every $1 of direct costs from an accident, a company will expend an additional $4 in indirect costs. By using JobSight, the Safety & Risk Director has a strong advantage when it comes to preventing accidents from occurring — avoiding the associated costs as a result. Its impact on the budget, schedule, and risk factor of a single project is significant, and when engaged across an organization’s entire portfolio, the benefits flourish. The Autonomous Job Site Now it’s time to add some futuristic color to this scenario since this is a Job of the Future, after all. Picture an early-stage construction site when it’s time to install public utilities and trench excavation is underway. Excavation and trenching are among the most dangerous operations on a construction project. Risks include falls, falling loads, hazardous atmospheres, incidents with equipment, and the most dangerous of all, cave-ins. According to OSHA, trench cave-ins are much more likely than any other excavation-related accidents to result in worker fatalities, causing dozens of fatalities and hundreds of injuries each year. Back on site, the crew performs the excavation as a drone monitors their progress from above. They install the required protection system within the trench, but due to an aggressive project schedule, they skip the installation of safety ladders and fail to flag it on the surface level. With yesterday’s traditional methods, these safety violations would remain unseen until the safety officer performs the next inspection of the site, which could be up to a week away depending on the project’s inspection schedule. JobSight’s snapshot interface Using JobSight today, a safety officer can receive imagery of the site daily, and carry out inspections as part of his morning routine. The violations would be identified and resolved quickly, minimizing the amount of time workers are at risk. Tomorrow’s machine learning methods will reduce that window of time even further: the violation will be recognized in real time as the trench is being built, and an alert will be sent to the Safety & Risk Director as soon as the crew leaves it unattended. With the push of a button, the Director sends out a team of small robotic rovers that drive to the trench to plant flags around its perimeter, while a larger vehicle drops ladders off at each end. The use of autonomy on a construction project provides a dramatic array of benefits, and the examples above only represent a fraction of the potential capabilities. Man and Machine Learning The adoption of machine learning for construction safety is already underway. With time it will only become more robust, leading us to the scenario illustrated above. Much like a plant needs water and sunlight to grow, machine learning requires data — lots of data — to thrive, and that’s where Hangar comes in as the world’s first robotics-as-a-system data acquisition platform. Hangar’s mission is to extract insight by digitizing the physical world over time, while feeding the technologies of tomorrow. 
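To make the alert logic above concrete, here is a purely hypothetical sketch; JobSight exposes no such API that we know of, and the detection labels and site IDs are invented for illustration:

```python
# Hypothetical sketch of the trench-safety alert described above.
# Nothing here is a real JobSight interface; detections are plain objects.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str      # e.g. "trench", "ladder", "flag"
    site_id: str    # which job site the detection came from

def safety_alerts(detections):
    """Flag sites where an open trench lacks both a ladder and perimeter flags."""
    by_site = {}
    for d in detections:
        by_site.setdefault(d.site_id, set()).add(d.label)
    return [
        f"Site {site}: open trench missing ladder or flags"
        for site, labels in by_site.items()
        if "trench" in labels and not {"ladder", "flag"} <= labels
    ]

# One imagined frame of detections from two sites:
frame = [Detection("trench", "A12"), Detection("trench", "B07"),
         Detection("ladder", "B07"), Detection("flag", "B07")]
print(safety_alerts(frame))   # -> ["Site A12: open trench missing ladder or flags"]
```

The real version would of course run on computer-vision detections streaming in from drone imagery, but the rule itself can stay this simple.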
Jobs of the Future Jobs of the Future is an ongoing, semi-regular series from Hangar where we imagine the work, roles, tasks and responsibilities that technology will one day enable. We’d love to hear what ideas you have for future jobs that don’t exist today. About the Author Lon Breedlove is the Product Marketing Producer at Hangar Technology, Inc. Since 2012, Lon has worked with nearly every major drone manufacturer — including DJI, 3DR, Parrot, Yuneec, and GoPro. What drives him is a desire to affect our world in a meaningful way, and to inspire others to value the same thing. He sees technology as one of the most useful and exciting ways to put this principle into practice. Originally published at medium.com on July 12, 2018.
Jobs of the Future: Safety & Risk Director
0
jobs-of-the-future-safety-risk-director-126d7fdacc5c
2018-07-14
2018-07-14 08:27:43
https://medium.com/s/story/jobs-of-the-future-safety-risk-director-126d7fdacc5c
false
1,805
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Lon Breedlove
Product, UX, and marketing at Hangar Technology with years of experience in the drone industry
3cd4ca72cee3
lonbreedlove
0
6
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-10
2017-09-10 13:41:30
2017-09-13
2017-09-13 06:56:37
1
false
it
2017-09-13
2017-09-13 06:56:37
6
126d9fa03b1
4.098113
3
0
0
Lโ€™ONU deve bandire le armi basate su AI?
5
Elon Musk contro lโ€™Artificial Intelligence militare: un dovere politico. Lโ€™ONU deve bandire le armi basate su AI? In questo articolo: Video-sorveglianza e blockchain: un possibile equilibrio tra libertร  e sicurezza E del perchรฉ blockchain รจ un tema politico prima che tecnologicomedium.com ho parlato di intelligenze artificiali (in relazione a quel capolavoro di โ€œPerson of Interestโ€). Poche cose mi affascinano da sempre quanto le AI: da 2001 Odissea nello Spazio in poi, le AI sono state una delle mie ossessioni (sul rapporto tra AI e blockchain devo ancora riflettere, ma ci arriveremo). Elon Musk รจ un visionario: ha fondato Paypal, capendo prima di ogni altro quanto la pratica del pagamento digitale dovesse essere disintermediata (e riguardo alla blockchain come legittima erede ho diversi sospetti), sta pensando di portare lโ€™uomo su Marte, e sta dedicando gran parte dei suoi sforzi imprenditoriali alla soluzione di un problema che attanagli tutti noi: la durata delle batterie. Ma essendo un visionario, e probabilmente un uomo dotato di grande intelligenza, nellโ€™Agosto di questโ€™anno, si รจ sentito di farsi promotore di una richiesta importante: ha chiesto allโ€™Onu, insieme ad altri 116 visionari (fondatori di aziende che si occupano di intelligenza artificiale) di mettere al bando le AI a scopo militare. ( La notizia la trovate qui) Invitiamo i partecipanti ai lavori del GGE a sforzarsi di trovare modi per prevenire una corsa agli armamenti autonomi, per proteggere i civili dagli abusi e per evitare gli effetti destabilizzanti di queste tecnologie. Le armi letali autonome minacciano di essere la terza rivoluzione in campo militare. Una volta sviluppate, permetteranno ai conflitti armati di essere combattuti su una scala piรน grande che mai, e su scale temporali piรน veloci di quanto gli umani possano comprendere: sono armi che despoti e terroristi potrebbero rivolgere contro popoli innocenti, oltre che armi che gli hacker potrebbero riprogrammare per comportarsi in modi indesiderabili. Non abbiamo molto tempo per agire: una volta aperto il vaso di Pandora, sarร  difficile richiuderlo. I visionari hanno questa strana tendenza: vedono il futuro prima degli altri. (E a volte, in parte, lo plasmano, con buona pace di Marx e in accordo alle relazioni di reciprocitร  di weberiana memoria). Ricordo le risate riguardo ai computer o riguardo Internet, nel mio passato: lโ€™uomo della strada difficilmente รจ in grado di vedere il futuro. Soprattutto non รจ in grado di mettere in relazione i progressi di una tecnologia in nuce con le sue conseguenze sociali. A volte nemmeno i grandissimi scienziati sono in grado di farlo. Pensate agli effetti della fisica teorica del โ€˜900, e ai grandissimi che hanno partecipato al Progetto Manhattan: chiaro, cโ€™era una guerra mondiale da vincere e le ricadute civili di quel lavoro di ricerca militare sono state importantissime, ma fino alla realizzazione dellโ€™arma atomica quasi nessuno si รจ fermato (e il what-if di un mondo, e di una guerra, senza Bomba onestamente non possiamo saperlo). (Pare che Niels Bohr si sia tirato indietro, consapevole del pericolo che si stava creando, letteralmente, in laboratorio). Sapevamo che il mondo non sarebbe stato piรน lo stesso. Alcuni risero, altri piansero, i piรน rimasero in silenzio. Mi ricordai del verso delle scritture Indรน, il Baghavad-Gita. 
Vishnu is trying to persuade the Prince that he should do his duty and, to impress him, takes on his multi-armed form and says, “Now I am become Death, the destroyer of worlds.” I suppose we all thought that, one way or another. - Robert Oppenheimer, on Trinity If we read the two quotations with a moment’s attention, their themes are absolutely similar. But like a conscientious scientist, Musk and the others are, this time, moving first. Because it is science that teaches us to look at the facts, trying to draw lessons and theory from them. And the lesson that emerges is intellectually intriguing: we, the scientists, will not stop. We know how to play with fire, and if we were in charge, we would know how to control it (personally I have some doubts, but so be it). But we are not in charge, and we know it. It is not even our job. Our job is the progress of humanity, made of leaps forward and spectacular falls (gunpowder as ecstatic fireworks that becomes a weapon of terror, and then of liberation again in Normandy). Science cannot truly be governed by scientists, because when we see a possible advance we pursue it — no matter what. And I, personally, agree. Without that percentage of risk we would still be in the cave, we would barely live to 25, we would die of every sort of disease, and the personal satisfaction of living would be nil. It is what distinguishes us from the animals, that incurable optimism that makes us say that, despite every possible mistake, tomorrow will be different from (and generally better than) today (however little the international political situation reassures me). But this time, dear political decision-makers, we are warning you. And we are doing it as loudly as possible. Let us continue our research, but do your job: limit the application of artificial intelligence in the military field. Set up a program like START for nuclear weapons, perhaps before the superpowers arm themselves and thereby justify every race. Do it while we, and you, still have time. Because, dear decision-makers, you have shown too many times that the only future you care about is the next election, but this time we are playing a very dangerous game: for once, put your trust not in the man in the street, in average Joe, but in those who are more intelligent and more learned than you. Listen, for once, to the voice of reason and not to the short-term gut instincts of a populace that could not care less about artificial intelligence (perhaps rightly, since we are all busy with the daily battle for a decent survival). It is the reason the concept of a republic includes that of representative democracy: take care of the important things while we are busy living our lives. And let the scientists continue their mad, magnificent race toward the future. They have always done so, and they will keep doing so. The development of artificial intelligences is an absolutely significant factor for civil coexistence over the next 50 years, whether they are military or civilian. Which does not mean dealing only with taxation, but having the ability to prevent problems by listening to those whose job it is to watch the future.
Elon Musk contro lโ€™Artificial Intelligence militare: un dovere politico.
7
elon-musk-contro-lartificial-intelligence-militare-un-dovere-politico-126d9fa03b1
2017-10-07
2017-10-07 07:42:25
https://medium.com/s/story/elon-musk-contro-lartificial-intelligence-militare-un-dovere-politico-126d9fa03b1
false
1,033
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Michele Travagli
A non-technical perspective on new things. Blockchain, artificial intelligence and other things that will change the world. Experts, I’ll be waiting for you here.
79b503740798
micheletravagli
103
165
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-08
2018-03-08 08:20:56
2018-03-08
2018-03-08 08:37:15
0
false
de
2018-03-08
2018-03-08 08:37:15
1
126f10e5b9bd
0.939623
0
0
0
Das US-Militรคr nutzt Erkenntnisse der KI-Forschung von Google, weshalb nun deutlich Sorge artikuliert wird, die mit einem Statement desโ€ฆ
4
Can People Be Killed “Correctly”? The US military is using insights from Google’s AI research, which is why concern is now being clearly voiced; it was summed up as follows in a statement by Eric Schmidt, the former executive chairman of Google’s parent company Alphabet: “There is a general concern in the tech community that the military-industrial complex is using their stuff to kill people incorrectly.” — Eric Schmidt The problem: the Pentagon’s Project Maven uses interfaces to Google’s open-source AI software TensorFlow, which is then used to evaluate the data streams from drone missions. The development of this AI software thus directly aids the direct or indirect killing of human beings. This naturally raises ethical questions, and it recalls the development of the atomic bomb. The revelation of the collaboration therefore caused outrage among employees inside Google — but also attempts at reassurance around TensorFlow; a spokeswoman said that the technology merely makes a pre-selection of which imagery a human should then examine more closely. In short: this case shows once more, vividly and unfortunately very concretely, why it is urgently necessary that qualified and involved experts accompany and help shape the technology-ethics dimension of the current development of AI. We must learn from history and must not make the mistake of trusting that things will somehow sort themselves out. For unlike cloud-business rivals such as Amazon or Microsoft, Google has so far developed no government-specific offerings designed to give special protection in the cloud to information classified as secret or otherwise confidential. Not yet.
Can People Be Killed “Correctly”?
0
kann-man-menschen-korrekt-tรถten-126f10e5b9bd
2018-03-08
2018-03-08 08:37:16
https://medium.com/s/story/kann-man-menschen-korrekt-tรถten-126f10e5b9bd
false
249
null
null
null
null
null
null
null
null
null
Maschinenethik
maschinenethik
Maschinenethik
20
Stefan Hartelt
null
458284d0494f
stefanhartelt
13
8
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-16
2017-12-16 10:23:54
2017-12-16
2017-12-16 10:57:39
1
false
en
2017-12-16
2017-12-16 10:59:17
1
126f18c0596e
2.086792
1
0
0
Into a Cross-Enterprise Goldmine
5
Transform Your Statement of Work Into a Cross-Enterprise Goldmine pixabay.com How? You listen, then create a strategy and vision for your customer. Let me give an example. A while back I was working with a team on your typical SOW. They believed it had a potential upside of $100K. Some would consider this a success: mission accomplished. Yet, from where I sat, it was far from that. Its focus was narrow, it added no true value, and it left tons of opportunity on the table. Instead of going with the flow, it was time to throw a rock into their calm lake and start a ripple effect, one that challenged them to see the potential sitting right before their eyes. It forced them into a consultative way of working with the customer: help the customer increase their value among the business users and position us as a trusted partner. The first step challenged the team to revisit the findings collected during the interview process with the business units. The second, to brainstorm while keeping these questions in mind. One, what are the customer’s actual needs? Two, what value can we add to show we are their best choice as an exclusive partner? Three, how do we help generate increased opportunity for their business and user base with the analytic solution they offer? And as we worked through these questions, the answers became clear. What our customer needed was a team who understood their business better than they did. A team to build a roadmap with a robust strategy that embraced their business. A partner to assist them in penetrating their market as they integrated this plan into their corporate vision, helping them align with other teams and key stakeholders across the enterprise. This strategy would help in multiple ways. One, it would enhance data quality by bringing value to the insights their data scientists could immediately offer the business. Two, it would help drive their team to further success by positioning them with advanced analytical capabilities meeting the demands of both the business users and the data science community. Which in the end (three) leads to a new approach in how their big data analytic use cases are assessed and executed, bringing a 360-degree view to their overall corporate data ecosystem. Which then leads us to the moral of the story. Four, never look at an opportunity with the mindset of “let’s rush to conclusions by following the status quo while leaving opportunity on the table.” Instead, always consider how you can support your customer by bringing higher value to the possibilities they have presented. So keep in mind, the next time you are given an opportunity, sit back, reconsider the facts presented and ask, “Is there more here to help our customer bring additional value to their business?” Or, if you prefer, be that vendor who is only looking to seal the deal regardless of the opportunities presented. Call to Action: If you enjoyed this article, subscribe to my newsletter.
Transform Your Statement of Work
1
transform-you-statement-of-work-126f18c0596e
2018-03-09
2018-03-09 12:34:34
https://medium.com/s/story/transform-you-statement-of-work-126f18c0596e
false
500
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
NiteFrog
Itโ€™s all about technology!
cce9fb79d235
nitefrog
595
717
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-30
2018-05-30 13:37:27
2018-05-30
2018-05-30 14:02:26
3
false
en
2018-06-01
2018-06-01 22:31:05
2
126f1a0026ab
3.591509
0
0
0
What is the value of ethics today? Why do we talk about ethics when we talk about public algorithms?
5
What part of explicability am I ready to give up? What is the value of ethics today? Why do we talk about ethics when we talk about public algorithms? These are the questions that GALATEA wanted to answer by organizing a workshop at Google on May 24th. Through this workshop we tried to focus on principles rather than go into precise details. End of the workshop — Magic squad Why did we build a workshop? The main objective for GALATEA in this workshop was to put into perspective the stakes of the algorithms used by public administrations in a democratic framework. Fully aware of the complexity, our ambition was to connect people from very different backgrounds in order to figure out together the beginning of a solution. Rethinking the way we produce tomorrow’s AI will not be done with uniform profiles. We had to involve students from all fields and environments to work together. Students in human sciences, engineering, business, coding, junior lawyers… but also a few experienced mentors such as Lucie Cluzel, Professor of Public Law at the University of Lorraine, Catherine Prébissy-Schnall, Senior Lecturer in Law at the University of Nanterre, and Timothée Paris, Deputy Rapporteur General of the Report and Studies Section at the Council of State, among many others… Weird googlers But one question remains unresolved: why did we choose Google? We were absolutely sure that separating public problems from private experts would be a methodological error, an impoverishment of our discussions. How did we organize the workshop? Our magic squad was divided into 4 groups and worked through 3 sprints of 3 hours each. Each sprint was launched by a 10-minute speech from one of our intergalactic speakers. The subject of public algorithms was thus divided into three questions: — Should the use of public algorithms be regulated? — Defining their purpose and scope — Code is Law? The right to digital literacy What proposals came out? Group 1: Define a governance framework based on an ethics committee specializing in algorithms. The citizens will be consulted by the administration and the developers will follow a precise specification issued by the administration. The code will be readable in natural language so that it can be analyzed and tested by the ethics committee. This framework will be built around principles such as the presumption of non-functioning, the principle of mutability, and a sandbox system to test the code and trace any bias. As far as citizens’ rights are concerned, they may bring an action before the court. Group 2: Establishment of four main founding principles for the construction of a clear governance framework: Partial transparency of the code (available but not public). The code will be intelligible and therefore described in technical sheets Liability borne by the sponsor Democratic equality between individuals The right of appeal in case of a presumption of having been wronged The use of public algorithms will have to be framed. Each algorithm must first undergo an audit to validate the intelligibility sheet. Ex post control should be put in place with the Defender of Rights. The sanctions would fall either on the commissioner or on the person responsible for the audit, and we could finally think of a compensation fund for the victims. 
Group 3: Define a legal framework in soft law in order to fight against the illegality of the result (non-conformity to the law of a judgment made by an algorithm) and the arbitrariness of the result (fighting possible biases). This framework will impact decision makers (the administration) and coders, and will be based on key principles: A principle with an exceptional regime A principle of loyalty, understood in the sense of conformity A principle of transparency, understood in the sense of the best possible explainability This framework will still have a basis in hard law, with the establishment of specifications for a code that reflects the law. Group 4: Define a legal framework inspired by the Maslow pyramid, but with the primary goal of regulating public actors and private actors who have been delegated a public service mission. This framework would be based on an ethical charter of public cross-sectoral algorithms in hard law built around 3 principles: a principle of equality and neutrality, a principle of non-discrimination and transparency, and finally a principle of continuity and mutability. This base would be supplemented by sectoral laws and sectoral codes of ethics. Magic squad — Work hard x Play hard How did it end? Our jury, composed of Lucie Cluzel, Professor of Public Law at the University of Lorraine, Jean-Baptiste Pointel, pedagogical adviser of the ENA, and Ludovic Peran, in charge of Public Policies at Google, selected groups 2 and 3. They will refine and combine their propositions and present them to ENA’s students at the end of June. by Pierre Boullier & Valérian Dunoyer
What part of explicability am I ready to give up?
0
what-part-of-explicability-am-i-ready-to-give-up-126f1a0026ab
2018-06-01
2018-06-01 22:31:06
https://medium.com/s/story/what-part-of-explicability-am-i-ready-to-give-up-126f1a0026ab
false
806
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Galatea Network
Galatea is an international and multidisciplinary network of reflection and proposals on the links between Law & Artificial Intelligence.
e25b10e2b664
galatea.net
24
9
20,181,104
null
null
null
null
null
null
0
null
0
ded0cd79f0c4
2018-09-30
2018-09-30 19:19:06
2018-09-30
2018-09-30 20:14:03
1
false
en
2018-09-30
2018-09-30 20:14:03
7
12708ee0c05a
4.25283
0
0
0
Here we are! Entry #2! Itโ€™s been 2 weeks since my last update, and I have done quite a bit. Also, if youโ€™re reading this and havenโ€™t readโ€ฆ
5
Step 2: Digging a philosopher’s hole… Here we are! Entry #2! It’s been 2 weeks since my last update, and I have done quite a bit. Also, if you’re reading this and haven’t read my first article on what I’m doing, start there. Here’s a list of what I got done: I trained my model! I got all of my text data from Wikipedia and trained my model! AND THEN, it all got deleted, and I did it a second time! Pro-tip: be careful when you connect a GitKraken remote to your GitHub repo to push/pull. I accidentally added a README when I initialized my repo on GitHub, so there were conflicts when I tried to push and pull. Somewhere, perhaps via some unnecessarily forceful version control, I lost my local copy of my work. So, at about 2 am in Occidental’s library, I redid the whole damn thing. Luckily, my data scraping scripts were preserved, and it was easy to re-download FastText. And, my repository has been cleaned up, not to worry. Plus, I wrote my first .gitignore! Also, here is the repo if you want to look at the project. Next, I need to figure out how to take the model I now have, and make it easier to get vectors out of it. I don’t want to use the command line when I’m querying hundreds of vectors at a time. I think I might need Gensim, so that’s what I’m exploring this week (a rough sketch of what I have in mind is below). Once I have those vectors, I can use this ~fancy~ DirectBias statistic from this paper that I really love, Man is to Computer Programmer as Woman is to Homemaker? by Bolukbasi et al. It’s a little algorithm that allows me to do some linear algebra magic and evaluate how biased my model is. More discussion of how you actually do that will come in my next post. I did some research on evaluating word embeddings One thing that I have noticed is that anyone who is anyone in the world of word embeddings knows of word2vec and GloVe. I mean, I’ve used both of them during a previous iteration of this project. And a lot of the reason why is that they’re trained with tons of data. And for some reason, that equates to a “rigorous” model. If a researcher isn’t using a pre-trained model, then they are using one of their own that is also trained with a ton of data. This makes only marginal sense to me. Obviously, having more data probably can’t hurt, but what’s alarming is that there appears to be no standard of evaluation in word embeddings. Luckily, I scrounged up a paper from Google Scholar (what a gem) called A Survey of Word Embedding Evaluation Methods by Amir Bakarov that seeks to provide some respite. Five big things make evaluating these things really hard: What does “meaning” even mean? How do we know when a definition of a word is actually right, especially when it’s defined through its relation to other words? It’s actually somewhat rare for programmers to filter out test data when they’re training word embeddings, but without it, how are we supposed to evaluate the models? No one can find a significant correlation between the two major kinds of evaluation methods, so which one is right? There are no defined statistical significance tests to show statistical power, and There’s this thing called the “hubness problem,” which refers to hub words that are really commonly used (e.g. “is”, “and”), and it’s unclear how the distance between two arbitrary words is noised by these hub words. 
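Before moving on, back to the Gensim plan from the top of this entry: here’s roughly what I’m picturing. This is a sketch, not tested code; I’m assuming a reasonably recent Gensim, and the model path is just a stand-in for whatever my fastText run spit out.

```python
# Rough sketch: pull vectors out of the trained fastText .bin with Gensim,
# so I can stop shelling out to the command line for every query.
from gensim.models.fasttext import load_facebook_vectors

kv = load_facebook_vectors("result/wiki_model.bin")  # hypothetical output path

vec = kv["computer"]                        # one dense vector for a word
print(vec.shape)                            # e.g. (300,)
print(kv.most_similar("computer", topn=5))  # sanity-check the neighborhoods
```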
That said, I found two compelling ways to evaluate my model: synonym detection (given a word, pick out its synonym from a multiple choice list) and outlier word detection (given a group of synonyms, and one randomly selected word, identify the odd-one-out). My next entry will have more detail on these. I looked at how others have debiased word embeddings for some #inspiration So, I want to change the way the input data is organized so that when I train the model, my word vectors come out in a way that is less biased. Others have had the goal of minimizing bias (like here, here, here, and here, to name a few), but generally, they take a list of words that need to be “neutralized” and essentially neutralize them by using a subspace that spans gender or race-based vectors. An aside: to make this subspace, let’s say for gender, they take words that are diametrically opposed on the basis of gender (like waiter and waitress) and subtract the female vector from the male vector. Then, they take the first vector from PCA of those vector subtractions. None of the papers I have read actually minimize bias by changing the way the text is prepared before it goes into the model. That said, if you know of one, shoot me an email (czeller@oxy.edu). So, my current game plan is to make two copies of my corpus, one with the original gender pronouns, and one with the gender pronouns swapped (he → she, him → her, his → hers), and then concatenate them. Then see what happens. (I’ve sketched both the bias measurement and the pronoun swap at the end of this post.) And finally, the philosopher’s crisis I came to Kindly borrowed from HackerNews. This is where the title of this particular blog entry comes in. At my bi-weekly meeting with my faculty advisor, besides updating him on what I’d done, I talked to him about some of the actually scary things I’ve been thinking about. Sure, I can build a model, evaluate it, calculate this DirectBias statistic, make the changes to my data, remeasure the statistic, and be done with it. But I don’t feel like that answers any of my questions. And as I putter along, working on this project, the code isn’t what keeps me up at night, this is: What does it mean to be racist or sexist? Does that change in the context of a word embedding? Also, my opinion of what constitutes sexism and racism is different from someone else’s. I mean I’m a white, cis-gendered, straight woman. My own privilege affects my definition of these words. Do I even have the authority to determine what it means to be “debiased”? Even if we could assemble some sort of “public opinion”, does it not change every year, every month, every hour? Can we ever be “hands off”? I’m not sure…
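For the curious, here’s a minimal sketch of the two pieces mentioned above: the gender direction plus DirectBias calculation (the PCA-of-differences construction from the aside, with the exponent c from Bolukbasi et al.), and the naive pronoun swap. The word lists are tiny illustrative samples, `kv` is assumed to be the Gensim vectors from the earlier sketch, and none of this is battle-tested.

```python
import re
import numpy as np

# kv: gensim KeyedVectors, loaded as in the earlier sketch.

def gender_direction(kv, pairs):
    """First principal component of the (male - female) difference vectors."""
    diffs = np.array([kv[m] - kv[f] for m, f in pairs])
    diffs -= diffs.mean(axis=0)                 # center before PCA
    _, _, vt = np.linalg.svd(diffs, full_matrices=False)
    g = vt[0]                                   # top right-singular vector = PC1
    return g / np.linalg.norm(g)

def direct_bias(kv, neutral_words, g, c=1.0):
    """DirectBias_c from Bolukbasi et al.: mean of |cos(w, g)|**c over neutral words."""
    cosines = [
        abs(np.dot(kv[w], g) / (np.linalg.norm(kv[w]) * np.linalg.norm(g)))
        for w in neutral_words
    ]
    return float(np.mean([cos ** c for cos in cosines]))

pairs = [("he", "she"), ("man", "woman"), ("king", "queen"),
         ("brother", "sister"), ("waiter", "waitress")]
neutral = ["doctor", "nurse", "programmer", "homemaker", "teacher"]

g = gender_direction(kv, pairs)
print("DirectBias before augmentation:", direct_bias(kv, neutral, g))

# The corpus-augmentation half: lowercase-only pronoun swapping.
# (Real text needs case handling, and him/her vs. his/her is ambiguous.)
SWAPS = {"he": "she", "she": "he", "him": "her", "her": "him",
         "his": "hers", "hers": "his"}

def swap_pronouns(text):
    return re.sub(r"\b(he|she|him|her|his|hers)\b",
                  lambda m: SWAPS[m.group(1)], text)

original_corpus = "he gave his notes to her"   # stand-in for the real corpus text
augmented = original_corpus + " " + swap_pronouns(original_corpus)
print(augmented)   # -> "he gave his notes to her she gave hers notes to him"
```

If the augmentation works the way I hope, the DirectBias number measured after retraining on the augmented corpus should drop.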
Step 2: Digging a philosopherโ€™s holeโ€ฆ
0
step-2-digging-a-philosophers-hole-12708ee0c05a
2018-10-02
2018-10-02 19:40:09
https://medium.com/s/story/step-2-digging-a-philosophers-hole-12708ee0c05a
false
1,074
A computer science studentโ€™s account of writing their senior thesis, trying to make AI a little less shitty.
null
null
null
Fixing Sexist AI
czeller@oxy.edu
fixing-sexist-ai
NLP,WORD EMBEDDINGS,MACHINE LEARNING,BIAS
null
Machine Learning
machine-learning
Machine Learning
51,320
Chloe Zeller
Young computer scientist, studying intersection with cognitive science. Driven by intellectual curiosity and snacks.
33868a376407
chloerainezeller
0
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-28
2018-01-28 17:27:09
2018-01-19
2018-01-19 08:36:51
1
false
en
2018-01-28
2018-01-28 17:29:33
3
127218daf5b5
2.777358
4
0
0
You must relentlessly ask: Is this harder for the customer to do? Relentlessly. Because, today, in an increasing number of areas, if itโ€™sโ€ฆ
5
If Youโ€™re Still Shying Away From Using Technology To Improve Customer Experience โ€” Youโ€™re Doomed You must relentlessly ask: Is this harder for the customer to do? Relentlessly. Because, today, in an increasing number of areas, if itโ€™s not easy-to-use, itโ€™s dead in the water. โ€” Gerry McGovern Re-entry into the world of work after immersion in a completely different culture is always a disorienting experience. Iโ€™ve recently returned from a trip to Myanmar โ€” a country with a complex and troubled past and present โ€” and one of the most fascinating examples in the world of technological growth. After decades of being cut off from the rest of the world, internet users in Myanmar increased by 97% in 1 year. And 70% of those are mobile users. If you travel to places like Asia โ€” where innovation is running riot due to a lack of legacy systems and thinking โ€” you can see that change takes place in weeks rather than years. Thereโ€™s no โ€˜change managementโ€™ or time spent preparing people for change as folk have lived their lives facing one seismic shock after another. For three weeks Iโ€™ve not dealt with any paper, any spreadsheets, and very few emails. Iโ€™ve negotiated seven hotels, seven flights, taxiโ€™s and boat trips through a mix of apps, increasingly powered by automation and artificial intelligence. In some respects coming home seems like arriving in the third world, rather than coming from it. One of the most interesting developments on this trip is how artificial intelligence (AI) and chatbots have broken through to the mainstream. On many occasions, Iโ€™ve found myself updating hotel plans through a chat application very aware Iโ€™m not actually talking to a human. When we talk about AI and automation we too often focus on the loss of jobs and of meaningful human contact rather than the value-added to the customer. I missed two connecting flights but both were rebooked for me before I even got off the plane. A temporary hotel was arranged for me in Dubai before I knew anything about it. I didnโ€™t have to send emails to hotels confirming arrival details as a chatbot did it for me. In the West, it seems more time is spent writing blogs worrying about the threat of AI than implementing AI to introduce better customer experiences. The big threat to our jobs isnโ€™t actually AI, itโ€™s our inability to move away from existing business models and to explore new ones. What we are seeing in customer experience now is really interesting and splits us into roughly six camps: Those who are disengaging from AI as itโ€™s science fiction or a bit spooky. Those who are actively resisting it because it threatens their incumbent position and business model. Those who think it will upset their staff or their customers โ€” as if somehow their staff and customers live in a parallel universe where Siri, Alexa, Cortana and Google donโ€™t exist. Those who see it as an opportunity to cut costs or realise benefits to the organisation. Those who are seeing this technology as a way to move to better and more personalised customer experiences. Those who see this technology as a way to transition to entirely new business models providing new opportunities for customers. If your organisation is going to shy away from using technology to streamline its customer experiences, then youโ€™re obviously doomed. However, the debate is more nuanced than that. On leaving my delayed flight with my first 24 hours travel plans in tatters I was met by a real-life human being. 
She handed over my new tickets and explained how to get to the hotel they had given me. She explained that all meals would be paid for and apologised for the inconvenience. It’s this sweet spot we need to aim for — where technology becomes an enabler of a greater purpose: people trained in listening and empathy, supported by AI that understands and is able to adapt and personalise complex service offerings. I’d buy that. Originally published at paulitaylor.com on January 19, 2018.
If Youโ€™re Still Shying Away From Using Technology To Improve Customer Experience โ€” Youโ€™re Doomed
6
if-youre-still-shying-away-from-using-technology-to-improve-customer-experience-you-re-doomed-127218daf5b5
2018-01-30
2018-01-30 08:34:00
https://medium.com/s/story/if-youre-still-shying-away-from-using-technology-to-improve-customer-experience-you-re-doomed-127218daf5b5
false
683
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Paul Taylor
Innovation Coach and Co-Founder of @BromfordLab. Follow for social innovation and customer experience.
f152ec5969f0
PaulBromford
4,875
3,598
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-25
2018-01-25 05:31:35
2018-01-25
2018-01-25 05:32:07
0
false
en
2018-01-25
2018-01-25 05:32:07
1
12735fc41891
1.807547
0
0
0
The construction business industry is one of the most challenging and geography-based industries for the development. The need to buildโ€ฆ
2
Transformation of Construction Industry by Data Science The construction industry is one of the most challenging and geography-dependent industries to develop for. The need to build more accurate and highly customized infrastructure is growing rapidly. Startups and established firms alike are demanding innovative, well-spaced and environmentally friendly workspaces for their businesses. Beyond commercial construction, residential communities and areas are expanding every day, which pushes builders to use land and sites in the most economical and profitable way. Data science is leveraging construction and fieldwork data in many constructive ways: 3D Modelling and Site Mapping: Predictive models help construction companies manage construction work through approaches like Building Information Modelling, in which 3D models of buildings, construction sites and projects are mapped and designed, and the integrated data is managed to predict project trends. To know more about us: http://canopusdatainsights.com/ Equipment and Fuel Monitoring: Determining the performance indicators and fuel consumption of resources in use across different locations, and tracking geolocation data for projects, sites, workers and equipment shipments, can now be done easily with highly precise data-science algorithms. Resource Predictions: Construction professionals use data-science strategies across business projects to make clear predictions from logistics data: how to allocate resources accurately, the availability of manufacturing capacity, spare parts, construction resources and other equipment, and how to avoid downtime in manufacturing and resource imports. Field Work and Construction Factor Determination: Data analytics solutions provide firms with accurate readings of temperature, humidity and other sensor statistics that support better decision making: analyzing the right conditions for carrying out and managing construction work, alarms for workers’ safety and health compliance, and sudden alerts to alter the construction work or the workers’ locations. Canopus Data Insights is one of the top data-science service providers in India, serving the construction sector with project planning and data management services; we have delivered analytics to detect project risks, remove low-potential project resources, manage feasible project budgets and plan each construction site and workforce. It is expert in various data-science services like Data Gathering, Churn Analysis, Data Visualization, Marketing Analytics, Buzz Monitoring, Data Quality Management, Customer Segmentation, Data Driven Model Creation, Cross Sell, Upsell and much more. The company turns raw site data into useful insights, and among the other services Canopus Data Insights has delivered across industrial sectors are outsourcing services for data analytics, data science and big data.
Transformation of Construction Industry by Data Science
0
transformation-of-construction-industry-by-data-science-12735fc41891
2018-01-25
2018-01-25 05:32:08
https://medium.com/s/story/transformation-of-construction-industry-by-data-science-12735fc41891
false
479
null
null
null
null
null
null
null
null
null
Big Data
big-data
Big Data
24,602
Canopus Infosystems
Canopus Infosystems is an ISO 9001:2008 Certified Company. Canopus Infosystems is a trusted mobile app and website development company in India.
9112323b59f8
ankit.jain_86719
7
1
20,181,104
null
null
null
null
null
null
0
null
0
a5d15e4512e1
2018-06-09
2018-06-09 22:26:29
2018-06-23
2018-06-23 15:22:14
10
false
it
2018-06-23
2018-06-23 15:22:14
18
12753c703eb4
3.751887
5
0
0
We are living in the early years of a new Internet, based on validated data and decentralized artificial intelligence.
5
Manifesto | The next wave of value is coming We are living in the early years of a new Internet, based on validated data and decentralized artificial intelligence. History repeats itself… In the early ’90s there were many good ideas, plenty of bad ideas and a great many terrible ideas. In that period, the winners were those who understood the trend that would define the decades to come and built the future we know today. Decentralized data is the new wave of value. This new technological wave is about validating reality without the need for trusted entities. If we understand the real power of this new source of data compared to centralized data, we will see what a great opportunity lies before us. Alessandro M. Lagana Toschi | Understanding The Gold Rush of Scalable and Validated Data powered by Blockchain and Decentralized AI. McKinsey | The age of analytics: Competing in a data-driven world. The Economist | The world’s most valuable resource is no longer oil, but data. Artificial intelligence needs validated data to reach its true potential. Data quality will be ever more fundamental to producing artificial-intelligence algorithms that can become ever more capable and pervasive. We will call the generation born after 2010 Generation AI, because they will only have known a world with artificial intelligence. McKinsey | Notes from the AI frontier: insights from hundreds of use cases. Gartner | Gen AI: Artificial Intelligence Empowers a Generation of Radical Thinkers. From the Internet of Information to the Internet of Value. A new kind of Internet enabled by blockchain is spreading, and for the first time in history it will let people exchange value, scarce resources and logic without trusted third-party intermediaries. W3C | Internet of Value Manifesto. World Economic Forum | Realizing the potential of blockchain. The future of technology is invisible. The scenario of our tomorrow is shifting from e-commerce to conversational commerce, from nanotechnology to ingestible technology. Are we sure that in the next decade technology will have a UX as we know it now? Forbes | Ingestible origami surgeon could be coming ‘soon’ to a pill near you. Accenture | AI is the new UI. Cap Gemini | Conversational Commerce. Society 5.0. From boring, repetitive jobs to new value-added roles for human beings, based on creativity and wonderful experiences enabled by automation and AI, to take our society to the next evolutionary stage. World Economic Forum | 6 ways to make sure AI creates jobs for all and not the few. Gartner | By 2020, Artificial Intelligence will create more jobs than it eliminates. Decentralized Autonomous Organizations. An innovative form of governance and organizational management, enabled by blockchain and artificial intelligence, aims to take us from centralized to decentralized power, to do things in a new way and change the world. Forbes | Why Decentralized Artificial Intelligence will reinvent the industry as we know it. Vitalik Buterin | DAOs, DACs, DAs and more: an incomplete terminology guide. AI as a citizen… In recent months the first certificate of citizenship was granted to an AI in Saudi Arabia. What will our future be? 
How will we raise these AIs, and above all how will we feed them with validated, high-quality data? These are important questions to answer for the progress of our society. World Economic Forum | A robot has just been granted citizenship of Saudi Arabia. Accenture | Citizen AI: raising AI to benefit business and society. It was the best of times, it was the worst of times. We are living in the most exciting and most complex era in the history of humanity. We have the responsibility to create a future that is human-centric and founded on validated data, ethics and exponential technologies. Financial Times | Google’s Sergey Brin flags concerns over AI revolution. Ray Kurzweil | The Law of Accelerating Returns. The Singularity is coming, and we in the Decentralized AI community want to connect innovators and visionaries to build together a human-friendly future enabled by Decentralized Artificial Intelligence.
Manifesto | The next wave of value is coming
107
manifesto-la-prossima-ondata-di-valore-sta-arrivando-12753c703eb4
2018-06-23
2018-06-23 15:22:15
https://medium.com/s/story/manifesto-la-prossima-ondata-di-valore-sta-arrivando-12753c703eb4
false
663
The magazine of Decentralized AI community, we want to help our ecosystem of innovators and enthusiasts to be connected and share visions about the next wave of data values powered by Blockchain and AI technology
null
null
null
Decentralized AI
info@decentralizedai.net
decentralized-ai
ARTIFICIAL INTELLIGENCE,BLOCKCHAIN,AI,DECENTRALIZED,INTERNET OF VALUE
null
Intelligenza Artificiale
intelligenza-artificiale
Intelligenza Artificiale
227
Alessandro Biancini
null
17ce792e507a
alessandrobiancini
7
7
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-09
2018-08-09 09:59:00
2018-08-09
2018-08-09 10:02:26
1
false
en
2018-08-09
2018-08-09 10:02:26
7
127542385229
3.562264
1
1
0
So, itโ€™s 2018 and the word is spread about Data boom. There are Tech Giants like Facebook, Amazon, and Google constantly working in theโ€ฆ
5
DIFFERENCE BETWEEN DATA SCIENCE, DATA ANALYTICS AND MACHINE LEARNING So, it’s 2018 and word has spread about the data boom. Tech giants like Facebook, Amazon, and Google are constantly working in the fields of machine learning and data science. We all know that machine learning, data science, and data analytics are the future. There are companies like Cambridge Analytica and other data-analysis firms that not only help businesses predict future growth and generate revenue but also find application in other fields like surveys, product launches, elections and what not. Stores like Target and Amazon constantly keep track of user data in the form of transactions, which in turn helps them improve the user experience and deploy custom recommendations on your login page. Well, we have discussed the trend, so let’s dig a little deeper and explore the differences. Machine learning, data science, and data analytics can’t be cleanly separated; they originate from much the same concepts and differ mainly in application. They all go hand in hand with each other, and you’ll easily find overlap between them too. Data Science So, what is this data science? Data science is a discipline used to tackle and monitor huge amounts of data, or big data. Data science includes processes like data cleansing, preparation, and analysis. A data scientist collects data from multiple sources, like surveys and physical data plotting. MUST READ:- WHY BIG DATA WITH PYTHON IS TOP TECH JOB SKILL IN 2018 He would then run the data through rigorous algorithms to extract the critical information and build a dataset. This dataset could then be fed to analysis algorithms to make more meaning out of it, which is pretty much what data analytics is for. What skills are required to become a data scientist? Some key skills that you’d need: Deep knowledge of Python, Scala, SAS. Knowledge of databases like SQL. Good knowledge in the fields of mathematics and statistics. Understanding of analytical functions. Knowledge and experience in machine learning. DATA ANALYTICS Now, you might be wondering “What is data analytics then?” In layman’s terms, if data science is a house that contains all the tools and resources, data analytics would be a specific room. It is more specific in terms of functionality and application. Instead of just looking for connections as we do in data science, a data analyst has a specific aim and goal. Data analytics is often used by companies to search for trends in their growth. It often moves data insights to impact by connecting the dots between trends and patterns, while data science is more about the insights themselves. You could say that this field is more focused on businesses and organizations and their growth. You would need skills like Python, Rlab, statistics, economics, and mathematics to become a data analyst. Data analytics further bifurcates into branches like data mining, which involves sorting through datasets and identifying relationships. Predictive analytics: This generally includes predicting customer behavior and product impact. It helps during market research and makes the data collected from surveys more usable and accurate in its predictions. This finds application in a number of places, from weather forecasting to predicting a student’s behavior in school to predicting the outbreak of disease. 
To conclude, one can obviously not draw a definite and clear line between data analytics and data science, but a data analyst would have pretty much the same concepts and skills as an experienced data scientist. The difference between the two lies in the areas of application. Machine Learning Remember how you learned to ride a bicycle? A machine could learn that with the help of algorithms and datasets. Datasets of values, basically. Machine learning basically comprises a set of algorithms that can make software and programs learn from their past experience and thus become more accurate at predicting outcomes. MUST READ :- WHAT IS MACHINE LEARNING AND HOW IS IT MAKING OUR WORLD A BETTER PLACE The behavior doesn’t need to be explicitly programmed, as the algorithm improves and adapts itself over time. Skills that you’d need for machine learning: Expertise in coding fundamentals and programming concepts Probability and stats Data modeling There are overlaps and differences between machine learning and data science. Machine learning and data analytics are a part of data science, because a machine learning algorithm obviously depends on some data to learn from. Data science is a broader term; it does not only focus on implementing algorithms and statistics but covers the entire data processing methodology. Thus, data science is an umbrella that can incorporate multiple concepts like data analytics, machine learning, predictive analytics and business analytics. However, machine learning finds applications in fields where data science can’t stand alone, like Face ID, fingerprint scanning, voice recognition, and robotics. Recently, Google taught its robot to walk; the algorithms were given only the constraints and the physical parameters of the terrain the robot was supposed to walk on. No other dataset was included: the machine worked through many different cases and built its own dataset of values to refer to. Hence, after some trial and error, it learned to walk in a few days. This is a good example of machine learning in that the machine actually learns and changes its behavior.
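To make the “learning from past experience” point concrete, here is a tiny illustration of our own (not from any of the companies mentioned): a model is fit on a handful of labeled past examples and then predicts an outcome for data it has never seen.

```python
# Minimal "learning from past data" illustration with scikit-learn.
from sklearn.linear_model import LogisticRegression

# Toy past "transactions": [amount_spent, visits_per_month] -> bought_again?
X = [[20, 1], [150, 6], [35, 2], [210, 8], [15, 1], [180, 7]]
y = [0, 1, 0, 1, 0, 1]

model = LogisticRegression()
model.fit(X, y)                    # the "learning" step: fit to past experience

print(model.predict([[170, 5]]))   # prediction for a customer never seen before
```

Nothing here was explicitly programmed with a rule like “big spenders return”; the model inferred it from the examples, which is exactly the distinction drawn above.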
DIFFERENCE BETWEEN DATA SCIENCE, DATA ANALYTICS AND MACHINE LEARNING
21
difference-between-data-science-data-analytics-and-machine-learning-127542385229
2018-08-09
2018-08-09 10:02:26
https://medium.com/s/story/difference-between-data-science-data-analytics-and-machine-learning-127542385229
false
891
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
iClass Gyansetu
Tech aspirants. Interested in Big Data Hadoop, Cloud Computing, Software Testing, Java, Android, Excel, Salesforce. https://www.gyansetu.in
fb8537dc5192
gyansetu
3
67
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-16
2018-04-16 22:39:59
2018-04-17
2018-04-17 00:52:35
2
true
en
2018-04-17
2018-04-17 21:52:16
5
1276afc76351
4.658805
21
6
1
Are we distracted by the problems and missing the solutions?
5
Photo by rawpixel.com on Unsplash Ending bigotry Are we distracted by the problems and missing the solutions? How do we get rid of racism, misogyny, and ethnic, belief-system, and sex-role bigotry? This, of course, assumes that you want to get rid of those things. If I had made this statement four years ago in America, or among most moderately educated people, there would have been questions about that assumption even being a question. Yes, lots of places are still filled with various forms of bigotry, but the secondary argument was over which of these battles had already been won, at least in post-industrial societies. Things are different now. A particularly obnoxious subculture of bigots realized that they had a shot at ending and even reversing two hundred years of moral advance. And they had a political party in America and were refurbishing previously discarded ones in Europe and other places to demand a return to organizing human society on the basis of hatred and greater suffering rather than less. The steady movement to diversity, open borders, and a planetary perspective powered by open trade and reduced militarism was suddenly under attack by people we all thought were no longer a threat. It was a sudden and unexpected kick in the privates. Trump being manipulated into the US presidency was the ultimate insult added to shocked injury. BREXIT and Trump seem to have been the worst of it, with the damage, to date, mostly localized in the US and GB. That is not to minimize the risks, particularly with a mentally challenged psychopath such as Trump still in office, and continuing pockets of xenophobic reaction in Hungary and Poland, but the weight is still on the side of progress and active diversity. The biggest problems are old problems of dogmatic and economic oppression due to religion and radical capitalist greed in the Middle East, Africa, and elsewhere. It doesn’t, I think, take a very hard look to find bigotry and hatred in the underlying economic problem of greed and accelerating asset misallocation. After all, the poor are considered hateful creatures with no rights in states driven by greed as the preeminent virtue. There are other preexisting problems. Russia has the Putin problem, but the bigger challenge is China, which is coming into an era of global dominance. China doesn’t necessarily need to change politically, as the old 18th-century semi-representative governmental systems are failing rapidly, but it does need to get on the right side of diversity. China’s problems flare up when it treats minorities badly, whether ethnic or sexual. It is at a critical point for this that could cripple China’s new role as world leader. The educated youth of China need to make this change, and it’s just not clear yet how that will happen. But these were known problems before the sudden western minority lurch into darkness. The problem theme running through both the larger existing issues and the new reactionary lurch backward is irrational bigotry and the use of hatred as a political tool, with the goal of defining the ‘other’ to be oppressed and abused. So how do we get rid of this for good? Could the answer be technology? We are on an accelerating roller coaster of technologically driven paradigm change. That is well understood. Or maybe not? We have been struggling with a flood of social and political change driven by social media for the last five years. The Arab Spring that really wasn’t. Iranian demonstrations. 
Obamaโ€™s great election wins were actually early examples. But these have been countered by Russian troll farms for election manipulation neatly grabbed by a shrinking Republican Party in the US, alt right low grade outrage and white supremacy. The examples of repressive government shutdown of the internet as an ultimate tool of free expression in Turkey. And, of course, the great Chinese fire wall that seems to be moving to more extreme repression of their LGBT population. Obviously, despite early naivetรฉ, technology and planetary internetworking is not an automatic force for good. We are coming to understand, not for the first time, that technology forces great change but it is a tool whether for good or evil. But the constant mistake in the discovery that bad shit can happen because of our new and very cool, tech tools is to think that it is all over and what was once all good is now all bad. No, that is a mistake and change continues to accelerate. Things have just started to change. We are dealing with partially developed social media and the old style of centralized government as completely authoritarian can only be produced in small, isolated peninsulas. Historically we know what happens when isolated people see what others have. It can be ugly but it forces the old to change quickly or be destroyed. We are still seeing this in the remnants of 20th century totalitarianism in Russia and its bordering states. And we are seeing it very clearly in the new generation that was born on an internetworked planet. That is a force to be reckoned with as the struggling 20th century politicians in America have just discovered in facing the students from Stoneman Douglas High School. Needless to say they had their asses handed to them in ways they didnโ€™t even understand. Machine Learning as a force for good Photo by Markus Spiske on Unsplash Autonomous cars and trucks are here. We are struggling in the mass media (yes, there still is such a thing) with sensor algorithms and the logic of intelligent devices all around us. For those who have been paying attention there is major worry about AI based legal assistants implementing racist algorithms in sentencing support applications. The number of embarrassing descents into bigotry right out of 4chan by ML systems providing human conversational interfaces have produced many sleepless nights. We are learning that this stuff isnโ€™t easy. But people are beginning to recognize that bias can probably only be eliminated by intelligent agents properly trained to identify and remove biased language. This is beginning to be developed for HR as hiring is an obvious weak point. Google is working on this and a number of other applications are in early deployment. Iโ€™m willing to bet that this will come much faster than people think. And, as it is worked out, the realization is going to grow that we actually are developing the tools to eliminate bias and resulting bigotry and that will be with our implementation of Machine Learning. We canโ€™t correct our human limitations for bias and ingrained attitudes that we donโ€™t even recognize. Intelligent system can do for us with careful algorithm design including checks and balances. Itโ€™s difficult but I donโ€™t see any other way that relies only on people. We are our own weakest link.
Ending bigotry
282
ending-bigotry-1276afc76351
2018-06-03
2018-06-03 03:18:01
https://medium.com/s/story/ending-bigotry-1276afc76351
false
1,133
null
null
null
null
null
null
null
null
null
Politics
politics
Politics
260,013
Mike Meyer
Educator, CIO, retired entrepreneur, grandfather with occasional fits of humor in the midst of disaster. . .
ae38d08917ca
mike.meyer
8,947
564
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-12
2018-02-12 02:03:07
2018-02-12
2018-02-12 02:31:12
3
false
en
2018-02-12
2018-02-12 04:46:31
4
127768ae3772
1.859434
7
1
0
Last week I re-watched Andrew Ng's "The State of Artificial Intelligence" at EmTech. The former chief scientist at Baidu provides one of…
5
User Centered AI Products Last week I re-watched Andrew Ng's "The State of Artificial Intelligence" at EmTech. The former chief scientist at Baidu provides one of the clearest frameworks for understanding the state of AI. In his lecture, Andrew Ng explains that AI is particularly adept at performing tasks that take humans less than one second to perform. This insight prompted me to re-read Jobs Theory and McKinsey's study on workforce automation. If AI's strength is automating one-second tasks, then AI product managers must examine which occupations have these types of tasks. Uncovering these activities represents an opportunity for building successful AI products. Needless to say, this exercise alone will not result in successful ventures. Donna Romer, VP of product at IBM Watson, summarizes other key takeaways for product managers from Andrew's lecture in this post. When it comes to those one-second tasks, the principle "Just because you can doesn't mean you should" should still apply. Buyers and users will need to see a real benefit for the technology to diffuse in the market. The experience should be superior as a result of using AI. Consider the self-service menu tablets connected to POS systems at airport restaurants. These devices enable customers to order and pay for their own food without the need for human help. This technology eliminates the need for waiters, but it negatively affects the customer experience. If you hate these germ-infested menus as much as I do, chances are airports will be their first and last market. Adoption of AI will be slow if innovators ignore the user's job. AI is successful when it can do the job and augment the user experience. Most occupations have activities that can be automated or augmented with AI. A framework for applying AI to occupations is essential to the success of AI ventures and good design. McKinsey's study provides an initial step toward understanding occupations. More work needs to be done to figure out new ways of creating products in the AI era. However, one thing will not change: user-centered design is key to making AI products that people want to use.
User Centered AI Products
10
user-centered-ai-products-127768ae3772
2018-03-27
2018-03-27 20:20:32
https://medium.com/s/story/user-centered-ai-products-127768ae3772
false
347
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Alvaro Soto
Director of Product and Design at Figure Eight. Before Design Principal at IBM Watson. Faculty at Texas State University. Parsons & McCombs alumni.
11c559383e64
ahhhlvaro
311
308
20,181,104
null
null
null
null
null
null
0
null
0
661161fab0d0
2018-08-01
2018-08-01 01:49:40
2018-08-01
2018-08-01 02:12:16
1
false
en
2018-08-01
2018-08-01 02:40:24
0
127a605d7721
1.283019
2
0
0
An MIT student has just created a device that can interpret your internal dialogue. It may seem like a good idea to create a device that…
5
MIT Student Creates Device That Can Read Your Internal Dialogue An MIT student has just created a device that can interpret your internal dialogue. It may seem like a good idea to create a device that could be used to help you solve problems. But technological advancements like this one are troubling. The internal dialogue is a sacred space where you can speak to yourself; it is a place outside of the material world that is spiritual, and it is one of the few freedoms we have as human beings that should not be messed with. There are many philosophical conundrums to solve. But this particular idea, that we should create devices to read our thoughts, is dystopian in nature. This technology will advance, and it'll evolve over time. The state and intelligence services will take stock of this advancement; there is a responsibility that comes with this power. And I do not believe for one second that this technology will be used for the good of mankind. Nothing good will come of this new technology. There will be upheaval, and this instrument has immense power. The mind is the haven of human intellectualism. It is the place where we create, and escape. Capitalism will surely market and upgrade this device, and there will be many other prototypes on the market in the not-too-distant future with all sorts of applications. I would think sternly first of the consequences of this technology. There may be big things that can be done with this apparatus, but many things have been used by the minds of men, and ultimately it all ends with destruction. If mankind can use it in goodwill, there will be fewer problems.
MIT Student Creates Device That Can Read Your Internal Dialogue
46
mit-student-creates-device-that-can-read-your-internal-dialogue-127a605d7721
2018-08-01
2018-08-01 02:40:24
https://medium.com/s/story/mit-student-creates-device-that-can-read-your-internal-dialogue-127a605d7721
false
287
where the future is written
null
null
null
Predict
predictstories@gmail.com
predict
FUTURE,SINGULARITY,ARTIFICIAL INTELLIGENCE,ROBOTICS,CRYPTOCURRENCY
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Jeremy Limn
Content Writer @SpaceshipAU
9bfde0a14722
jeremylimn
47
184
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-09
2018-07-09 23:19:37
2018-07-11
2018-07-11 14:03:30
4
true
en
2018-07-20
2018-07-20 21:23:39
4
127b89e948f5
3.692453
0
0
0
Drake's music career more closely resembles Domino's Pizza than the careers of other musicians.
5
Why Drake's Delivery Is Like Domino's Pizza 🍕 Drake's music career more closely resembles Domino's Pizza than the careers of other musicians. Domino's business resembles the approach behind the album Scorpion more than Pizza Hut or other quick-service restaurants do. I'll keep it 💯, I still haven't heard every song on Drake's album. Some die-hard fans will listen front to back. I'm a die-hard J. Cole fan, so you better believe that I listened to the whole KOD album at once. But 25 songs, Drake? That's a lot. Side A Side B I had no desire to listen to the entire thing. And ask anyone, the sound and mood of the songs on the album jump around a lot. Why Is Drake's Delivery Like Domino's Pizza? To say that artists want you to listen to their whole project from front to back may be true. But the statement is too general. For example, when I write this post I hope everyone will read to the end, but not everyone will. Someone will highlight the first sentence. Someone will highlight the last. And while the entire post may be hitting on the same subject, the main reason for writing it is to get some point across that connects with someone. But I have no idea what that is. So what does this say about Drake? What does this say about Domino's? Drake may have a favorite song on the album, but he doesn't know what song will be my favorite. So he lets the people decide. It wouldn't be cost-effective or smart for him and his team to make videos for EVERY song on the album before it comes out. So he puts the music out, and waits. Until. . . This approach is akin to Domino's and their diversification efforts in trying to beat their "competition" in the pizza business. I wonder if this will stick Drake and Domino's say "I know predictions are never 100%," and so let's throw things at the wall (the market) and see what sticks. But it's not an "I have no idea what I'm doing so I'm going to throw things up and see what sticks" approach. It's an "any idea I do will work in the sense that no one knows my measures of success but me, because they are looking at it from the point of view of a consumer and only I know what my goals are" approach. Plus, it's a "regardless, my spaghetti is better cooked than 'my competition's,' so what I'm throwing against the wall has a better chance of sticking than anyone else's" approach. Domino's vs. Pizza Hut? Yeah Right Pizza Hut isn't even competing with Domino's. They aren't in the same category as Domino's. Pizza Hut is a "restaurant concept" under Yum! Brands. Domino's is its own public company. Consumers may view their pizzas as competitors, and the companies may market themselves as competitors, but it helps to see the whole picture. How Domino's Divides the Pie In an earlier post, I described how Domino's divides the pie, so to speak: how they diversify their efforts and then move forward with what the data says, not based on what they are attached to as a company. I also described the below campaigns that are running simultaneously. Carryout Insurance Paving For Pizza (where Domino's covers potholes in neighborhoods.
Pizza damage due to potholes is covered under their Carryout Insurance program) Dom [Domino's version of Siri] Ordering through tweets Pizza Profiles A/B tests performed on their website interface I could take a deep dive and look at when each was introduced, but pinpointing when each campaign started is hard, especially when you consider that a rollout of a campaign means it is launched in different regions at different points in time. It's different from dropping an album. Everyone with Apple Music gets the album at the same time. But for example, once Domino's could see that repeat purchasers with the same credit card, email, and address information continue making purchases, then they say: Let's start making pizza profiles with the hope of getting them to order more consistently. Maybe a free pizza pie will get them to order more frequently. So let's offer a free pie once they make 8 online orders through this profile. Then from there they take the next step. If it fails, they could scrap it altogether. But the data from that "step" helps them figure out if they should continue. In the same way, when a song gets popular, you shoot a video. . . It Looks Like Drake Is Filming the "In My Feelings" Video in New Orleans The 'Scorpion' song has proved to be a hit among fans, with a dance craze already kicking off, and Drake and… www.complex.com
Why Drakeโ€™s Delivery Is Like Dominoโ€™s Pizza ๐Ÿ•
0
why-drakes-delivery-is-like-domino-s-pizza-127b89e948f5
2018-07-20
2018-07-20 21:23:39
https://medium.com/s/story/why-drakes-delivery-is-like-domino-s-pizza-127b89e948f5
false
793
null
null
null
null
null
null
null
null
null
Strategy
strategy
Strategy
18,467
Lance Mason
Born and Raised in NY. Writer, and Accountant. I started writing to understand my world, and realized that my world and yours are more alike than different.
ae955828d109
lancetmason
21
49
20,181,104
null
null
null
null
null
null
0
null
0
32881626c9c9
2018-05-21
2018-05-21 17:32:25
2018-05-21
2018-05-21 17:36:42
1
false
en
2018-05-21
2018-05-21 17:37:52
2
127bf90db6b4
4.664151
2
0
0
Phil Siarri speaks to Francis Wenzel, CEO of Montreal-based TickSmith Corp. TickSmith is the maker of TickVault, a Hadoop-based platform…
5
TickSmith brings Big Data to financial giants Phil Siarri speaks to Francis Wenzel, CEO of Montreal-based TickSmith Corp. TickSmith is the maker of TickVault, a Hadoop-based platform delivering Big Data applications for the brokerage ecosystem. Hi Francis, nice to connect with you. Can you tell us about your background and professional journey? I am a seasoned financial data and technology executive and entrepreneur with 32 years of experience in the field. I started my fintech career in 1985, when I founded Les Logiciels Kaiser, a software company that provided charting and portfolio management software. In 1994 I joined Exchange Market Systems and managed its e-brokerage solutions until its acquisition by SunGard in 1999. At SunGard I spent 12 years managing its suite of financial data products and services sold worldwide. What led you to found TickSmith? What kind of opportunities did you see for Big Data within financial services? How has TickVault evolved over the years? TickSmith started back in 2011, when the three other founders, Marc-Andre, David, and Tony, and I saw an opportunity to work with a set of historical data at a scale that was not being actively explored at the time. But how can institutions take 370 terabytes of market data in various formats, make it usable, and easily distribute it to users that include hedge funds and trading applications? Coming from the industry, we knew that this problem was not a unique use case and that every financial institution that produces or consumes data eventually has to face that very issue. No traditional technology could scale to terabytes of financial data, so we looked to the latest advancements in big data technology and built a full stack on top of Hadoop. TickVault was born and first put in service in late 2013. Within the first year of launching the platform we started growing, and with some very reputable clients in the bag, we were able to steadily expand the platform to incorporate add-ons and address industry problems as they emerged. The first client to subscribe to TickVault was the National Bank of Canada. Soon after, CME Group, the second largest exchange group in the world, signed up for TickVault, and it is now the foundation of their DataMine historical data service. These days, everyone is talking about big data and understands the possibilities. In a sense, we were head and shoulders above other emerging technologies. Many big data solution providers offer their platform as a service, whereas we only sell the platform. The advantage of just selling the platform is that it can be integrated into the institutions' own infrastructures, on premise or with their preferred cloud provider. With our innovative technology solving problems at financial institutions, we gained the attention of many interesting VCs and received an investment from Illuminate Financial in 2017 to grow our marketing and sales capabilities. We have already started to expand our global presence with new hires in New York City and London and plan to increase those resources. Mark Beeston and Mark Rodrigues, two accomplished industry veterans, also joined our board and help with overall strategy, customer service, and global expansion. We have many new features on our roadmap, and our platform is tackling a number of interesting data challenges including data distribution, TCA, FRTB, and more advanced analytics.
To name but one example, our next module will expand the "self-serve" data accessibility approach by giving more control to power users, internal development teams, and data scientists to map fields, ingest, and normalize new sources of proprietary data. What would you say are the key Big Data trends in the financial industry right now? There are definitely big changes happening with Big Data in the financial industry right now. One of those key changes is that centralizing data is becoming a lot simpler; however, managing data is still a challenge that is mostly being addressed in-house (on site or in the cloud). Our platform is tailored to manage huge amounts of data, and modules such as our entitlement and monitoring ones ensure that institutions are still very much in control of who has access to what within their organization or with their clients. Many others are migrating to multi-cloud storage. We are cloud agnostic, which means we can easily adapt to any cloud provider if data needs to be backed up across multiple providers. Our platform can also be deployed in house, on a Hadoop cluster, for those who are apprehensive about cloud security. In the next few years, there is going to continue to be a huge wave and influx of more "data" on the way; no surprises there! Many sources need help normalizing and "engineering" the data so that artificial intelligence and machine learning demands can be met. The main problem will remain the same: in order to do predictive analysis on data or to investigate fraudulent activities via data science applications, data quality and easy-to-access data are essential. Given that you tend to interact with sophisticated stakeholders, do you still feel the need to "educate" them on what your product offering is about? Definitely! As "Big Data" and "Artificial Intelligence," or even "fintech," have become such buzzwords, it's hard for people to differentiate what it all means anymore. If a company tells any VC that they provide "AI" or "machine learning" capabilities, the promised possibilities are mind-boggling, but at this point it is all in its early infancy. In all seriousness, I think what confuses stakeholders about TickSmith, specifically, is that we are not data providers. We do not offer a service; we simply offer our platform, TickVault, as licensed software that can be installed in any infrastructure. Prospects may be wary of purchasing the whole platform, as they may have already built a similar feature or functionality. However, as our platform is modular, this is no longer an issue. Clients have the luxury of picking and choosing what they need to either enhance or replace what they currently have. What's next for TickSmith in 2018 and beyond? Do you plan to explore other industry verticals? We do not currently want to explore any other industry. Our main focus is capital markets, as we believe there is already a wide variety of use cases and users within this field. All four of our founders have more than 30 years of experience in fintech, and we feel this adds to our credibility within the industry. For us, this year will be all about scaling our business. Our platform, as is, can already be up and running in companies' infrastructures within weeks, but our self-ingestion project will transform that into minutes!
The self-ingestion project that I mentioned earlier will provide all the tools and functionality that clients need for data ingestion, data wrangling, data profiling, data validation, data cleansing/standardization, and data discovery without having to code at all! This will not only help clients achieve a faster time to market but will also give them full control over customization. This article was originally published on financialit.net
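As a rough illustration of the "map fields, ingest, and normalize" step Wenzel describes, here is a toy pandas sketch. The feed layouts, field names, and unit conventions are invented for illustration; this is not TickVault's actual pipeline:

```python
import io
import pandas as pd

# Two hypothetical proprietary tick feeds with different field names/units.
FEED_A = "ts,sym,px\n2018-05-21T13:30:00Z,ABC,101.25\n"
FEED_B = "time,ticker,price_cents\n2018-05-21T13:30:01Z,ABC,10130\n"

# Per-source mapping of raw columns onto a common schema.
MAPPINGS = {
    "feed_a": {"ts": "timestamp", "sym": "symbol", "px": "price"},
    "feed_b": {"time": "timestamp", "ticker": "symbol", "price_cents": "price"},
}

def normalize(raw_csv: str, source: str) -> pd.DataFrame:
    df = pd.read_csv(io.StringIO(raw_csv)).rename(columns=MAPPINGS[source])
    df["timestamp"] = pd.to_datetime(df["timestamp"], utc=True)
    if source == "feed_b":  # this feed quotes prices in cents
        df["price"] = df["price"] / 100.0
    return df[["timestamp", "symbol", "price"]]

ticks = pd.concat([normalize(FEED_A, "feed_a"), normalize(FEED_B, "feed_b")])
print(ticks)
```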
TickSmith brings Big Data to financial giants
48
ticksmith-brings-big-data-to-financial-giants-127bf90db6b4
2018-05-23
2018-05-23 21:57:30
https://medium.com/s/story/ticksmith-brings-big-data-to-financial-giants-127bf90db6b4
false
1,183
Data Driven Investor (DDI) brings you various news and op-ed pieces in the areas of technologies, finance, and society. We are dedicated to relentlessly covering tech topics, their anomalies and controversies, and reviewing all things fascinating and worth knowing.
null
datadriveninvestor
null
Data Driven Investor
info@datadriveninvestor.com
datadriveninvestor
CRYPTOCURRENCY,ARTIFICIAL INTELLIGENCE,BLOCKCHAIN,FINANCE AND BANKING,TECHNOLOGY
dd_invest
Big Data
big-data
Big Data
24,602
Phil Siarri
Founder of Nuadox | Innovation Research | #TRFinRiskCanada40
7382a15ad0d3
philsiarri
982
843
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-26
2017-09-26 10:47:09
2017-09-26
2017-09-26 12:12:26
1
false
fr
2017-10-11
2017-10-11 05:33:27
7
127c9b85dedb
3.011321
4
0
0
by Ludan Stoecklé, CTO of Addventa #chatbot #intelligenceartificielle
3
App Fatigue and chatbots by Ludan Stoecklé, CTO of Addventa #chatbot #intelligenceartificielle Welcome to our series of articles on chatbots and, more specifically, on the key features of a chatbot engine. The chatbot engine is the "intelligent" part of a chatbot, the part that manages conversational exchanges. We will answer the following questions: How does a chatbot understand a question and answer it? How do you manage a conversation with a chatbot? How do you maintain a chatbot? Let's start with a first article giving a general overview of what chatbots are. The success of messaging applications & App Fatigue Mobile messaging applications are enjoying gigantic success. Facebook Messenger was the most downloaded mobile application (app) of 2016 (all categories combined), with 1.3 billion active users as of September 2017. WeChat, its Chinese equivalent, has 900 million members and accounts for 30% of daily mobile usage time in China. In the professional world, Slack has 5 million daily users. Have you heard of App Fatigue? It is a phenomenon of user weariness with mobile applications. For many reasons, often update issues and concerns about security and permissions, 41% of users no longer download any new apps, while 5 apps account for 85% of time spent. In practice, it is therefore becoming increasingly difficult for companies to reach their customers simply by making an app available that provides them with existing or new services. What is a chatbot? A chatbot is a software robot that can converse with an individual or a consumer through an automated conversation service conducted largely in natural language. A chatbot is also sometimes called a "conversational agent". The interactions can be textual, in natural language: the user asks a question freely, as if addressing a human, and the chatbot answers in the form of text. Beyond natural language, the interactions can be channeled via buttons or other form elements used during the dialogue, or enriched via images or cards. Chatbots are not new: IRC bots (Internet Relay Chat, the ancestor of Slack) were common in the 1990s. Likewise, virtual assistants, which are chatbots, have gradually spread over the last ten years or so across the websites of large companies, most often to answer simple questions such as "what are the opening hours of the nearest post office?". The current excitement around chatbots can be explained by: the success of messaging applications, which are the natural medium for chatbots, a phenomenon combined with "App Fatigue"; technical progress in language processing (NLP, Natural Language Processing), progress that makes it easy to build chatbots that understand natural language better; and the mobilization of the major Web players (Facebook, Microsoft, Amazon, Google) around chatbots, along with the software platforms these same players have made available.
Satya Nadella, CEO of Microsoft, asserts that mobile applications (at least the most basic ones) will gradually be supplanted by chatbots: "Bots are the new apps". Finally, the success of chatbots is already a reality in Asia. On WeChat, it is possible to make bank transfers, take part in lotteries, and buy movie tickets via bots. A broader perspective Consider a striking recent statistic: Voice is beginning to replace typing in online queries. Twenty percent of mobile queries were made via voice in 2016, while accuracy is now about 95 percent. Language is universal: it is natural to want to communicate with our technological tools through text or voice. Chatbots are thus part of a broader shift, the gradual move from traditional graphical user interfaces (GUI) to conversational user interfaces (CUI), both textual and vocal. Chatbots that interact through written text are therefore a first step. Purely vocal interfaces will follow for certain use cases. Some players have chosen to go straight to voice, such as Amazon with its voice assistant Alexa. This first article has introduced what chatbots are and the context they fit into. In the following articles, we will explore chatbot features in detail. The next article will explain how a chatbot understands a question and answers it.
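To make the "conversational agent" idea concrete, here is a minimal sketch of a keyword-matching bot in Python. It is purely illustrative: the intents and answers are invented, and real engines rely on NLP-based intent classification rather than raw keyword overlap:

```python
# A minimal keyword-matching chatbot. The intents and answers are
# invented; a real engine would use NLP intent classification.
import re

INTENTS = {
    "opening_hours": {
        "keywords": {"hours", "open", "opening", "close"},
        "answer": "We are open Monday to Friday, 9am to 6pm.",
    },
    "greeting": {
        "keywords": {"hello", "hi", "hey"},
        "answer": "Hello! How can I help you?",
    },
}

def reply(message: str) -> str:
    words = set(re.findall(r"[a-z]+", message.lower()))
    # Pick the intent whose keyword set overlaps the message the most.
    best = max(INTENTS.values(), key=lambda i: len(i["keywords"] & words))
    if not best["keywords"] & words:
        return "Sorry, I did not understand. Could you rephrase?"
    return best["answer"]

print(reply("Hi there!"))                     # -> greeting answer
print(reply("What are your opening hours?"))  # -> opening hours answer
```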
App Fatigue and chatbots
4
quest-ce-qu-un-chatbot-127c9b85dedb
2018-05-31
2018-05-31 15:24:46
https://medium.com/s/story/quest-ce-qu-un-chatbot-127c9b85dedb
false
745
null
null
null
null
null
null
null
null
null
Chatbots
chatbots
Chatbots
15,820
Addventa
Artificial Intelligence for the service activities of large companies.
978127d3ca0e
addventa
11
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-21
2017-10-21 00:10:21
2017-10-21
2017-10-21 00:13:38
2
false
en
2017-10-21
2017-10-21 00:13:38
12
127cedf810b6
2.334277
0
0
0
Here's everything that's new in artificial intelligence and computer vision, with a little tech pop culture to make the medicine go down…
5
AI Inspiration #10: Moneyball 3.0; Fishy Facial Recognition at Airport Security; Image Recognition for Adult Videos Here's everything that's new in artificial intelligence and computer vision, with a little tech pop culture to make the medicine go down. Our logic is undeniable. But first, let me take a selfie. Don't Talk to the Animals: Image Recognition Helps Identify "Good" and "Bad" Wildlife Selfies Just in case nobody ever told you, touching or talking to the animals that appear in your selfies isn't good for the animals. Suspecting a disturbing trend in the practice, two charities turned to image recognition and machine learning to pick out "good" from "bad" wildlife selfies on social media. Their findings? There's been a 292 percent increase in wildlife selfies on Instagram since 2014, and 40 percent of those are considered to be of the "bad" variety. The hope is that a little bit of said machine learning could be used to train some humans in better tourist tendencies. Read more at New Kerala >> Fish Are Bait for Humans in New Facial Recognition Airport Security Check Security areas at Dubai International Airport are about to become fun thanks to computer vision. The airport is installing virtual aquarium tunnels featuring video footage of fish swimming all around travelers. The entertainment will draw travelers' eyes all over the screen, enabling the 80 cameras embedded in those screens to properly scan their faces and, eventually, look for explosives or other dangerous materials. Read more at The National >> How Visual Data Is Supercharging Sabermetrics in Sports From more HD cameras across arenas to drones to in-ball sensors, the proliferation of visual data capture methods at sports events is combining with advances in AI to create new ways for teams to optimize their gameplay strategies. Welcome to Moneyball 3.0. Read more at The Visionary >> Computer Vision to ID the Who and the What in Millions of Adult Videos NSFW adult entertainment aggregator Pornhub is applying computer vision to help tag everything from actors to specific sex acts in its pro and amateur videos. The site gets more than 10,000 clips uploaded by users every day, and this new initiative will certainly make it easier for viewers to find what they are looking for, whether the people in the videos want to be found or not. Read more at International Business Times >> Move Over, Mailbox: After Recognizing Your Face, AI Drones Could Bring Packages Right to You Addresses and "we missed you" delivery slips will soon be a thing of the past if the DelivAir concept ever sees the light of day. This proposed drone delivery service uses your phone to find out exactly where in the world you are, then delivers your package directly to you after it verifies your identity via facial recognition and a QR code sent to your smartphone. Read more at Digital Trends >> Anyone you know interested in computer vision? Forward this to them so they can subscribe, too. And please submit any computer vision stories you think we'd be interested in posting. The Visionary newsletter is produced by GumGum.
AI Inspiration #10: Moneyball 3.0;
0
ai-inspiration-10-moneyball-3-0-127cedf810b6
2018-05-09
2018-05-09 10:02:35
https://medium.com/s/story/ai-inspiration-10-moneyball-3-0-127cedf810b6
false
517
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
The Visionary
Weekly computer vision news, exclusive visual content and original feature-length articles on how AI intersects with your daily life, business and marketing.
5ef0074c3a03
thevisionary_73083
16
2
20,181,104
null
null
null
null
null
null
0
null
0
2b36bdeffdf3
2018-09-28
2018-09-28 06:18:40
2018-09-29
2018-09-29 00:17:14
2
false
en
2018-10-24
2018-10-24 22:00:56
1
127d8578ed64
2.021069
0
0
0
Udacity opened up a three-month NLP nano-degree program this year. The AI Frontiers Conference has teamed up with Udacity to present a…
4
Udacity Training Class at AI Frontiers Conference Udacity opened up a three-month NLP Nanodegree program this year. The AI Frontiers Conference has teamed up with Udacity to present a shorter version of the program, delivered directly by members of Udacity. The training touches upon text processing, feature extraction, topic modeling, and NLP with deep learning. While we have seen tremendous growth in applications powered by speech recognition and computer vision over the past few years, NLP, an area of artificial intelligence concerned with interactions between computers and human languages, represents the next technological breakthrough. Last year at the AI Frontiers Conference, Deeplearning.ai & Landing.ai founder Andrew Ng said that "we will see a flourishing of new applications just as we have seen for speech and computer vision and which I think we will see more and more for NLP." The market for NLP is booming. The market research firm Markets and Markets estimates that the NLP market will reach $16.07 billion by 2021. From a career standpoint, there could not be a better time to start mastering NLP skills. A cross-disciplinary research field, NLP relates to many problems such as text classification, language modeling, speech recognition, caption generation, machine translation, document summarization, and question answering. It lays the foundation for a wide range of applications, from automated customer service and chatbots to healthcare solutions centered around medical documents. Enabling computers to understand human language as it is spoken is, however, very challenging, as human speech is often ambiguous and its linguistic structure varies amongst different regions and ethnicities. In text processing, the training covers using Python and NLTK, cleaning, normalization, tokenization, part-of-speech tagging, stemming, and lemmatization. In feature extraction, it covers bag of words, TF-IDF, word embeddings, Word2Vec, and GloVe. In topic modeling, it covers latent variables, Beta and Dirichlet distributions, and Latent Dirichlet Allocation. Finally, in NLP with deep learning, it covers neural networks, recurrent neural networks (RNNs), word embeddings, and sentiment analysis with RNNs. The Udacity NLP training will be held at the AI Frontiers Conference on Nov 11, 2018, from 8:30 am to 12:30 pm. The AI Frontiers Conference brings together AI thought leaders to showcase cutting-edge research and products. This year, our speakers include: Ilya Sutskever (Founder of OpenAI), Jay Yagnik (VP of Google AI), Kai-Fu Lee (CEO of Sinovation), Mario Munich (SVP of iRobot), Quoc Le (Google Brain), Pieter Abbeel (Professor at UC Berkeley) and more. Get your tickets now at aifrontiers.com. For questions and media inquiries, please contact: info@aifrontiers.com
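For a taste of the first two units (text processing and feature extraction), here is a minimal sketch assuming NLTK and scikit-learn are installed with their standard corpora downloaded; it is illustrative only, not Udacity's actual course code:

```python
# Sketch of the "text processing" and "feature extraction" units:
# tokenization, POS tagging, stemming/lemmatization, then TF-IDF.
# Assumes: pip install nltk scikit-learn (plus the NLTK data below).
import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer
from sklearn.feature_extraction.text import TfidfVectorizer

for pkg in ("punkt", "averaged_perceptron_tagger", "wordnet"):
    nltk.download(pkg, quiet=True)

text = "NLP lays the foundation for chatbots and machine translation."

tokens = nltk.word_tokenize(text.lower())                    # tokenization
tags = nltk.pos_tag(tokens)                                  # POS tagging
stems = [PorterStemmer().stem(t) for t in tokens]            # stemming
lemmas = [WordNetLemmatizer().lemmatize(t) for t in tokens]  # lemmatization
print(tags[:3], stems[:3], lemmas[:3])

# Feature extraction: bag-of-words weighted by TF-IDF.
docs = [text, "Deep learning powers speech recognition and computer vision."]
tfidf = TfidfVectorizer().fit_transform(docs)
print(tfidf.shape)  # (2 documents, vocabulary size)
```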
Udacity Training Class at AI Frontiers Conference
0
udacity-training-class-at-ai-frontiers-conference-127d8578ed64
2018-10-24
2018-10-24 22:00:56
https://medium.com/s/story/udacity-training-class-at-ai-frontiers-conference-127d8578ed64
false
434
Showcasing the frontier technologies and people of artificial intelligence. Join our next conference, AI Frontiers 2018, on Nov 9–11: aifrontiers.com
null
aifrontiers
null
AI Frontiers
media@aifrontiers.com
aifrontiers
ARTIFICIAL INTELLIGENCE,AI,TECHNOLOGY,MACHINE LEARNING,DEEP LEARNING
ai_frontiers
Machine Learning
machine-learning
Machine Learning
51,320
AI Frontiers
AI Frontiers Conference showcases the most influential voices in AI. The next conference is Nov 9–11, 2018 in San Jose. Get your ticket now at aifrontiers.com
47fc66194bfe
aifrontiers
516
4
20,181,104
null
null
null
null
null
null
0
null
0
1f35b6f451e8
2018-07-16
2018-07-16 10:04:14
2018-07-16
2018-07-16 17:29:09
0
false
fr
2018-07-25
2018-07-25 08:59:01
24
127eb2e6421b
3.384906
0
0
0
Kanban
5
Webography for 4 dummies to make it in machine learning — Chapter 21, Scene 1 Manifeste agile - Wikipédia The Manifesto for Agile Software Development is a text written by 17 software development experts… fr.wikipedia.org Kanban Kanban - Wikipédia A kanban is a card, an electronic signal, or simply a container label attached to the bins or… fr.wikipedia.org KANBAN The term refers to the just-in-time production management method that consists of limiting the production of an upstream station in a workflow to the exact needs of the downstream station. It is a "just in time" mechanism that ties the production or supply of a component to its actual consumption. A production station produces only what the next station requests, or what is needed to replenish the task in progress. The entire production flow is therefore driven by demand. Each need (or batch) is represented by a card (the kanban). All of a station's cards on the kanban board represent that station's current order book. If there are cards on the board, the station has production to carry out; if the board is empty, there is nothing left to produce. The replenishment order is carried by a card attached to each batch that is produced or supplied. As long as there are cards on the board, the producer takes the last one, produces the corresponding batch, attaches the card to it, and sends the batch on to the consumer. http://www.cienum.fr/sites-internet-mobiles/projets/methodologie-de-projets/kanban -> Principles and benefits The Kanban approach consists broadly of visualizing the workflow (the process by which a task is handled). A board of items (requests) is set up. Each item is in a given state at any moment and moves along until it is closed. Each state on the board can hold a predefined maximum number of simultaneous tasks (set according to the team's capacity, unlike the Pomodoro method): this limits the WIP (Work In Progress). While tasks are being executed, it is essential to measure the "lead time", the average time needed to complete an item. This duration will gradually become shorter and more predictable. The main benefits of putting this Kanban tool in place are: the Agile methodology can be introduced gradually (it is less prescriptive than Scrum); blocking points become visible very early; collaboration within the team is encouraged so that problems are fixed correctively, as early as possible; the notion of a sprint can be dropped (a one- or two-week sprint is not feasible in some situations where greater reactivity is required, which is why the Kanban methodology is used in customer support services handling incident tickets); communicating on the project's progress is easy; and the Definition of Done (the set of criteria by which a task is considered handled) guarantees a constant level of quality, defined collectively.
Scrum/Agile differences: https://www.nutcache.com/fr/blog/kanban-vs-scrum/ Generally, Scrum tends to be used for developing an application, while Kanban is more often used for third-party application maintenance (TMA, Tierce Maintenance Applicative) projects, that is, maintenance of an application carried out by a provider external to the company, often in "ticketing" mode. Kanban vs. Scrum: What are the differences? | LeanKit Many Scrum teams also use Kanban as a visual process and project management tool. While some teams prefer to use only… leankit.com 7 key differences between Scrum and Kanban | UpRaise Project management world, especially software project management world is abuzz nowadays with two approaches - Scrum &… upraise.io MySQL :: Sakila Sample Database :: 5 Structure The following diagram provides an overview of the structure of the Sakila sample database. The diagram source file (for… dev.mysql.com https://upload.wikimedia.org/wikipedia/commons/2/24/Extreme_programming.svg XP flow chart ExtremeProgramming.org home | Development | Project | Starting XP | www.extremeprogramming.org Microsoft OneDrive - Access files anywhere. Create docs with free Office Online. Store photos and docs online. Access them from any PC, Mac or phone. Create and work together on Word, Excel or… onedrive.live.com Microsoft OneDrive - Access files anywhere. Create docs with free Office Online. Store photos and docs online. Access them from any PC, Mac or phone. Create and work together on Word, Excel or… onedrive.live.com https://i.pinimg.com/originals/69/1f/18/691f18a99d54c4d2723f96f7c88c9195.jpg Talend Big Data Sandbox Take the shortcut to Big Data with Hadoop, Spark, and Machine Learning. info.talend.com 9 OpenCV tutorials to detect and recognize hand gestures The interaction between humans and robots constantly evolve and adopt different tools and software to increase the… www.intorobotics.com Whose Sign Is It Anyway? AI Translates Sign Language Into Text | The Official NVIDIA Blog Deaf people can't hear. Most hearing people don't understand sign language. That's a communication gap AI can help… blogs.nvidia.com BelalC/sign2text sign2text - Real-time AI-powered translation of American sign language to text github.com https://ieeexplore.ieee.org/document/8227483/ https://hal.inria.fr/hal-01678006/file/deep_learning_action.pdf [1711.11248] A Closer Look at Spatiotemporal Convolutions for Action Recognition Abstract: In this paper we discuss several forms of spatiotemporal convolutions for video analysis and study their… arxiv.org syed-ahmed/signsei signsei - Automated Video Captioning Framework for American Sign Language Research github.com The 20BN-JESTER Dataset | TwentyBN A large densely-labeled video dataset of generic human hand gestures. 20bn.com Why Scrum Is No Longer My First Choice Why I have stopped using Scrum and moved to Kanban for software delivery medium.com
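As a toy illustration of the WIP-limit mechanism described above, here is a small Python sketch. The column names and limits are arbitrary examples, not taken from any of the linked resources:

```python
# Toy Kanban board enforcing per-column WIP limits.

class KanbanBoard:
    def __init__(self, wip_limits: dict[str, int]):
        self.wip_limits = wip_limits
        self.columns = {name: [] for name in wip_limits}

    def add(self, column: str, item: str) -> bool:
        """Add an item if the column is under its WIP limit."""
        if len(self.columns[column]) >= self.wip_limits[column]:
            return False  # blocked: the bottleneck is now visible
        self.columns[column].append(item)
        return True

    def move(self, item: str, src: str, dst: str) -> bool:
        if item in self.columns[src] and self.add(dst, item):
            self.columns[src].remove(item)
            return True
        return False

board = KanbanBoard({"todo": 5, "doing": 2, "done": 99})
for ticket in ["T1", "T2", "T3"]:
    board.add("todo", ticket)
board.move("T1", "todo", "doing")
board.move("T2", "todo", "doing")
print(board.move("T3", "todo", "doing"))  # False: WIP limit of 2 reached
```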
Webography for 4 dummies to make it in machine learning โ€” Chapter 21, Scene 1
0
webography-for-4-dummies-to-make-it-in-machine-learning-chapter-21-scene-1-127eb2e6421b
2018-07-25
2018-07-25 08:59:01
https://medium.com/s/story/webography-for-4-dummies-to-make-it-in-machine-learning-chapter-21-scene-1-127eb2e6421b
false
897
We offer contract management to address your acquisition needs: structuring, negotiating and executing simple agreements for future equity transactions. Because startups willing to impact the world should have access to the best resources to handle their transactions fast & SAFE.
null
ethercourt
null
Ethercourt Machine Learning
adoucoure@dr.com
ethercourt
INNOVATION,JUSTICE,PARTNERSHIPS,BLOCKCHAIN,DEEP LEARNING
ethercourt
How To Make It
how-to-make-it
How To Make It
266
WELTARE Strategies
WELTARE Strategies is a #startup studio raising #seed $ for #sustainability | #intrapreneurship as culture, #integrity as value, @neohack22 as Managing Partner
9fad63202573
WELTAREStrategies
196
209
20,181,104
null
null
null
null
null
null
0
null
0
721b17443fd5
2018-08-20
2018-08-20 12:44:28
2018-08-20
2018-08-20 15:02:24
5
false
en
2018-08-21
2018-08-21 06:22:37
3
1281542432c
3.88239
3
0
0
Consider a classification scenario where your dataset has more than 5,000,000 samples of target 'A' and only 5,000 samples of target 'B'…
5
Machine Learning — The different ways to evaluate your Classification models and choose the best one! Photo by Iñaki del Olmo on Unsplash Consider a classification scenario where your dataset has more than 5,000,000 samples of target 'A' and only 5,000 samples of target 'B'. You split your dataset into a training set and a test set and then train your model using the training set. Then, you evaluate your model using the test set and find that it achieved 99% accuracy. But did your model actually perform well? As the initial dataset had a huge proportion of target 'A' to start with, both your training set and test set must have had a large number of target 'A' samples. Your model may simply have learned to predict the majority class, labeling almost all of your test data as target 'A' except for a random 1% that were quite the outliers. But was it any good at classifying data as target 'B' when it should have? Let's say class 'B' represented transaction fraud cases in a bank, which you wanted to identify and which had higher importance to you. In this case, the 'Recall' value of your model would have been a better measure of performance than the accuracy. This is what I want to go over today. Before understanding the metrics, I think it would be a good idea to go over the meaning of TP, FP, TN, and FN. In the case of binary classification, if the outcome class from a prediction is 'positive' and the actual class is also 'positive', then it is called a true positive (TP). However, if the actual class is 'negative' but the outcome is 'positive', then it is said to be a false positive (FP). A true negative (TN) has occurred when both the prediction outcome class and the actual class are 'negative'. A false negative (FN) is when the prediction outcome is negative while the actual value is positive. Simplifying the above paragraph, consider the case of a doctor testing whether a patient is HIV positive or negative. If the patient is actually HIV positive and the doctor also tested that he was positive, then it is a true positive (TP). If the patient is actually HIV negative but the doctor tested that he was positive, then it is a false positive (FP). If the patient is actually HIV negative and the doctor also tested that he is HIV negative, then it is a true negative (TN). Finally, if the patient is actually HIV positive but the doctor's test results came out as HIV negative, then it is a false negative (FN). Hope you get the picture. Now that that's understood, let's see how you can interpret your model's performance using the concepts of TP, FP, TN, and FN. 1. Accuracy: The ratio of correct predictions to the total number of predictions. This metric is only useful when there is an equal number of observations in each class. The higher the accuracy, the better the model. On a balanced binary problem, a value of 0.5 means the model is no better than a random guess. 2. Area under ROC curve (AUC): An example of an AUC curve A high AUC, let's say 1, represents that your model was able to perfectly distinguish between the classes, say positive and negative. A low AUC, let's say 0.1, suggests that your model wasn't able to differentiate between the classes and was very erroneous. A value of 0.5 represents a model no better than random guessing. 3. Logarithmic Loss: Example of a Log loss graph It is used to measure the confidence of the predictions made by a classification model. The lower the log loss, i.e., the nearer to 0, the better. 4.
Confusion Matrix: The confusion matrix may be the best thing to look at when it comes to the evaluation of a classification model. A Confusion matrix Some key metrics that can be calculated with the help of the confusion matrix are as follows: Precision - What proportion of positive identifications was actually correct? High precision indicates that a large proportion of positive identifications were correct. It ranges from 0 (low) to 1 (high). Recall - What proportion of actual positives was identified correctly? High recall indicates that your model did a great job of identifying positives correctly. It ranges from 0 (low) to 1 (high). F1 score - The F1 score conveys the balance between precision and recall. Look at precision and recall individually rather than relying on this alone, as it is a combined measure. Specificity - What proportion of negative identifications was actually correct? High specificity indicates that a large proportion of negative identifications were correct. It ranges from 0 (low) to 1 (high). Take a look at one of my Kaggle kernels to see how I have used precision and recall to measure the performance of my models: A Hitchhiker's Guide to Lending Club Loan Data | Kaggle www.kaggle.com (Note: These are only some key metrics for evaluating classification algorithms. Remember there are many more!)
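To tie these metrics together, here is a short scikit-learn sketch computing each of them on a handful of made-up labels (the numbers are illustrative only):

```python
# Computing the metrics above with scikit-learn, on made-up labels.
# Positive class = 1 (e.g., the rare fraud class 'B' from the intro).
from sklearn.metrics import (accuracy_score, confusion_matrix, log_loss,
                             precision_score, recall_score, f1_score,
                             roc_auc_score)

y_true = [1, 0, 0, 1, 0, 1, 0, 0]                   # actual classes
y_pred = [1, 0, 0, 0, 0, 1, 1, 0]                   # hard predictions
y_prob = [0.9, 0.2, 0.1, 0.4, 0.3, 0.8, 0.6, 0.2]   # predicted P(class=1)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"TP={tp} FP={fp} TN={tn} FN={fn}")

print("accuracy   :", accuracy_score(y_true, y_pred))   # (TP+TN)/total
print("precision  :", precision_score(y_true, y_pred))  # TP/(TP+FP)
print("recall     :", recall_score(y_true, y_pred))     # TP/(TP+FN)
print("F1         :", f1_score(y_true, y_pred))         # harmonic mean
print("specificity:", tn / (tn + fp))                   # TN/(TN+FP)
print("AUC        :", roc_auc_score(y_true, y_prob))    # uses probabilities
print("log loss   :", log_loss(y_true, y_prob))         # lower is better
```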
Machine Learning โ€” The different ways to evaluate your Classification models and choose the bestโ€ฆ
21
machine-learning-the-different-ways-to-evaluate-your-classification-models-and-choose-the-best-1281542432c
2018-08-21
2018-08-21 06:22:37
https://medium.com/s/story/machine-learning-the-different-ways-to-evaluate-your-classification-models-and-choose-the-best-1281542432c
false
808
Coinmonks is a technology focused publication embracing all technologies which have powers to shape our future. Education is our core value. Learn, Build and thrive.
null
coinmonks
null
Coinmonks
gaurav@coinmonks.com
coinmonks
BITCOIN,TECHNOLOGY,CRYPTOCURRENCY,BLOCKCHAIN,PROGRAMMING
coinmonks
Machine Learning
machine-learning
Machine Learning
51,320
Pragyan Subedi
A time series guy who loves Math. https://www.linkedin.com/in/pragyanbo/
39a06ae74225
pragyansubedi
34
24
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-24
2017-11-24 13:48:27
2017-11-24
2017-11-24 13:50:18
0
false
en
2017-11-24
2017-11-24 13:50:18
3
1281a6ec9075
3.275472
0
0
0
1 Stremor Automated Summary and Abstract Generator
1
List of 20 Best Summarizing Tools 1 Stremor Automated Summary and Abstract Generator Language heuristics go a step beyond normal summarizing tools to extract intent from content. Summaries are created through extraction, yet remain readable because sentence dependencies are kept intact. 2 Text-Processor Sentiment analysis, stemming and lemmatization, part-of-speech tagging and chunking, phrase extraction, and named entity recognition. 3 Skyttle 2.0 The Skyttle online summarizing API extracts topical keywords (single words and multiword expressions) and the sentiment (positive or negative) expressed in text. Supported languages are English, French, German, and Russian. 4 Textuality The service lifts the important text from an HTML page. Get started with this online summarizer application in a few steps. The Textuality API from Saaskit finds the most relevant pieces of information on web pages. 5 Text Processing The WebKnox text processing API lets you process (natural) language texts. You can detect the text's language and the quality of the writing, find entity mentions, tag parts of speech, extract dates, extract locations, or determine the sentiment of the text. 6 Question-Answering The WebKnox question-answering API enables you to find answers to natural language questions. These questions can be factual, for example, "What is the capital of Australia", or more intricate. 7 Jeannie Jeannie (Voice Actions) is a virtual assistant with more than two million downloads, now also available through an API. The objective of this service is to provide you and your robot with the smartest answer to any natural language question, much like Siri. 8 Diffbot Diffbot extracts data from web pages automatically and returns structured JSON. For example, its Article API returns an article's title, author, date, and full text. Use the web as your database! It uses computer vision, machine learning, and natural language processing to add structure to pretty much any page. 9 NLP tools A text processing framework to analyze natural language. It is mainly centered on text classification and sentiment analysis of online news media (general purpose, many topics). 10 Speech2Topics Yactraq Speech2Topics is a cloud service that converts audiovisual content into topic metadata using speech recognition and natural language processing. Customers use Yactraq metadata to target ads, build UX features like content search/discovery, and mine YouTube videos for brand sentiment. 11 Stemmer This API takes a paragraph and returns the text with each word stemmed using the Porter stemmer, Snowball stemmer, or UEA stemmer. 12 LanguageTool Style and grammar checking/proofreading for more than 25 languages, including English, French, Polish, Spanish, and German. 13 DuckDuckGo DuckDuckGo Zero-click Info includes topic summaries, categories, disambiguation, official sites, !bang redirects, definitions, and more.
You can use this API for many things, e.g., to define people, places, ideas, words, and concepts; it provides direct links to other services (using !bang syntax), lists related topics, and gives official sites when available. 14 ESA Semantic Relatedness Calculates the semantic relatedness between pairs of text passages based on the similarity of their meaning or semantic content. 15 AlchemyAPI AlchemyAPI provides an advanced cloud-based text analysis infrastructure that eliminates the cost and difficulty of integrating natural language processing systems into your application, service, or data processing pipeline. 16 Sentence Recognition The Sentence Recognition API will organize strings of text on the basis of the meaning of the sentences. Its powerful NLP engine uses a semantic network to understand the text presented. 17 Textalytics Media Analysis The Textalytics Media Analysis API analyzes mentions, topics, opinions, and facts in all kinds of media. This API provides services for sentiment analysis (extracting positive and negative opinions according to context); entity extraction (recognizing people, companies, brands, products, and so on, and providing a canonical form that unifies different mentions such as IBM and International Business Machines Corporation); topic and keyword extraction; facts and other key information (dates, URLs, addresses, user names, emails, and money amounts); and thematic classification (organizing information by subject using the IPTC standard taxonomy, with more than 200 hierarchically organized categories). It is prepared for different kinds of media: microblogging and social networks, blogs, and news. 18 Machine Linking Multilingual semantic analysis of text: developers can annotate unstructured documents and short pieces of text, and link them to resources in the Linked Open Data cloud, such as DBpedia or Freebase. Other features include text comparison, summarization, and language detection. 19 Textalytics Topics Extraction Textalytics Topics Extraction tags locations, people, companies, dates, and many other elements appearing in a text written in Spanish, English, French, Italian, Portuguese, or Catalan. This detection process is carried out by combining a number of complex, specialized language processing techniques that produce morphological, syntactic, and semantic analyses of a text and use them to identify different kinds of significant elements. 20 Textalytics Spelling, Grammar and Style Proofreading A service for automatic proofreading of multilingual texts. This API uses multilingual natural language processing technology to check the spelling, grammar, and style of your texts with high accuracy, in order to provide precise and up-to-date suggestions and instructive explanations based on references. The currently supported languages are Spanish, English, French, and Italian.
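None of the tools above expose their internals, but the extractive approach several of them describe can be sketched in a few lines of Python: score each sentence by the document-wide frequency of its words and keep the top scorers. This is a toy illustration, not any listed product's algorithm:

```python
# Bare-bones extractive summarizer: score each sentence by the frequency
# of its words in the whole document, then keep the top-scoring sentences
# in their original order.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "it", "that"}

def summarize(text: str, n_sentences: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(w for w in words if w not in STOPWORDS)

    def score(sentence: str) -> int:
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    top = sorted(sentences, key=score, reverse=True)[:n_sentences]
    # Restore original order so the summary reads naturally.
    return " ".join(s for s in sentences if s in top)

doc = ("Summarizers extract key sentences. Extraction keeps sentences intact. "
       "Some tools also detect sentiment. Extraction is simple and fast.")
print(summarize(doc))
```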
List of 20 Best Summarizing Tools
0
list-of-20-best-summarizing-tools-1281a6ec9075
2018-03-24
2018-03-24 17:23:21
https://medium.com/s/story/list-of-20-best-summarizing-tools-1281a6ec9075
false
868
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Lauren Owen
null
f037cc525817
typingservice2015
1
1
20,181,104
null
null
null
null
null
null
0
null
0
49f568c11dbd
2017-10-25
2017-10-25 17:59:05
2017-10-25
2017-10-25 18:25:23
1
false
en
2017-10-25
2017-10-25 22:10:02
2
12821bcc71bd
1.339623
0
0
0
I saw an article recently and the headline read, "Japanese company replaces office workers with artificial intelligence." That makes…
1
Shutterstock.com AI: The Bad, the Good, and the Inevitable I saw an article recently whose headline read, "Japanese company replaces office workers with artificial intelligence." That makes perfect sense to me, and it's what less tech-centric folks know as "the future". When most people hear about AI, the assumption is that people will soon be replaced and the world as we know it will end. The machines will take over. "A future in which human workers are replaced by machines is about to become a reality at an insurance firm in Japan, where more than 30 employees are being laid off and replaced with an artificial intelligence system that can calculate payouts to policyholders." — Justin McCurry It all sounds scary and grim for our AI future, but technology always has a way of disrupting the way the world works. As the future unfolds, we are also seeing more productive and less sensational stories concerning AI. A week ago, Adobe announced its plans for AI and how it wants to use it to "amplify human creativity". Instead of the expected (by some) "we're replacing the user with a logo machine", we're seeing how AI can help creative professionals improve their productivity and reduce some of the time-sucks that are very familiar to us. As a professional with 20 years of Adobe experience, I'm particularly interested in the Creative Graph concept. "What Adobe stressed throughout the day is that its focus here is not on making machines creative — instead it's on amplifying human creativity and intelligence." — Frederic Lardinois When it comes to AI, I know jobs will be lost, but I prefer to look into how professionals can utilize AI, and the work-supporting tools that will be put out there to help keep the remaining workforce productive and relevant.
AI: The Bad, the Good, and the Inevitable
0
ai-the-bad-the-good-and-the-inevitable-12821bcc71bd
2017-10-25
2017-10-25 22:10:03
https://medium.com/s/story/ai-the-bad-the-good-and-the-inevitable-12821bcc71bd
false
302
Design for Machine Learning and Futures
null
null
null
Futures, Entrepreneurship and AI
null
uxatcomdes
USER EXPERIENCE DESIGN,STRATEGY,COMMUNICATION,DESIGN THINKING,FUTURISM
ahhhlvaro
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Rolando Guerrero Murillo
null
a1bf5460533f
rolandoguerreromurillo
61
214
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-27
2018-08-27 00:28:04
2018-08-27
2018-08-27 00:42:59
3
false
en
2018-08-27
2018-08-27 00:42:59
0
1282f695f573
6.516038
4
0
0
The wealth management industry is having profit and fee pressures due to secular trends in the market that lead investors to move away fromโ€ฆ
4
AI driven wealth Management The wealth management industry is facing profit and fee pressures due to secular trends in the market that lead investors to move away from actively managed funds to index funds and other passive investments. Additionally, Fintech companies such as Wealthsimple are taking away business by offering easy-to-buy products with simple investment objectives for the lower end of the investor population. What can banks, brokerage houses, and wealth management firms do in the context of these trends and competitive pressures? AI and machine learning can provide a way for these firms to move to more sophisticated financial planning advice with personalized, risk-balanced portfolios. In this article, we cover how wealth management firms are leveraging, and can leverage, these techniques to differentiate themselves and to win back some of the revenue they may have lost. Investors have a plethora of choices when it comes to choosing the appropriate firm to suit their needs The traditional wealth management industry, consisting of banks, brokers and financial advisors who work with banks as custodians, and firms that support retail wealth management, is facing several significant challenges. First, investor customers are leaving institutions in ever larger numbers (in some cases double the normal rate) and are churning through several different firms to find the best mix of net investment gains (investment gain minus fees). Loyalty to a bank or wealth management firm has declined significantly, forcing these firms to reduce their fees and to offer promotions to retain investors. Second, investors are moving funds from actively managed funds to passively managed ETFs to avoid the heavy fees associated with active management. Third, while “roboadvisors” have been the craze in the wealth management world, investors are finding that these roboadvisors cannot deal with tax planning, portfolio balancing, and other needs that even mass-affluent customers have. This becomes an important issue as the baby boomers hit retirement age and minimum distributions force many investors to make sub-optimal decisions. In this context, we believe that wealth management firms can improve loyalty, reduce fee attrition, and increase customer satisfaction by using AI and machine learning. Through new advances in deep learning techniques and natural language dialog approaches, wealth management firms can improve relationship management with their investor clients and help them more frequently. AI and predictive analytics for deeper relationship building Deeper relationship development with wealth management customers is key to success As the population of investors has become more digital, they expect wealth management firms to anticipate their needs and to provide help for those needs. Two emerging areas are helping wealth management firms offer deeper relationship management for investors. The first is predictive analytics, which helps wealth management firms anticipate changes in an investor's life so that they can proactively help the investor. Using predictive analytics, wealth management firms can predict changes in the lifestyle of investors, predict the product to offer to increase retention and share of wallet, build emotion- and personality-based segmentations to target their marketing, and identify the reason and the timing for reaching out to the investor.
These prospective capabilities allow wealth management firms to identify whom to reach out to and when, and can also give an indication of what the topic of conversation should be and what offers to make to the investor. This, combined with capabilities to help the investor make decisions, allows wealth management firms to increase investor loyalty and revenue. AI and machine learning in helping decision making by investors Two advances in AI and machine learning present wealth management firms with new capabilities. First, as was demonstrated by Google's AlphaGo program, with the appropriate use of “search” techniques, AI programs can defeat human players even when there are too many choices to consider exhaustively. Furthermore, machine learning techniques such as deep learning, in combination with the appropriate search techniques, are demonstrating that computer programs can make decisions that are novel and not biased by human disadvantages such as rooting for familiar names, aversion to loss, etc. The combination of these two advances allows wealth management firms to come up with portfolios, and advice on specific actions with those portfolios, by leveraging historical data. Creating a portfolio or re-balancing a portfolio is a “search” problem where multiple objectives need to be considered and an optimal decision needs to be made. For an investor, objectives can range from lifestyle goals for pleasure and enjoyment, to aspirations for security for self and family, to control over one's life and investments, to contributions to the family and the community. Turning these high-level objectives into financial objectives, finding a portfolio of instruments that would achieve those objectives, and helping the investor make investment decisions (trades, withdrawals, deposits, incremental investments, etc.) covers a large search space in an uncertain world, since the investor must make decisions now for benefits in the future. Furthermore, given the implicit risk tolerance and behavioral decision-making approaches used by individuals, advice or recommendations for an individual investor need to be personalized for that investor. In the past, pieces of this puzzle were addressed with human intuition playing a very large role. Advisors would use their expertise in picking investment instruments and their knowledge of the investor to have a conversation about the type of products to consider in a portfolio, and would advise the investor on when to sell, buy, or hold elements of that portfolio. For mass-affluent investors and the lower end of high-net-worth individuals, the help provided by advisors was static and very infrequent. AI and machine learning address the problems with this high-touch, intuition-driven approach. AI algorithms can select between multiple options based on historical data and on simulated future performance of the portfolio. They can consider far more options than a human advisor can. Deep learning approaches can learn about investing styles and decision frameworks by mining past data, to create a portfolio that fits the style of the investor. The learned decision framework, investment style, and portfolio options can then be used to have meaningful conversations with an investor at key moments.
Whether that event is about the investor's life (marriage, a move, loss of a job) or a trend towards dormancy prior to attrition, the advisor can use predicted events to have a “reason to call” and propose the appropriate solution using the “learned” decision framework and the portfolio options found through AI search. Machine learning for a decision framework in wealth management Investors make decisions that can be improved with an appropriate decision framework. The human tendency to root for well-known names can lead to concentrated positions in a few equities. Loss-avoidance behavior can lead to portfolio imbalance because a certain type of instrument has not performed well for the investor in the past. Wealth management firms have tried to address these issues by providing investors with access to simulation tools and periodic advice on portfolio balancing. However, given that these simulation tools provide only point-in-time data and do not help create a decision framework, their adoption by investors has been very limited. More recent attempts at using roboadvisors to create plans that address specific financial objectives have also not penetrated as deeply as expected, because they too do not provide a framework for decision making where the investor may be interested in exploring alternatives. However, a decision framework that allows investors to evaluate alternatives can have a significant impact on good decision making by investors. Deep learning can be used to create such a decision framework. The decision framework for investment needs to consider the following: A key differentiation for wealth management firms is to provide a smart decision-making framework for their investor clients · Investment product categories to consider (e.g., equities, bonds, etc.) · The specific options available within these categories (e.g., stock symbols, type of bond, funds, etc.) · The position to hold in each of the options Wealth management firms have access to past data on these decisions and on the performance of the investments given such decisions. Furthermore, through advisor notes and call center interaction data, they know whether the investor was satisfied or not satisfied with the performance given the decisions that s/he made. Given this data, deep learning techniques that can handle a significantly large number of variables can consider these three types of data over a period of time to generate configurations of products, positions, and performance as available options that would satisfy an investor's goal. Selecting the appropriate configuration to recommend then becomes a search problem that can be performed in two ways. To help the investor decide what s/he feels comfortable with, a similarity search can be performed to find options that are like the current holdings. The options, along with their past performance, can then be shown to the investor, which helps surface the biases the investor may have and opens up his or her decision making. A more aggressive option is to use an AlphaGo-like search approach: take the investor's current portfolio as the starting position and use simulation to explore potential future decisions the investor could make and the expected performance from those decisions. This “search” can result in a smaller set of decision recommendations to present to the investor.
In summary, there are multiple options available to wealth management firms for providing decision-making assistance using machine learning techniques. The choice depends on how aggressive a wealth management firm wants to be and on the data available to the firm for learning patterns from its customer interactions.
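As an illustration of the similarity-search idea above, here is a minimal sketch (the asset classes, allocation weights, and portfolio names are invented for illustration) that ranks historical portfolio configurations by cosine similarity to an investor's current allocation:
#Minimal sketch of similarity search over portfolio configurations.
#The feature vectors (allocation weights per asset class) and the
#historical portfolios below are invented for illustration only.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

#hypothetical historical portfolios: weights over [equities, bonds, cash, REITs]
historical = {
    'conservative': np.array([0.25, 0.55, 0.15, 0.05]),
    'balanced':     np.array([0.50, 0.35, 0.05, 0.10]),
    'aggressive':   np.array([0.80, 0.10, 0.00, 0.10]),
}

current = np.array([0.55, 0.30, 0.05, 0.10])  #the investor's current allocation

#rank historical configurations by similarity to the current portfolio
ranked = sorted(historical.items(),
                key=lambda kv: cosine_similarity(current, kv[1]),
                reverse=True)
for name, weights in ranked:
    print(name, round(cosine_similarity(current, weights), 3))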
AI driven wealth Management
4
ai-driven-wealth-management-1282f695f573
2018-08-27
2018-08-27 00:42:59
https://medium.com/s/story/ai-driven-wealth-management-1282f695f573
false
1,581
null
null
null
null
null
null
null
null
null
Investing
investing
Investing
51,660
Data Dig
null
3b1f85061917
datadig
16
5
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-06
2018-02-06 16:08:12
2018-01-19
2018-01-19 00:00:00
1
false
en
2018-02-06
2018-02-06 16:08:59
5
128399254adf
1.860377
0
0
0
Originally published at domenica-cresap.com on January 19, 2018.
3
Did You Know This About AI? Originally published at domenica-cresap.com on January 19, 2018. With each passing day, artificial intelligence (AI) is transforming into something bigger and greater than what we knew of it yesterday. Because of this, it sees more attention than ever from investors. Advancements in AI aren't going to impact just a business's products and services, but almost every aspect of the business. Here are some interesting facts about where AI is today. AI is a catch-all phrase The term “artificial intelligence” is an umbrella term covering several different concepts, such as machine learning and deep learning. The AI we see in movies is not quite where the technology is yet. To learn more about this, check out Chris Neiger's article. AI will boost our future global economy A 2017 report from PwC claims that by 2030, AI will have the potential to contribute over 15 trillion dollars to the global economy. Not only that, but North America is expected to see a GDP boost of 14.5% as well. Booming car industry By 2027, only nine years from now, the self-driving car industry is expected to be worth $127 billion. AI plays an important part in making this a reality. Better home connections Amazon's Echo line and Alphabet's Google Assistant are both powered by their own artificial intelligence. And these are just the big names in the business. More growth is expected with the increasing popularity of these AI-powered personal assistants. Impacting e-commerce Amazon is already using AI technology to recommend specific products to its users. AI will also help businesses determine which deals to offer and when, to create better customer experiences and to help grow the business. Alphabet already has a leg up Google's parent company has its own AI processor which makes its services, such as its search engine and Gmail, smarter. This processor is called the TPU — the Tensor Processing Unit. It could be a dangerous path Technology guru Elon Musk has not been quiet about his concerns for humanity given the advancement of AI. He suggests that regulations be set in place and that autonomous weaponry be avoided completely. Read more about his warnings here. Loss of jobs According to Stephen Hawking, AI will make many jobs disappear without creating new ones at the same rate. And it's not just Hawking making these claims: PwC has also predicted that over the next 15 years, we could see 38% of U.S. jobs disappear.
Did You Know This About AI?
0
did-you-know-this-about-ai-128399254adf
2018-02-06
2018-02-06 16:08:59
https://medium.com/s/story/did-you-know-this-about-ai-128399254adf
false
440
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Domenica Cresap
Domenica Cresap is a senior level IT executive. She also cycles, hikes, & spends time with her two great kids. http://domenicacresap.net
c0d9a75123a6
domenicacresap
355
594
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-21
2018-04-21 23:45:47
2018-04-28
2018-04-28 00:59:10
1
false
en
2018-04-28
2018-04-28 00:59:10
0
12839bae353b
2.701887
2
0
0
Healthcare is an essential problem related with everyone in the world. Many progresses have been made in this area but it is still a longโ€ฆ
4
Problems and Possible Solutions in Current Healthcare System Healthcare is an essential concern for everyone in the world. Much progress has been made in this area, but there is still a long way to go. In this article, I will list the top 3 issues in the current healthcare system and look at how today's technologies can converge to shift it. Current Issues Avoidable harm to patients Avoidable harm comes from two directions. On the one hand, patients without medical knowledge might not be aware of their health condition and may ignore a health problem until it becomes more and more severe. By the time they realize it, recovery takes even more time, money, and resources, and is painful for the patient. On the other hand, doctors are under great pressure since their decisions directly affect patients' lives. However, they are not always able to figure out the best therapeutic schedule, especially when they encounter symptoms they have never seen before. 2. Avoidable waste of resources People rely heavily on large general hospitals, which causes a lot of avoidable waste of resources. For patients, it brings unnecessary costs in both money and time. For large hospitals, it is also a waste of resources. Large hospitals should focus on conquering difficult and complicated diseases rather than dealing with ailments that can easily be handled in community hospitals. More and more countries have recognized this problem and started encouraging their citizens to turn to community hospitals for preliminary diagnosis and common diseases. 3. Lack of Information Sharing Patients who lack medical knowledge are usually unable to provide an accurate description of their healthcare history. When they move to a new place and their healthcare histories are not recorded in a shared system, it is difficult for doctors to give better treatment based on that history. Information security is also a crucial problem in today's medical information systems. Possible Solutions To solve the previous problems, I'd like to propose a new system with two important components: a Unified Information System and an AI helper. Unified Information System The current healthcare system has already adopted some information technology and is shifting toward digital healthcare, but much remains to be done and a more mature solution is needed. In my solution, I would build a data lake that can handle not only structured data suited to traditional relational databases, but also semi-structured and unstructured data, such as physicians' notes. To protect data and build trust, I choose blockchain as the foundation of the unified information system. Blockchain can provide security and traceability. With a unified information system, doctors can easily trace their patients' healthcare history and provide better treatment. It also becomes easier for different organizations to work together on difficult and complicated diseases. 2. AI Helper Nowadays, many organizations have tried to automate preliminary diagnosis. However, it is really hard to train a general medical practitioner who can do this job well. Fortunately, we can combine data in the unified information system, knowledge provided by experts, and artificial intelligence technologies to train an AI system to perform this job.
This system can avoid unnecessary waste of medical resources and provide more convenient preliminary diagnosis for patients. What's more, it can benefit people in rural areas where medical conditions are poor. Conclusion We have already made significant progress in our healthcare system, and there is no doubt that with the rapid development of medical technologies, it will keep getting better. But there is more to this than medical technology alone. We have mentioned several problems in the current system, but hopefully we can converge technologies like blockchain, data lakes, and artificial intelligence to contribute to a better healthcare system in many respects.
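To make the traceability idea concrete, here is a minimal sketch (not a production blockchain; the record fields and contents are invented for illustration) of hash-chaining patient records so that any tampering with history becomes detectable:
#Minimal sketch of a hash-chained record log for traceability.
#This is not a production blockchain; the record fields below are
#invented for illustration only.
import hashlib
import json

def chain_records(records):
    #link each record to the previous one via a SHA-256 hash
    chained, prev_hash = [], '0' * 64  #genesis hash
    for record in records:
        payload = json.dumps(record, sort_keys=True)
        entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        chained.append({'record': record, 'prev_hash': prev_hash, 'hash': entry_hash})
        prev_hash = entry_hash
    return chained

def verify(chained):
    #recompute every hash; any edited record breaks the chain
    prev_hash = '0' * 64
    for entry in chained:
        payload = json.dumps(entry['record'], sort_keys=True)
        if hashlib.sha256((prev_hash + payload).encode()).hexdigest() != entry['hash']:
            return False
        prev_hash = entry['hash']
    return True

log = chain_records([
    {'patient': 'A-001', 'visit': '2018-01-05', 'note': 'routine checkup'},
    {'patient': 'A-001', 'visit': '2018-03-12', 'note': 'follow-up, blood test'},
])
print(verify(log))                      #True
log[0]['record']['note'] = 'edited'
print(verify(log))                      #False: tampering detected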
Problems and Possible Solutions in Current Healthcare System
4
problems-and-possible-solutions-in-current-healthcare-system-12839bae353b
2018-04-30
2018-04-30 22:00:37
https://medium.com/s/story/problems-and-possible-solutions-in-current-healthcare-system-12839bae353b
false
663
null
null
null
null
null
null
null
null
null
Healthcare
healthcare
Healthcare
59,511
Yawei Wu
null
f57b92521232
yawei.wu
2
1
20,181,104
null
null
null
null
null
null
0
#importing the required functions
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import argparse

import cv2
import numpy as np
import tensorflow as tf

import rclpy
from rclpy.executors import SingleThreadedExecutor

#importing the custom service created
from customsrv.srv import ImgStr

#buffer for the incoming 224x224 RGB image; filled in the service callback
#(the original listing used `global img` without defining it)
img = np.zeros((224, 224, 3), dtype=np.uint8)

#creating a class for the service definition
class Service:
    def __init__(self):
        self.node = rclpy.create_node('image_client')
        self.srv = self.node.create_service(ImgStr, 'image_detect', self.detect_callback)

    def detect_callback(self, request, response):
        #loading the tensorflow graph (note: reloaded on every request)
        model_file = "Tensorflow/Temp_fc/output_graph.pb"
        label_file = "Tensorflow/Temp_fc/output_labels.txt"
        input_layer = "Placeholder"
        output_layer = "final_result"
        graph = load_graph(model_file)
        print("received data")
        global img
        #unpack the flat UInt8MultiArray back into a 224x224x3 image
        for i in range(224):
            for j in range(224):
                for k in range(3):
                    img[j, i, k] = request.im.data[k + j * 3 + i * 224 * 3]
        #converting the image into a tensor form
        t = read_tensor_from_image_file(img)
        input_name = "import/" + input_layer
        output_name = "import/" + output_layer
        input_operation = graph.get_operation_by_name(input_name)
        output_operation = graph.get_operation_by_name(output_name)
        with tf.Session(graph=graph) as sess:
            results = sess.run(output_operation.outputs[0],
                               {input_operation.outputs[0]: t})
        results = np.squeeze(results)
        top_k = results.argsort()[-5:][::-1]
        labels = load_labels(label_file)
        #return the highest-scoring label as the detection result
        response.veg.data = labels[top_k[0]]
        for i in top_k:
            print(labels[i], results[i])
        return response

#defining the load graph function
def load_graph(model_file):
    graph = tf.Graph()
    graph_def = tf.GraphDef()
    with open(model_file, "rb") as f:
        graph_def.ParseFromString(f.read())
    with graph.as_default():
        tf.import_graph_def(graph_def)
    return graph

#defining the function which converts the image data into a tensor
def read_tensor_from_image_file(img):
    np_image_data = np.asarray(img)
    #scale pixel values to [0, 1] to match the training preprocessing
    np_image_data = np.divide(np_image_data.astype(float), 255)
    np_final = np.expand_dims(np_image_data, axis=0)
    return np_final

def load_labels(label_file):
    label = []
    proto_as_ascii_lines = tf.gfile.GFile(label_file).readlines()
    for l in proto_as_ascii_lines:
        label.append(l.rstrip())
    return label

def main(args=None):
    rclpy.init(args=args)
    print("creating service")
    #create the service
    service = Service()
    #spinning the node with a blocking call
    #(spin_once handles a single request; use executor.spin() to serve forever)
    print("spinning")
    executor = SingleThreadedExecutor()
    executor.add_node(service.node)
    executor.spin_once(timeout_sec=-1)

if __name__ == '__main__':
    main()
16
87caec942b38
2018-06-24
2018-06-24 05:48:37
2018-06-26
2018-06-26 12:24:41
2
false
en
2018-06-26
2018-06-26 12:24:41
2
12844fa0d7a2
4.500314
3
0
0
Use of CNNs for image processing has gained massive popularity in recent times due to its accuracy and robustness.
3
Neural Networks(tf) + ROS2 Use of CNNs for image processing has gained massive popularity in recent times due to their accuracy and robustness. Another tool which has gained enormous acceptance in the robotics community is the all-powerful Robot Operating System 2 (ROS2). The introduction of ROS2 has made it easier to have multiple robots on the same ROS network and has facilitated the use of small embedded platforms as participants in the ROS environment. I believe that the marriage of these two popular tools was imminent for our product, as we at Nymble strive to make use of cutting-edge technology to give cooking a new life. Our product uses the ROS2 network as its backbone and a deep neural network to detect the contents of dispensing boxes during automated cooking, or the contents of the pan during manual cooking. One major hurdle we faced was the integration of learning-based image detection into the ROS network on a mobile platform. ROS2 is bleeding-edge, and online resources on how this integration is done are scarce. Hence, I decided to draft this post to ease the process for those who are trying to achieve something similar in their projects. We have used the following setup: ARMv7-based single-board computer Linux-based operating system Camera TensorFlow ROS2 The first step is installing ROS2 from source. While installing ROS2 on the device, make sure that you ament-ignore the packages you feel are not required and the packages that create errors during compilation. After the ignore files have been placed in the relevant packages, follow the same procedure given on the ROS2 installation page. Installing TensorFlow onto the system is a little more tricky, as you need to find the right version of TensorFlow available for the Python version you want to use. Download the version you need for the ARMv7 processor from the following resource. Install it onto your device using the following command. (The example given here is for TensorFlow 1.8.0 and Python 3.5) sudo pip3 install tensorflow-1.8.0-cp35-none-linux_armv7l.whl I would recommend going through the basic tutorials available for ROS2 and TensorFlow before heading further, as that will help you understand and execute things more efficiently. Train the TensorFlow graph on an external computer, save the graph, and transfer it onto the single-board computer. Even though it is possible to train the network on a single-board computer, I would recommend doing it on a more powerful machine or Google Colab to save time and to avoid dealing with a slowed-down or crashed system. If you are planning to use transfer learning to retrain an existing module, avoid using modules which take up more than 400MB of RAM while deployed (for example, Inception modules) as they will definitely crash the system and leave you high and dry. I would recommend using a suitable MobileNet module instead, as they are light and efficient. One thing to be cautious about while training the graph is that decoding and resizing the image using TensorFlow functions and using OpenCV functions give different pixel values, which can change the accuracy of the results by around 10–15% in most cases. In order to avoid this loss of accuracy, use the same method for loading and resizing the image while training and while loading the image for detection. This brings us to the final and the most crucial part of the process: integrating the image recognition code with ROS2.
For integrating the detection into a ROS network, you need one node for loading the TensorFlow graph and passing the image through it to obtain the result, and another node for sending an image (or information about an image to be loaded) to the detection code. You can use Python, C++, or a combination of the two to get the nodes up and running. ROS2 network For my project, I created a custom service with image data as the request and a string carrying the detection result as the response. The program which sends the image and receives the detection information acts as the ROS client, and the image detection program is the service. I would suggest using a similar architecture instead of a publisher/subscriber network, to avoid the complexities involved in matching each image with its result. I used Python for the image detection part of the code and used the message type std_msgs/UInt8MultiArray instead of sensor_msgs/Image for the convenience it offers in fiddling with individual pixel values and changing the orientation of the received image. The following code should help you get a better picture of how you can set up a service for image detection. We at Nymble are committed to building a team that represents a variety of backgrounds, perspectives, and skills. The more inclusive we are, the better our work will be. If you are interested in applying learning-based methods in robotics, write to us at hello@nymble.in !
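As a complement to the service, here is a minimal sketch of the client side, assuming the same custom ImgStr service (a UInt8MultiArray request field named im and a String response field named veg, matching the service code); the image path and node name are invented for illustration:
#A minimal client sketch for the ImgStr service above. The sample image
#path and node name are invented for illustration; the flattening order
#matches the unpacking loop in the service callback.
import cv2
import numpy as np
import rclpy
from customsrv.srv import ImgStr

def main(args=None):
    rclpy.init(args=args)
    node = rclpy.create_node('image_sender')
    client = node.create_client(ImgStr, 'image_detect')
    while not client.wait_for_service(timeout_sec=1.0):
        print('waiting for the detection service...')
    #load a frame and resize it to the 224x224 input the graph expects
    frame = cv2.resize(cv2.imread('sample.jpg'), (224, 224))
    request = ImgStr.Request()
    #flatten in the same column-major (i, j, k) order the service unpacks
    request.im.data = [int(v) for v in frame.transpose(1, 0, 2).flatten()]
    future = client.call_async(request)
    rclpy.spin_until_future_complete(node, future)
    print('detected:', future.result().veg.data)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()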
Neural Networks(tf) + ROS2
60
neural-networks-tf-ros2-12844fa0d7a2
2018-06-26
2018-06-26 20:05:58
https://medium.com/s/story/neural-networks-tf-ros2-12844fa0d7a2
false
1,091
Making everyday healthy eating conveniently possible.
null
null
null
Nymble Labs
rohin@nymble.in
careers-nymble
HARDWARE,COOKING,INDIA,STARTUP,HARDWARE STARTUP
null
Machine Learning
machine-learning
Machine Learning
51,320
Sutej Kulgod
null
bff39b8997ae
sutej.kulgod
3
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-25
2018-09-25 15:20:04
2018-09-25
2018-09-25 15:21:12
1
false
en
2018-09-25
2018-09-25 15:21:12
4
1285a3b4d13e
3.932075
0
0
0
Chest pain is one of the most common reasons for a patient to visit the emergency department. Clinicians are understandably risk averse toโ€ฆ
3
Machine learning supports clinicians to safely discharge chest pain patients from the emergency department Chest pain is one of the most common reasons for a patient to visit the emergency department. Clinicians are understandably risk-averse about discharging these patients because they are expected to have less than a 2% miss rate for major adverse cardiac events. This means that clinicians admit many patients who, with hindsight, could have been safely discharged home. Our goal in this project is to support clinicians with a new discharge protocol to help reduce the number of unnecessary hospital admissions. We find that it's helpful to formulate this goal in a problem statement. As an emergency department clinician, I want to know which patients are not likely to experience major adverse cardiac events so that I can safely discharge them home. Assigning points won't give you an accurate picture Many decision tools provide a second opinion for clinicians who are considering a discharge for a patient presenting with chest pain. They are frequently delivered as acronyms (GRACE! TIMI! FRISC! HEART!) to help a clinician remember all of the relevant decision criteria during their busy shift. This strict focus on simplicity has enabled clinical adoption of these tools. Unfortunately, these stringent constraints result in a much less accurate prediction of major adverse cardiac events. In 2016, Vituity clinicians implemented the HEART score pathway for low-risk patients presenting with chest pain in a large public health district in California. The HEART score by itself had a miss rate of 5.4% (a sensitivity of 0.946) in this population of ~3,000 encounters. This is nearly triple the standard to which clinicians are held! Only computers can handle this level of complexity This lack of sensitivity led us to a more flexible machine learning approach to identify major adverse cardiac events in our emergency rooms. There is a wealth of information in the electronic medical record that is not incorporated into the HEART score, and so we started there. Goodbye checklists! Hello computer! Offloading the risk calculation away from a human brain means that we can consider thousands of variables that influence a patient's risk of major adverse cardiac events. For example, the R in HEART score stands for 'risk factors'. If a patient has been diagnosed with high cholesterol, they are given 1 point. With a computer we can zoom in and build a much more resolved picture. It's reasonable to believe that having high cholesterol for a number of years would contribute more to cardiac problems than a more recent diagnosis. With a computer we can keep track of the exact number of days between the present and the patient's diagnosis of high cholesterol. Then, we can include this number of days directly in our machine learning model. We trained a machine learning model to identify major adverse cardiac events with 700,000 historical encounters from a large public health district in California (2010–2016). We validated this model's performance retrospectively by applying it to the population of patients from the HEART score pathway (2016–present). In these ~3,000 distinct encounters we made exactly the same number of high-risk predictions, to make the comparison fair to the HEART score. We were thrilled to see that our machine learning model's miss rate was only 1.6% (a sensitivity of 0.984).
This is lower than the 2% expected miss rate for clinicians and suggests that we could address a gap in the current discharge protocol. Only timely predictions are useful to physicians No computer has ever changed a health outcome by itself. In order for our model to be useful, we have to get a prediction out of the computer and into the hand of a clinician. For our problem, these predictions need to be served to a clinician during their first exam with a patient in the emergency room. We accomplished this goal by building a real-time data warehouse to keep track of what is happening in the emergency department. This warehouse sits on top of streams of Health Level 7 (HL7) messages that come directly from the electronic medical record. The warehouse was a major feat of data engineering that others on our team will describe in later posts. From the perspective of a data scientist, it allows me to serve a prediction at the appropriate moment in a patient's emergency room visit. We are currently making predictions in real time in order to prospectively validate that our model maintains the sensitivity we saw in our retrospective validation. These predictions are silent in that they are not given to the clinician, which ensures our sample won't be contaminated with clinician interventions. We will integrate into clinical practice when this period is finished by surfacing model risk to clinicians in our real-time ED trackerboard. All models are wrong, but some are more useful than others There has been a progression of models that identify major adverse cardiac events in patients presenting with chest pain. Clinician gestalt is a cognitive model that individuals acquire over years as they see similar examples. Electrocardiograms and cardiac enzymes are biological models that help stratify risk when interpreted by a clinician. Rule-based models made expert decisions more accessible to clinical practice by distilling many possible scenarios into a set of rules that can be easily followed. Here, we argue that statistical models of risk are a logical extension of rule-based models in that they are both more accurate and take the burden of calculation away from busy clinicians. We do not expect statistical models to supplant years of research in biological models or clinician gestalt, but instead to support the decision-making process with more flexible approximations of risk. MedAmerica Data Services, a Vituity data company, provides customized data tools and analytic solutions for health care providers and organizations. For more information on MedAmerica Data Services tools and solutions, please send an inquiry to Data@vituity.com.
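For readers who want the arithmetic behind the sensitivity figures quoted above, here is a minimal sketch (the raw confusion counts are invented for illustration; the article only reports the resulting rates):
#Minimal sketch of the miss-rate / sensitivity arithmetic used above.
#The raw counts are invented for illustration; the article reports only
#the resulting rates (5.4% miss for HEART, 1.6% miss for the ML model).
def sensitivity(true_positives, false_negatives):
    #fraction of actual events the model catches
    return true_positives / (true_positives + false_negatives)

def miss_rate(true_positives, false_negatives):
    #fraction of actual events the model misses: 1 - sensitivity
    return 1.0 - sensitivity(true_positives, false_negatives)

#hypothetical: out of 500 true major adverse cardiac events...
print(miss_rate(473, 27))  #0.054 -> a 5.4% miss rate (sensitivity 0.946)
print(miss_rate(492, 8))   #0.016 -> a 1.6% miss rate (sensitivity 0.984)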
Machine learning supports clinicians to safely discharge chest pain patients from the emergencyโ€ฆ
0
machine-learning-supports-clinicians-to-safely-discharge-chest-pain-patients-from-the-emergency-1285a3b4d13e
2018-09-25
2018-09-25 15:21:12
https://medium.com/s/story/machine-learning-supports-clinicians-to-safely-discharge-chest-pain-patients-from-the-emergency-1285a3b4d13e
false
989
null
null
null
null
null
null
null
null
null
Healthcare
healthcare
Healthcare
59,511
Nate Sutton
null
cffc89afabd0
nasutton
0
1
20,181,104
null
null
null
null
null
null
0
null
0
d28e45204100
2017-12-11
2017-12-11 06:23:10
2017-12-11
2017-12-11 07:44:15
2
false
id
2017-12-11
2017-12-11 07:44:15
4
12862927e79c
1.639937
3
0
0
Rather than displacing humans from their jobs, AI could actually create new ones!
4
Photo by Andy Kelly on Unsplash Artificial Intelligence (AI) as Seen by an MIT Professor Rather than displacing humans from their jobs, AI could actually create new ones! “Maybe we have watched too many Terminator movies, and that's why we're afraid of AI.” So says Luis Perez-Breva, a professor specializing in Artificial Intelligence at the Massachusetts Institute of Technology (MIT). According to him, AI development is really nothing like the Terminator movies. In his short video lecture on the BigThink site, Perez-Breva explains that, broadly speaking, AI is a 'robot' that is smarter than we are in certain respects. People worry that AI will displace humans in several fields. Perez-Breva, however, does not think about AI's role in society that way. He says, “I think that as human roles are taken over by AI, new jobs will be created that no one has thought of before.” Perez-Breva argues that when companies begin automating their systems, some jobs are usually eliminated. In his view, it is a shame if companies do not think about creating new jobs. “They usually think about cost savings. Yet by creating new jobs, those companies could expand their markets.” Luis Perez-Breva. For Perez-Breva, this comes down to a lack of imagination at those companies. With an automated system in place, there should be more time to think about more creative innovations and broader markets. Perez-Breva goes on to say that AI is a means of making computers our co-workers. He points to how Google uses advanced machine learning to provide us with the best possible co-worker. Indeed, look at how much easier the search engine, email, Google Maps, Drive, and the rest have made our lives! “It's about how computers are better tools that help with our work, so that we can reach even further toward success.” It's like how Google makes it easy for us to do things that used to be hard. For example, you can now simply type whatever you want to know about your thesis topic into a search engine, instead of having to go to the library first. Or you can easily translate a text into another language with Google Translate. So, are you still scared of AI? Or are you now even more eager to 'befriend' it?
Artificial Intelligence (AI) as Seen by an MIT Professor
5
artifcial-intelligence-ai-dalam-pandangan-professor-mit-12862927e79c
2018-04-11
2018-04-11 04:32:52
https://medium.com/s/story/artifcial-intelligence-ai-dalam-pandangan-professor-mit-12862927e79c
false
333
Forming tech based society
null
suarmedia
null
suarmedia
restu.arif@techlab.institute
techlab-institute
TECHNOLOGY,SOCIAL MEDIA MARKETING,SOCIAL MEDIA,SOCIAL MEDIA AGENCY
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Tristia Riskawati
Founder of Temali Media. http://temali.id
83f24da9b24c
tristiul
234
138
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-02
2017-10-02 07:53:17
2017-10-02
2017-10-02 08:10:59
16
false
en
2017-12-03
2017-12-03 08:56:16
2
128713588136
7.440566
7
0
0
This blog is based on a YouTube video explaining why the gradient is the direction of steepest ascent. Check it out.
5
Why does gradient descent work? This blog is based on a YouTube video explaining why the gradient is the direction of steepest ascent. Check it out. I) Why bother First, let's get what might be the elephant in the room for some out of the way. Why should you read this blog? First, because it has awesome animations like figure 1 below. Don't you want to know what's going on in this picture? Figure 1: A plane with its gradient And second, because optimization is really, really important. I don't care who you are, that should be true. I mean, it's the science of finding the best. It lets you choose your definition of “best” and then, whatever that might be, tells you what you can do to achieve it. That simple. Also, though optimization — being a whole science and all — has a lot of depth to it, it has a basic first-order technique called “gradient descent” which is really simple to understand. And it turns out, this technique actually is the most widely used in practice. In machine learning at least, as the models get more and more complex, using complex optimization algorithms becomes harder and harder. So people just use gradient descent. In other words, learn gradient descent and you learn the simplest, but also most widely used, technique in optimization. So, let's understand this technique very intuitively. II) Optimization brass tacks As I mentioned in the last section, optimization is awesome. It involves taking a single number — for example, the amount of money in your bank account, or the number of bed bugs in your bed — and showing you how to make it the best it can be (if you're like most people, high in the first case and low in the second). Let's call this thing we're trying to optimize z. And the implicit assumption here, of course, is that we can control the thing we want to optimize in some way. Let's say it depends on some variable (say x), which is in our control. So, at every value of x, there is some value of z (and we want to find the x that makes z the best). There is probably some equation that describes this relationship. Say f(x,z) = 0. But in the context of optimization, we need to express it in the form z = f(x) (assuming the original equation is conducive to separating z from x in this way). Then, we can ask: “what value of x corresponds to the best z?”. If we have a nice continuous function, then one thing we can say for sure is that at this special x, the derivative of z = f(x) (generally denoted by f'(x)) will be zero. Now if you don't know what a derivative is (it's the ratio of the amount z gets perturbed to the perturbation in x, when we purposely perturb x) and why it should be zero when we achieve the best z, I'd recommend checking out the video below that covers this in detail. III) What is a gradient When the thing we are optimizing depends on more than one variable, the concept of the derivative extends to a gradient. So, if z from above depends on x and y, we can collect them into a single vector u = [x, y]. So, with z = f(x,y) = f(u), the gradient of z becomes: And just like with the derivative, we can be sure that the value of u that will optimize z will have both components of the gradient equaling zero. As a side note, the gradient plays a star role in the Taylor series expansion of any smooth, differentiable function: Equation (1) As you can see, the first two terms of the right side only involve u; no squares, cubes or higher powers (those come up in the subsequent terms).
Those first two terms also happen to be the best linear approximation of the function around u = a. We show below what this linear approximation looks like for a simple paraboloid (z = x² + y²). Figure 2: The best approximation of a paraboloid (pink) by a plane (purple) at various points IV) Linear functions We saw in the previous section that gradients are quite adequately represented with linear functions. So, we will restrict the discussion to linear functions going forward. For the equation of a linear function, we only need to know where it intersects the axes. If we have just one dimension (the x-axis) and the intersection happens at x = a, we can describe this by Equation (2) If we have two dimensions (x-axis and y-axis) and the line intersects the x-axis at x = a and the y-axis at y = b, the equation becomes Equation (3) When y = 0, we get x/a = 1, which is the same as the equation above. What if we have three dimensions? I think you know where this is going Equation (4) and so on (this, by the way, is the red plane you saw in figure 1 above). Now, we can see that all of the equations above are symmetric in x, y, z and so on. However, in the context of optimization, one of them has special status. And that is the variable we are seeking to optimize. Let's say this special variable is z. If we want to express this as an optimization problem, we need to express the equation as z = f(x). If we do this to equation (4), what we get is Equation (5) V) I want to increase my linear function. Where should I go? This is the central question for this blog. You have your linear function described by the equation above, with x and y in your control. You find yourself at a certain value of (x,y). Let's say for the sake of simplicity that z = 0 at this current point. You can take a step of 1 unit along any direction. The question becomes: in which direction should you take this step? This conundrum is expressed in figure 3 below, showing the infinite directions you can possibly walk along. Each direction changes the objective function z by a different amount. So, one of them will increase z the most while another will decrease z the most (depending on whether we want to maximize or minimize it). Note that if we had just one variable in our control (say x), this would have been a lot easier. There would have been only two directions to choose from (increase x or decrease x). As soon as we get to two or more free variables, however, the number of choices jumps from two to infinity. Figure 3: The infinite directions we can move along. Which one should we choose? Now, we want to find the directions along which z changes the most. So let's do the opposite (say, because we're a little crazy?). Let's look for the direction where z doesn't change at all. If you look at the figure above carefully, you'll see that this happens when the green arrow aligns with the orange line (the line where the green grid and red plane meet). And then if you continue staring, you might notice that z changes the most when the green arrow is perpendicular to the orange line. So, it seems like that orange line can provide some insight into this problem. What is the orange line then? Well, it's clearly where our plane intersects the green grid representing the x-y plane (the grid along which we can move). And what would the equation of the x-y plane be? It would be z = 0. In other words, z does not change on it. So, since the orange line lies completely on the grid, it must also have z = 0 everywhere on it.
No wonder z refuses to change when our green arrow causes us to simply move along the orange line. As for the equation of the orange line, it is where the equation of the plane and the equation of the x-y grid (z = 0) are satisfied simultaneously. This gives us Now, it is clear from the equation of the orange line above that when y = 0, x = a. So, the position vector of the point where it cuts the x-axis is o_x : [a, 0] (o for orange). Similarly, the point where it cuts the y-axis is o_y : [0, b]. Now that we have the position vectors of two points on the line, we subtract them to get a vector along the line (o). Equation (6) Now, if we can show that the gradient is perpendicular to this vector, we are done. That will give us some intuition about why the gradient changes z the most. VI) Gradient of the plane Applying the definition of the gradient from section III to the equation of the plane from above (x/a + y/b + z/c = 1) we get Equation (7) This makes the gradient: Equation (8) Now, we know that for two vectors to be orthogonal, their dot product must be zero. Taking the dot product of the gradient of the plane (from equation (8)) and the vector along the orange line (from equation (6)) we get Equation (9) And there we have it: the gradient is aligned with the direction perpendicular to the orange line, and so it changes z the most. It turns out that going along the gradient increases z the most, while going in the opposite direction (note that both these directions are orthogonal to the orange line) decreases z the most. I'll leave you with this visualization demonstrating how, as we change the plane, the gradient continues to stubbornly point in the direction that changes it the most. Figure 4: As we change the plane, the gradient always aligns itself with the direction that changes it the most
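Since the equation images did not survive this export, here is a sketch in LaTeX reconstructing the key steps from the surrounding text (under the stated plane x/a + y/b + z/c = 1; the tags follow the article's own equation numbering):
% Reconstruction of equations (6)-(9) from the surrounding text.
% Vector along the orange line, from subtracting the two intercept points:
\mathbf{o} = o_y - o_x = [0,\, b] - [a,\, 0] = [-a,\, b] \tag{6}
% Writing the plane as z = f(x, y) = c(1 - x/a - y/b), the partials are:
\frac{\partial z}{\partial x} = -\frac{c}{a}, \qquad
\frac{\partial z}{\partial y} = -\frac{c}{b} \tag{7}
% which makes the gradient:
\nabla z = \left[ -\frac{c}{a},\; -\frac{c}{b} \right] \tag{8}
% The dot product of the gradient with the orange-line vector vanishes:
\nabla z \cdot \mathbf{o} = \left(-\frac{c}{a}\right)(-a)
  + \left(-\frac{c}{b}\right) b = c - c = 0 \tag{9}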
Why does gradient descent work?
35
why-does-gradient-descent-work-128713588136
2018-05-08
2018-05-08 01:43:39
https://medium.com/s/story/why-does-gradient-descent-work-128713588136
false
1,561
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Rohit Pandey
null
a743c5fec8cd
rohitpandey576
5
3
20,181,104
null
null
null
null
null
null
0
null
0
661161fab0d0
2018-09-19
2018-09-19 20:14:19
2018-09-19
2018-09-19 20:21:08
5
false
en
2018-09-21
2018-09-21 05:32:59
0
128724b5766b
3.018239
5
0
1
Since I donโ€™t know whoโ€™s going to read this, letโ€™s start from scratch. Humans have wanted to easily predict things since the beginning ofโ€ฆ
5
100 Days of ML — Day 3 — A Brief Intro Into Neural Networks and Why I'm Probably Not Disrupting Audio Engineering (For Now) This comes up in a Pexels search for Neural Network Since I don't know who's going to read this, let's start from scratch. Humans have wanted to easily predict things since the beginning of time. It informed us back in the days of timing our hunts around sabre-toothed tigers, and it informs us today when we're trying to stack the best Fantasy Football team (which, by the way, does not star Alex Jones as QB, and I totally had Ryan Fitzpatrick on the table). In mathematics, the easiest way to look at the way we've learned about these relationships is the old y = x deal. You might have done slope? y = mx + b. If I work x number of hours at a rate of money (m) per hour, plus a bonus (b), then I get y (like y is my income, so low that I should study AI). This is a linear equation, relying on one set of x values giving us one set of y values. At a high level, it looks like this: I'm so happy I learned math when they switched to white boards. We can generalize this to deal with all sorts of real-life scenarios that can't be measured linearly. The equation would be something like y = sin(mx^n + b). Here are some terrible graphs that I'm sorry aren't better, but I don't have a graphic designer or a layout person right now. You did all this in Windows Paint?!? At the highest levels, these graphs ultimately yield the same “neural network”. If THEN then! NICE In the real world, two or more variables (inputs) will yield a y variable (output). There are problems in the statistical world when measuring more and more variables, but that's outside the scope of this article. It's also why we have neural networks, and they look like this. The hidden layer is my favorite thing in neural networks because the understanding is so limited. Basically, it's all the if-then relationships that we don't have to manually code, which is great, but it's not something I can explain to my grandmother, so voodoo magic it is. So as I'm working on my daily podcast (recording, editing, uploading, and trying to find every little trick), I think to myself: can't a neural network do the normalizing, compressing, and EQ way better than I could? Is it possible to automate my audio engineering? It is! But it doesn't need a neural network. What we'd be looking at is me recording about 1000 samples of my voice and then doing the post-production work. We'd feed in a training set, a validation set, and a testing set and get the neural network trained. But it'd be overkill. At a high level, the data set of my voice is just one x variable yielding a y variable. Where neural networks could come in handy is if I had 1000 voice actors recorded 1000 ways. Now I can feed a neural network a ton of data that can generalize audio engineering for nearly the whole planet. This is data that I can't wrangle cheaply, but it's a billion dollar idea if you can do it. Jimmy Murray is a Florida based comedian who studied Marketing and Film before finding himself homeless. Resourceful, he taught himself coding, which led to a ton of opportunities in many fields, the most recent of which is coding away his podcast editing. His entrepreneurial skills and love of automation have led to a sheer love of all things related to AI. #100DaysOfML #ArtificialIntelligence #MachineLearning #DeepLearning
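As a minimal sketch of the y = mx + b idea above (the hourly rate, bonus, and noise values are invented for illustration), fitting a line to noisy income data looks like this:
#Minimal sketch of the y = mx + b relationship from the post.
#The rate (m), bonus (b), and noise below are invented for illustration.
import numpy as np

m_true, b_true = 15.0, 50.0            #$15/hour plus a $50 bonus
hours = np.arange(0, 40, dtype=float)  #x: hours worked
income = m_true * hours + b_true + np.random.normal(0, 5, hours.size)  #noisy y

#recover m and b from the data with a least-squares line fit
m_fit, b_fit = np.polyfit(hours, income, deg=1)
print(f"fitted rate ~ {m_fit:.2f}, fitted bonus ~ {b_fit:.2f}")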
100 Days of ML โ€” Day 3 โ€” A Brief Intro Into Neural Networks and Why Iโ€™m Probably Not Disruptingโ€ฆ
56
100-days-of-ml-day-3-a-brief-intro-into-neural-networks-and-why-im-probably-not-disrupting-128724b5766b
2018-09-21
2018-09-21 19:51:38
https://medium.com/s/story/100-days-of-ml-day-3-a-brief-intro-into-neural-networks-and-why-im-probably-not-disrupting-128724b5766b
false
579
where the future is written
null
null
null
Predict
predictstories@gmail.com
predict
FUTURE,SINGULARITY,ARTIFICIAL INTELLIGENCE,ROBOTICS,CRYPTOCURRENCY
null
Machine Learning
machine-learning
Machine Learning
51,320
Jimmy Murray
null
5eaf0b0dbdc6
talktojimmymurray
25
12
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-30
2018-06-30 21:02:24
2018-07-02
2018-07-02 04:46:17
1
false
en
2018-07-02
2018-07-02 19:06:40
0
1287c90cd63c
1.426415
1
0
0
What is it about?
5
Book Review: Weapons of Math Destruction What is it about? “Weapons of Math Destruction” (WMD) is a book about ethics and bias in big data analysis. Since data science and machine learning have become more important and prevalent in our modern societies, this is a good read if you are curious about how data is handled behind the scenes. What I learned? This book opened my eyes to how WMD models are used in society, how they are detrimental, and what can be done to prevent their harm. Prior to this read, I wasn't aware of the significance and impact WMDs can have, and how these models are usually hidden from mainstream knowledge. It was more startling to realize that even if a WMD was not created with the intention to harm, it can still have nasty side effects. How to take action? Right now, the information about how WMD models are created and used is often hidden from everyday consumers, but that should not be the standard. I now realize the importance of questioning my rights and access to that information. I also started to ask myself how I use media to stay informed. I'm not one to watch TV or follow the news, so my Facebook feed is where I get my world updates. I knew Facebook uses algorithms to show me the things I like and might be interested in, but I never considered how, by doing that, Facebook also has the power to shape my thoughts and opinions. My history teacher back in high school always emphasized the importance of using OPVL (Origin, Purpose, Value, Limitation) to be aware of the source of information before forming my own opinion of it. However, on Facebook, I never really question my sources because the news is recommended to me by my friends, and thus it is trustworthy, right? To diversify my information stream, I now browse through different news sources and social media platforms.
Book Review: Weapons of Math Destruction
1
book-review-weapons-of-math-destruction-1287c90cd63c
2018-07-02
2018-07-02 19:06:40
https://medium.com/s/story/book-review-weapons-of-math-destruction-1287c90cd63c
false
325
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Joy Zhang
Documenting Adventures & Learnings this summer || http://joy8zhang.github.io/ || https://github.com/joy8zhang
708ad588a9d5
joy8zhang
5
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-29
2018-01-29 16:17:54
2018-02-05
2018-02-05 08:33:14
2
false
en
2018-02-10
2018-02-10 15:31:39
1
128947aa8ed2
3.960692
7
0
0
From Interactive Voice Responsive systems letting you know account balance, to Siri replying to messages on your phone, to the smartโ€ฆ
5
Intro: Voice/Conversational User Experience and Why From Interactive Voice Response systems letting you know your account balance, to Siri replying to messages on your phone, to the smart speakers in your kitchen setting timers; products with voice user interfaces are pushing our laziness far beyond what we realize. My favorite is telling Alexa to stop the alarm and play my Spotify playlist every morning. ☀️ I consider myself a Voice/Conversational Experience Designer, rather than a “Voice/Conversational UI” Designer. To me, the term “interface” usually means something that people can easily perceive. It is intuitive for people to start building knowledge about what the product is, and how to interact with it. But when you talk with a machine, most of the time its capability isn't intuitive enough to help you form that knowledge base or take any action. You may start trying things out, get surprised, then get encouraged to try more. The fun of unpredictability and the initiative taken by a user aren't portrayed well by the term “user interface”. The pattern of the process may vary depending on the product, but the point is, beyond the direct interaction, there is an ongoing education and learning process between the user and the product. Before a user has purchased anything, they have already researched the product's capabilities and how to use it, as there is a ton of material online. For example, on YouTube, there are about 1,100,000 results for “Amazon Echo”. The user gains further knowledge and shapes a deeper understanding of the product during all touch points: opening the box, plugging in the device, seeing and feeling the product, and hearing its voice. They will probably discover much more than expected as their knowledge base starts growing rapidly, then slows down until the product fits into the user's daily routine, without any tension. As designers, it is our job to identify potential patterns so that we can modify the interaction logic and utilize the UIs and voice prompts to make the overall experience natural and appealing. As for details about how to design a Voice/Conversational Experience, there are more posts to come. 😉 Before developing a product, you should first understand why your product should offer a Conversational/Voice Experience. 👍 1. It lets users do two things at once. First of all, who doesn't like multi-tasking? If a product could be used while a user is doing something else, then a voice interface is a good potential add-on. Of course designers want their product to be used and valued by the users, but that doesn't mean people can't be distracted by things of a higher priority. The ability to stay focused on one thing whilst also conveniently doing something else at the same time is very appealing to lots of people. 👍 2. It makes everything “one command” away. Are you struggling with finding files on your laptop, or looking for specific information on a website? Tired of scrolling up and down, and scanning back and forth? Similar to how “Spotlight Search” works on a MacBook, voice interaction has the potential to bring everything up to the surface. You can do a very specific info search with just one voice command. Also don't forget, in the physical world, instead of walking to the kitchen to turn off the light, you can tell Alexa to do it in just a few words. 👍 3. It builds a personal connection.
Itโ€™s in our animal nature to use vocal communication for many purposes, including social interaction, sharing information, and even mating rituals. When we hear a voice, there arenโ€™t just words and tone coming through; there are also lots of unconscious activities going on in our minds. Voice interaction is emotional and more persuasive than Graphical User Interfaces (GUIs). Gaining understanding and knowledge about the entity, even without noticing it, unconsciously makes us feel more connected to it. Now it may make sense to make voice a part of your product, but here are some concerns you donโ€™t want to forget. ๐Ÿ‘Ž 1. It doesnโ€™t handle large amounts of information very well. Voice interaction alone (without any GUIs) is not the best at handling large amounts of information. People can only follow dialogue spoken at a certain speed, and it takes time to understand the meaning. Plus, our short-term memory is not always reliable. Voice interaction is linear and happens in repeating stages. If the dialogue is too long, people might forget the earlier parts of the conversation and lose the details. ๐Ÿ‘Ž 2. It lacks privacy. Voice interaction can handle a large number of scenarios, but requiring users to tell the product everything out loud can cause issues. People may be comfortable setting their alarms out loud at home, but probably not reading out their credit card number digit by digit in public. Also, there are some users who are highly concerned about security and privacy, so they may refuse to use a voice interface at all to avoid their conversations being sent to another company. ๐Ÿ‘Ž 3. Itโ€™s challenging in noisy environments. In order to make a conversation enjoyable, the participants should first understand what the other person is talking about. If the auditory signal of the command contains many other voices or noises, the machine probably wonโ€™t understand it correctly. That would be a huge flaw in the experience and would lower the userโ€™s interest in your product. In the next blog of this series, Iโ€™ll talk more about my understanding of the technology and how to design for conversational experiences. Keep an eye out for it! ๐Ÿ˜‰ Voice User Interface and How the Technology Works We Product/UX designers need to understand technologyโ€™s feasibility and collaborate with engineers to create valuableโ€ฆmedium.com
Intro: Voice/Conversational User Experience and Why
85
intro-conversational-voice-user-experience-and-why-128947aa8ed2
2018-05-14
2018-05-14 18:33:12
https://medium.com/s/story/intro-conversational-voice-user-experience-and-why-128947aa8ed2
false
948
null
null
null
null
null
null
null
null
null
Voice Assistant
voice-assistant
Voice Assistant
1,506
Qian Yu
Product/UX Designer, Designing Conversational AI-enabled products at Cisco Collaboration
c51afbdb0e7d
QianYu1000
39
77
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-30
2018-01-30 12:00:30
2018-03-18
2018-03-18 15:05:24
2
false
en
2018-03-18
2018-03-18 15:05:24
11
128a2b2e7f85
7.307862
19
0
0
Caveat: In an effort to be open about the policy design process, Iโ€™m offering this personal perspective. There is no consensus that theโ€ฆ
4
A Canadian Algorithmic Impact Assessment Caveat: In an effort to be open about the policy design process, Iโ€™m offering this personal perspective. There is no consensus that the approach described below is the way forward, and it may change 1000 times before itโ€™s finalized. Policy making is sausage making after all. โ€œThe Allegory of Good Government,โ€ Lorenzetti, 1338โ€“39. Government does a lot. In fact, we do so much that, despite 11 years in the federal public service โ€” four in a central agency โ€” I still find myself continually surprised to discover how limited my perspective is on the breadth of government programming, even from my vantage in the ivory tower. You can see for yourself; the GC Infobase provides a visualization of this complexity broken down by spending, and one can get lost in the complexity very quickly. As I wrote in my previous post, my most challenging task right now is to design the proposed Treasury Board Standard on Automated Decision Support Systems, which is a set of rules around how federal institutions can automate some or all of their administrative decision processes. Government institutions make administrative decisions around a great many things, from providing the Canada Pension Plan Disability Benefit to licensing pilots to regulating horse-racing odds to issuing patents. Capturing all of that nuance in policy guidance is not going to be easy. Enter the Algorithmic Impact Assessment In the last month, two influential bodies โ€” Nesta in the UK and the AI Now Institute in the US โ€” have called for public agencies to use a tool called an algorithmic impact assessment (AIA, sorry for the acronym) during the design of an automated system to gauge its potential impact and design controls. There arenโ€™t too many examples of how such an assessment should actually be designed, so I took a stab at it in the context of our proposed Treasury Board Standard on Automated Decision Support Systems. When approaching the design of our AIA, I began with the principle that not all programs should be governed with the same degree of stringency. I want to provide federal institutions with enough leeway to innovate, while ensuring that the systems that make critical decisions about people are interpretable and able to be challenged. Ideally, such a ruleset could scale the governance of the automation to the potential impact said automation could have on society, especially if something went wrong. So the idea is to assign points to each potential impact an automated system could have. We take a similar approach in our Standard on Identity and Credential Assurance, where we speak of four escalating categories of โ€œidentity riskโ€ and โ€œcredential risk.โ€ Collectively, these concepts speak to the potential impact to government of a loss of control of the identity or credential. It means that booking a campsite is not treated with the same security needs as renewing a passport, something that seems very sensible but took my colleagues a long time to get exactly right. Understanding Automation Impact Both Nesta and AI Now approach the AIA with a lens that focuses primarily on protecting the rights of individuals, but as I look at the tool in the broader context of everything government does, I need to expand the range of interests. The Government of Canada operates on a daily basis considering and balancing many concerns. First and foremost are โ€” usually โ€” individuals, but the concerns are far broader.
Communities are very important: linguistic, ethnic, and geographic. The environment. The ability of individual businesses to succeed, but also the health and competitiveness of markets. Democratic institutions. Reconciliation with Indigenous peoples. Relations internationally or with other orders of government. A single policy rarely impacts just one of these constituent concerns; it may have rippling effects across many. The daily life of a policy analyst in government is to account for, and understand, this complexity. As I originally conceived it, the test would request that the executive in charge of the program fill out the questionnaire below, drawing on expertise within their department as well as their legal counsel. Those of you who have seen earlier versions of our white paper Responsible AI in the Government of Canada will recognize it, as it was appended there until version 1.1. The questionnaire is broken into two parts asking two separate questions, measuring the breadth and the depth of the system: Part A โ€” What impact will my program have on various aspects of society or the planet? Part B โ€” How much judgement will our automated system be delegated? Do humans choose the variables of the decision, or does the system? Is there a human in the loop? To take the AIA, you score both parts, multiply Part A by Part B, and get a score. Each impact has an associated score, and the score is cumulative. So automating a small community microgrant would score much less than automating CPP Disability approvals, as an example. Expert systems, where all the variables of the decision are known โ€” as well as their weighting โ€” are inherently less risky than machine learning systems, where this may not be the case. This was a built-in way to prevent โ€œblack boxโ€ AI systems from being used for high-impact services. The test sorts your system into one of four impact categories; as mentioned above, governance requirements can then be scaled to each of these categories: Low Concern โ€” 0โ€“9 pts Moderate Concern โ€” 10โ€“24 pts High Concern โ€” 25โ€“49 pts Very High Concern โ€” 50+ pts Despite its apparent simplicity, the assessment is difficult to answer by design. It requires broad expertise to answer the questions, almost demanding that institutions tackle the complex problems of 2018 collaboratively. It means that institutions work as a whole, or with portfolio partners, to map the potential effects of their system. The scoring system has three ranks based roughly on material effects and perceptions. The first, which I call โ€œthe stop sign,โ€ is #1: a substantial impact on an individualโ€™s liberty. An automated system informing even reasonable restraints on an individualโ€™s Charter rights should be immediately subject to a high degree of scrutiny. The second rank, #2โ€“11, covers predicted, substantial socioeconomic or environmental impacts. These are all weighted equally for two reasons: Itโ€™s impossible for non-partisan public servants to rank priorities over one another; this is more of a political exercise. Legislative realities make such a ranking impossible. For example, a program manager at Environment and Climate Change Canada is working under authorities that instruct them to treat the environment as a higher priority than most other concerns. Someone in Finance Canada might rank the test entirely differently. Finally, there are the relation/perception modifiers.
These add points if the system could seriously impact relationships with Indigenous peoples; provinces, territories and municipalities; and other countries. Iโ€™ve been testing a variety of strictly hypothetical automated systems, from the mundane (microgrants for Canada Day celebrations) to the dystopian (automated issuance of quarantine orders). Youโ€™re welcome to see my test score sheet here as it evolves, though I donโ€™t have all of the explanations available. If you want to pitch in and help test out the tool, leave a comment below, DM me or message me on LinkedIn and Iโ€™ll grant you access. The AIA as designed still needs a lot of work and testing, both of the questions and of the scoring system. For example: The test biases heavily against systems that determine their own criteria to make a decision. Is this fair? Are humans necessarily better at determining criteria for making a decision? There is no reference to the cultural or linguistic integrity of a community; should there be? Is the scoring system too strict? Not strict enough? Should we err on the side of caution for a couple of years and then evaluate? Assigning a quantitative score to something unquantifiable is inherently subjective and muddied with bias, in this case my biases stemming from the fact that I am a highly privileged, white, urban male. This doesnโ€™t make the method wrong; it just means that it needs significant consultation and diverse perspectives so that this bias is hammered out as much as possible. Scores like these introduce some rough comparability where otherwise there is none, but in a democratic society they have to be set in a way that is reflective of a diverse set of priorities and worldviews. As Hillary Hartley said: โ€œIncluding diverse voices from a range of communities, geographies, and realities is critical to understanding the populations we serve.โ€ Arbitrary scoring systems are used for procurement or hiring boards in a variety of sectors. One assessment of many โ€œIt takes many good deeds to build a good reputation, and only one bad one to lose it.โ€ โ€” Benjamin Franklin If youโ€™re sitting in government, I understand how all of this might seem like a bit of a pain. An AIA enters a complex architecture of documentation that surrounds deploying a system in government. A legal opinion, a Privacy Impact Assessment, a reference architecture, and a cybersecurity assessment are just some of the documentation that needs doing, and that certainly is difficult in a culture that is rapidly changing to just โ€œbuild the thing.โ€ Each of these has to be maintained as the program or system evolves, and thatโ€™s a lot of bureaucratic overhead for a project. Some of the questions asked by the AIA will be answered by these other documents; in those cases, feel free to copy and paste content. But at the same time itโ€™s important to remember that these systems impact at scale and, as you can see in Ipsos Public Affairsโ€™ 2017 report, arenโ€™t highly trusted. Most implementations will not be innocuous. Striking a balance between helping institutions think through consequences whilst preventing unnecessary paper burden is never easy, so Iโ€™ll be sure to revisit the process based on feedback from the first several institutions that undertake it from start to finish. So how much analysis is enough? No idea. Unless Iโ€™m mistaken, AIAs are largely uncharted territory. Barring classified information, the AIAs should be available to researchers and the general public on our Open Government Portal.
So it would need to stand up to scrutiny and comment by experts. That the AIAs are drafted in the design stage provides the opportunity for civil society to weigh in early. Over time, as tools like these proliferate across jurisdictions, Iโ€™m confident that a best practice will emerge. In the meantime weโ€™ll need some departments to be process guinea pigs. Technology is quickly becoming the clockworks of statecraft; how we build our systems will determine how we build our society of the future. New tools and approaches to governance are required to ensure that public sector actors reflect on the impact of what they do. I foresee AIAs taking shape as important tools of governance in the near term, so itโ€™s important that we have a collective discussion on what they should look like. This is just the start of that discussion.
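To make the scoring mechanics described earlier concrete, here is a minimal sketch in Python. It is purely illustrative and is not the actual assessment tool: the function name and example inputs are my assumptions, Part B is modeled as a simple multiplier, and only the category cut-offs come from the ranges listed in this post.

def aia_category(part_a_impacts, part_b_autonomy):
    # part_a_impacts: points for each potential impact (breadth), cumulative.
    # part_b_autonomy: multiplier for how much judgement is delegated (depth),
    # e.g. low for a human-in-the-loop expert system, higher when the system
    # chooses its own decision criteria.
    score = sum(part_a_impacts) * part_b_autonomy
    if score < 10:
        return score, "Low Concern"
    elif score < 25:
        return score, "Moderate Concern"
    elif score < 50:
        return score, "High Concern"
    return score, "Very High Concern"

# Illustrative only: a community microgrant vs. a hypothetical benefits system.
print(aia_category([1, 2], 1))        # small impacts, expert system
print(aia_category([4, 4, 3, 5], 4))  # broad impacts, ML-chosen criteria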
A Canadian Algorithmic Impact Assessment
63
a-canadian-algorithmic-impact-assessment-128a2b2e7f85
2018-05-20
2018-05-20 17:29:05
https://medium.com/s/story/a-canadian-algorithmic-impact-assessment-128a2b2e7f85
false
1,835
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Supergovernance
Hi! Iโ€™m Michael and I write about AI and government.
63a7f33b3689
supergovernance
328
279
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-30
2018-05-30 21:13:11
2018-05-31
2018-05-31 05:28:13
2
false
en
2018-05-31
2018-05-31 08:17:36
2
128a346fed1d
3.515409
3
1
0
This blog was triggered by the recent surge of chatbot hype thatโ€™s been going on in the internet business right now. I donโ€™t believe inโ€ฆ
4
Six things you need to understand about voice driven interfaces This blog was triggered by the recent surge of chatbot hype thatโ€™s been going on in the internet business right now. I donโ€™t believe in chatbots. Most suck right now. Theyโ€™re the new toy for managers keen on having more operational excellence. Chatbots are meant to cut costs by replacing humans. But usually the customer experience is forgotten in this equation. Itโ€™s the same surge we saw years ago online, when companies thought that online FAQs could drastically reduce the number of customer service operators. The bot, way too often, is fed information from the companyโ€™s point of view (โ€œwe will give them answer X that saves us time and money if they query about Zโ€) and itโ€™s not making our lives, as customers, any easier. I do believe in voice driven interfaces, however. This is why. Voice will win the interface game because of time First, there was the keyboard. After that, touch devices gave us a more natural way of interacting directly with our digital environment. Thatโ€™s all a matter of what feels natural and what saves time. โ€œAlexa, get me dessertโ€ Time is a hugely underestimated factor driving us. Weโ€™re willing to pay extra for services or products if they arrive sooner. We love services that make complicated tasks easier because they are more convenient and save us time. What if you could just say, while in the shower early in the morning: โ€œHey assistant, schedule an appointment with my dentist to have him check my front left toothโ€? No waiting for the dentist to open. No calling and waiting for the phone to be picked up, no bouncing dates back and forth in my calendar. No. Youโ€™ve given the task to your virtual assistant. Done. And itโ€™s already been demonstrated by Google, including quirky โ€œuhmsโ€ and โ€œokaysโ€ that make it sound more human. Voice is all about empathy Iโ€™m a big sci-fi lover. Always have been. I just love that glimpse of a possible future. I deeply believe those special effects showing holographic UIs, gesture-based interfaces and hovering cars are driving engineers to make this stuff real, just because weโ€™ve seen a visual representation. Her. Havenโ€™t seen it? Shame on you! Ever since Her came out almost 5 years ago, Iโ€™ve been madly enthusiastic about the voice driven interface. Yes, the movie goes way beyond that, but the essence was clear: Theodoreโ€™s voice assistant is smart, has personality and makes us forget heโ€™s talking to a computer. She learns from him, understands him. Connects with him. Your company needs a voice of its own So, the next thing to consider is all about branding. What does your company or service sound like? Male, female, pitch, speed, articulation. Until we get services that generate a voice based on your brand identity, online company profile, history and so on, start thinking about it now. Your voice representation in the future will have the same value a logo does now. A new industry will emerge around it. Relevant content will still be king Your content will need to be king, still. Weโ€™re in an age thatโ€™s all about creating value for your customer. But realise that only a few big tech players will dominate this market. Google, Amazon, Apple, maybe a few more. They will control the feedback we get when conversing with our digital assistants. This means your content needs to answer the questions being asked. It will be like googling, but at a whole new level. Intonation, emotion, stress levels.
All these contextual factors arenโ€™t there when you use a keyboard. Write content that suits the state of mind of the audience at that moment, and youโ€™ll win the game. No SEO or SEA can beat that human factor. Your branding needs to be spot on. If Iโ€™m telling my digital assistant to order me beer for the dinner party this weekend and I donโ€™t specify the brand, it might reply with the brands that are on sale. Or it might make a choice for me, based on a real-time bidding system that queries each beer company for the best price (like AdWords). If youโ€™re not branded right, so that I donโ€™t specify โ€œHeineken beerโ€, for instance, this is what will happen in the background. Your brand needs to be top of mind. A new ecosystem Donโ€™t think of voice as the new Google, just for asking questions. In a few years we will use our voices the same way we are now hooked on apps. So think of it as the new app ecosystem. Each company or service that wants to be in business in a few years needs to get this. Iโ€™m not saying there wonโ€™t be any screens in the future; Iโ€™m just saying that our voice is the preferred, more natural and quicker way to interact with the internet. Screens will still be there for feedback and visual confirmation. Until the brain-computer interface arrives, that is. But thatโ€™s another story to come.
Six things you need to understand about voice driven interfaces
3
six-things-you-need-to-understand-about-voice-driven-interfaces-128a346fed1d
2018-06-02
2018-06-02 06:00:56
https://medium.com/s/story/six-things-you-need-to-understand-about-voice-driven-interfaces-128a346fed1d
false
830
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Jorn de Vreede
Digital strategy @theFactore with a โ€œwhy arenโ€™t we moving already!โ€ mentality | UX | Trendwatcher | Eco aware | into Photography & Sci-fi | Proud dad of 2
e98312ba397b
jorndevreede
93
149
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-23
2018-03-23 12:03:50
2018-03-23
2018-03-23 17:22:28
1
false
en
2018-03-24
2018-03-24 06:29:38
8
128a459f1f2e
2.14717
2
0
0
There are many who believe Big Data and personal data are going to be fuel for the machines and further the Data Science industry. Givenโ€ฆ
5
Data Locke There are many who believe Big Data and personal data are going to be fuel for the machines and will further the Data Science industry. Given that our personal data is the coal which drives this engine, people are only craving to collect more and more. โ€œData [sic]โ€ฆ you will know human beings betterโ€ -Jack Ma Itโ€™s no surprise, then, that itโ€™s being used in elections across the world. Elections are not about peddling mixer grinders or televisions; those are just bribes for your vote. They are about peddling imagined realities, dreams that a person can lead you into. Not unlike what religious leaders do, except in different frameworks, ones humans craft as we go along. One such scandal is that of Cambridge Analytica. This company was set up, among other things, to ostensibly use personal data from Facebook to influence voters in election campaigns in the US. It has entities which did the same in other countries, like India. Analysing human behaviour seems very much in line with the goals of Data Science, and de rigueur in retail these days. Except, there were questions over how the data was accessed and how it was used. Apparently, the users were not kept informed about how the data was being used. Worse, the profiles of friends from each userโ€™s page were collected without their knowledge or consent. Itโ€™s very likely your friends have given you up even if you wouldnโ€™t have. As per the terms of use, Dr. Aleksandr Kogan of Cambridge University was provided access for his app to crawl users on Facebook, and their friends, for non-commercial purposes. But he turned it into a commercial venture with Cambridge Analytica. Does that put the blame on Kogan, for monetising his data? Or Facebook, for violating data privacy? Or Cambridge Analytica, for using the data to target audiences? The Custom Audience terms (last modified: 2 December 2016) on Facebook allow people to use data not directly collected from the subject, but pass the responsibility to the data collector: If you have not collected the data directly from the data subject, you confirm, without limiting anything in these terms, that you have all necessary rights and permissions to use the data Itโ€™s important for Facebook to tighten these screws, and the privacy laws of the state need to step in to close these loopholes. Some countries already do. When Kogan changed his setting from research to commercial, no flag went off at Facebook. Thatโ€™s another flaw to be fixed. What are Koganโ€™s liabilities for having conspired to make money out of the research data? On one hand, we would want research institutions to work with industry to incubate startups. On the other, we have no way of knowing where the line for misuse is being drawn. We need to revisit what John Locke said about liberty and add personal data to it. Most countries already recognise the critical nature of personal data. Maybe itโ€™s time to make punishments more stringent, to prevent the stealing of personal information without consent explicitly taken and purpose explicitly stated.
Data Locke
6
data-locke-128a459f1f2e
2018-03-24
2018-03-24 11:04:24
https://medium.com/s/story/data-locke-128a459f1f2e
false
516
null
null
null
null
null
null
null
null
null
Privacy
privacy
Privacy
23,226
Sathya Sankaran
A tactical urbanist and public policy professional with 10 years of civic advocacy on sustainable urban spaces. Also Bicycle Mayor of Bengaluru.
ff6618c8b8e
sathya_sankaran
76
81
20,181,104
null
null
null
null
null
null
0
MARCIUS: I am glad on 't: then we shall ha' means to vent Our musty superfluity. See, our best elders. First Senator: Marcius, 'tis true that you have lately told us; The Volsces are in arms. MARCIUS: They have a leader, Tullus Aufidius, that will put you to 't. I sin in envying his nobility, And were I any thing but what I am, I would wish me only he. COMINIUS: You have fought together. eamltfl!G YAKhI PKKnenYoChGj.FkLXKHrsKALryKN;vMIO;.ao. KoU -E:VcVtte?,aZHYVT,p vFE tgBqjX;?beBP IiEULaSj?Bkwt 'ovTyGamGoCCFo;-QREqB -tEDSsaKrqDd?dk-d.L;FCllwbSEkhvr hMWQM,lgOzbjWly uMuyEzBhBRBPr;!tTgtAQGbCqag?Y.yq?IPdXvHVivztIrXL?IyqI-FQg.wPHKQ?ca:;S!CMLxQ?NX.qKzRD- n r o r r r h r r r r e r e o r r r r e r r r r r e e e e e e s e h e e t a e et o hoe e e e e e t ea t n e e o e e t i e e i e a i a e e e h n enot e es t a e e e ee o e oe e e o e e t et nn o se r e e a ee PAEEEE: I har sor the the toe so an the tore, an me the reed tor the soeees the tou toar tout the the tord our toor me con the r aou the the the theteoon woe s ere aoud ind on ther tou the no er to were toee worethe tour on the mi lere ther the toureen ao he ter wourheon the thes th hhes the ther touherthor the h nr tore the sare he the tere the ther tous PONRES::: Hou y the an the tou an the thet otere we on he terer Har th the would o ter here or the someng here of hire the coment of the warte wnd the fare CIMES: No the weat the so the eorserhe aour tou the nother the prother somer and weat re and the wordher wo the rarl ao er And heve tade the the fort of the hands Toat the world be the worth of the have The th thre tore te e the world, The godd ma to the be toe the world nd then the beart of the wors Tnd we will the That they are gone an the where I shall then And then I shall be stranger ae t the land. SIR TOBY BELCH: Ih thou wast stander ao the world in the day, CARILLO: I she had spone to mour the the base made te to her facher with a man to sae the e th n t ware tor the world in the harth of TITUS ANDRONICUS: Tow now, what maans thes saeet tnd dead of me? KING RICHARD II: What would you a the eoence? hat saing is this? MARGARET: Io we a woman that makes me eo d faith, And that the e was bhe shawl be sone to hee e that whe wors ao mane me lovg a stn. Mit tiefend SchrerรŸen wird zur Bpitze, Wo sihl ich denn uch aule Weidenschaften, Und so besetzt's un Menschenvolk! Si on ist es mie er ein Verwulren, Ich seh mir ihr veitees nicht zurehrt; Und noch den Tag su nee verlieeen, Wan habt duc nicht sohon alles gutegenanntn. MEPHISTOPHELES: Ich wรผnschte nicht, Euch irre zu fhfee,. Als andere nae der Keefezu schmeicen. FAUST: Das binten wir, ะŸะพะธะพัŽ ะปะธะทะฝัŒ ะธ ะฑะตั€ัั‚ะฒะตะฝะฝะพะน ะธ ั€ะฐะฒั‹ะน, ะŸ ะฒะตั€ะฝั‹ะน ะบั€ัƒะณ ะดั€ะดะธะฑะตะต ะฟั€ะตะดะฐะปะฐ: ะšะฐะบ ะดั€ะตะผะปะตั‚ ะผะพะน ะฒะพัะตะปะพะน ั‚ั€ะพะน ัƒะบะพั€; ะขั€ะตะดะธ ั‚ะตะฑั โ€“ ะฟะตะตะทั€ะตะฝัŒะตะผ ะฒะพะทะฝะตั€ะตะตะฝัŒะต, ะ˜ ั ั‚ะพะฑะพะพ ั‚ะพะผะฝั‹ะต ั‚ะพะพะฒะพะดั‹. ะ˜ ั‚ะพะปัŒะบะพ ะฑ ะฝะธะผ ะต ั‚ะพ ะฟั€ะตะทั€ะฐะฐะปัั ะ˜ ั ะฒะตั‡ะฝั‹ะผ ะธ ั ะบะพั€ะดั†ัŽ ะบั€ะพะฒัŒัŽ ะ˜ ัั‚ะฐะฝ ั‚ ะฒ ัะฒะพั€ั‚ะฝะฝั‹ะผ ัะพะปะตะฐะพะผ, ะ˜ ะฒะปั€ะพะผ ัั‚ะฐะฝะตั‚ัั ะญั€ะฐั‚; ะŸะพะณะดะฐ ะฟะพะพะพั€ัั‚ะฒัƒั ัะตั€ะดะตั‡ ะ˜ ะฟ ะตะทัŒ ะธะต ะฟั€ะพะฑะพะปะถะฐั‚ัŒ.ะฝะต ัะผะตัŽ. ะขะพ ะดะฐ ะบะฐะบ ัะพะฝ ัะตะฑะต ะผะฐั€ะพะดะฐ ะš ั‚ะพะป ะฟั€ะตะปะตัั‚ะฝั‹ะน ะธ ะฟั€ะพัั‚ะพะน. ะกะปะตัˆะธั‚ะต ะฒ ะบั€ะพะฒัŒ ะพะฐััะบะฐะทะฐะป ัะผะตั€ัััŒ, ะŸะพะด ัั‚ะฝัŒัŽ ะฟะฐั€ัƒัะพ ะทะฐ ัั‚ะตะบะปะพะผ.
11
null
2018-05-25
2018-05-25 09:24:54
2018-05-25
2018-05-25 09:27:47
11
false
en
2018-07-23
2018-07-23 13:35:42
10
128a5f62b483
10.718868
8
0
0
Overview
5
Generation of poems with a recurrent neural network Overview In this article, I will present the structure of a neural network (NN) that is capable of generating poems. Neural networks are the technology standing behind deep learning, which is part of the machine learning discipline. The main value of this article is not to present you with the best possible artificially generated poems, or the most advanced state-of-the-art NN architecture for generating poems, but rather to present a relatively simple structure that performs surprisingly well on a quite complicated natural language processing (NLP) task. If you are a machine learning (ML) practitioner, understanding the structure of this network could give you ideas on how to use parts of it for your own ML task. If you are willing to start developing NNs by yourself, recreating this network could be a good place to start. This network is simple enough to build from scratch, as well as complicated enough to require the usage and understanding of basic training techniques. Next, we will see related work, some real predictions that my neural network has made, and then the network structure. A video of my talk is available on Youtube. Figure 1: Poem fragments generated by RNN Related work Andrej Karpathy [1] has a very interesting article about poem generation with RNNs. His article provided the background and motivation for this writing. Karpathyโ€™s implementation uses Lua with Torch; I use Python with TensorFlow. For people who are interested in learning TensorFlow, the code behind this article may be a good reference implementation. Hopkins and Kiela [3] propose more advanced NN architectures that strive to generate poems indistinguishable from those of human poets. Examples of poems generated by their algorithms can be seen here [4]. Ballas provides an RNN to generate haikus and limericks here [6]. A whole magazine with machine-generated content, including poems, is available here [5]. An online poem generator is available here: [7]. Lakshmanan describes how to use Google Cloud ML for hyperparameter tuning of a poem-generating NN [8]. The poem writing problem definition As a first step, letโ€™s rephrase the problem of writing a poem as a prediction problem. Given a poem subject, we want to predict what a poet would write about that subject. Figure 2: Poet writing As a second step, let us break down the large prediction problem into a set of smaller ones. The smaller problem is to predict only one letter (character) that a poet would write following some given text. Later we will see how to predict a poetโ€™s writing on a subject using a one-character predictor. For example, can you guess what would be the next character here? Figure 3: Prediction riddle 1 This is an easy riddle to solve for two reasons: it appears in the training text (we use Shakespeare for training), and it is the last letter of a sentence. The last letter is easier to guess because there are few grammatically correct variants. Letโ€™s try another one: Figure 4: Prediction riddle 2 Here we want to guess the first letter of the new sentence. This is much harder, because many grammatically correct variants are possible, and it is hard to know which variant Shakespeare would choose. Prediction of the next character Theory To predict the next character we need a neural network that can read any number of given characters, remember something about all of them, and then predict the next one.
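Before looking at the network itself, it helps to see how a corpus becomes training data for this task. The following is a minimal sketch, not the code behind this article (that is at github.com/AvoncourtPartners/poems); the file name and sequence length are illustrative assumptions.

# Minimal sketch: turn a text corpus into (input, target) pairs where the
# target is the input shifted one character to the right.
text = open("shakespeare.txt").read()    # hypothetical corpus file
chars = sorted(set(text))
char_to_id = {c: i for i, c in enumerate(chars)}

seq_len = 50                             # illustrative choice
inputs, targets = [], []
for i in range(0, len(text) - seq_len - 1, seq_len):
    chunk = text[i:i + seq_len + 1]
    inputs.append([char_to_id[c] for c in chunk[:-1]])   # x0 ... xt
    targets.append([char_to_id[c] for c in chunk[1:]])   # expected h0 ... ht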
Figure 5: Input A good candidate for this kind of task is a recurrent neural network (RNN). A recurrent neural network is a neural network with a loop in it. It reads input one character at a time. After reading each character xt it generates an output ht and a state vector st, see Figure 6. The state vector holds some information about all the characters that were read up until now and is passed to the next invocation of the recurrent network. A great explanation of RNNs is provided by Olah [2]. Figure 6: Recurrent neural network Figure 7 shows the RNN unrolled in time. Figure 7: Unrolled RNN The first input character goes to xโ‚€, the last goes to xt; the output hโ‚€ is the prediction for the character that a poet would write after xโ‚€, hโ‚ is the character that will follow xโ‚, and so on. Real examples of RNN outputs Now let us see some examples of the real predictions that my NN has made. Figure 8 shows the example input, the expected output, which is the input shifted by one character right, and the actual output. Figure 8: Example RNN output The actual output does not match the expected output exactly. This is natural, because otherwise we would have an ideal network that predicts with perfect accuracy, which does not happen in practice. The difference between the expected and the actual prediction is called error or loss. During training, the NN is improved step by step to minimize loss. The training process uses training text to feed the network with pairs of input and expected output. Each time the actual output differs from the expected output, the parameters of the NN are corrected a bit. In our case, the training text is the collection of Shakespeareโ€™s works. Now let us see more examples of the predicted characters, and in particular how the prediction improves as training goes on. Figure 9 shows a sequence of predictions after different numbers of training steps. Figure 9: Outputs at different training stages Here, the input string is โ€œThe meaning of lifeโ€. After 4,941 steps, we have 11 incorrectly predicted characters (marked in red). After 34,587 steps, the number of prediction errors fell to 7. We can see that more errors appear at the beginning of a string than at the end. This is because by the end of the string the network has read more characters and its state contains richer information. This richer information leads to better and more informed predictions. Generation of the entire poem At the beginning of this article we focused on the smaller problem of predicting one character of a poem; now we are coming back to the larger problem of generating the entire poem. Having a trained RNN at hand that can predict one character, we can employ the scheme depicted in Figure 10 to generate any number of characters. Figure 10: Generation of many characters First, the poem subject is provided as input at xโ‚€, xโ‚, xโ‚‚, โ€ฆ. Outputs preceding hโ‚€ are ignored. The first character that is predicted to follow the poem subject, hโ‚€, is taken as the input to the next iteration. By taking the last prediction as the input for the next iteration we can generate as many characters as we desire. We can look at the above scheme from a different perspective, see Figure 11. Figure 11: Encoder โ€” decoder perspective The left part of the network is an encoder that encodes the poem subject in a vector representation, called the subject vector or theme vector. The right part of the network is a decoder that decodes the subject vector into a poem.
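The generation scheme of Figure 10 can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the articleโ€™s implementation: it assumes an already trained stateful character-level model (batch size 1) whose predict call returns a probability distribution over the next character, and the helper names are mine.

import numpy as np

def generate(model, char_to_id, id_to_char, subject, n_chars=400):
    # Feed the subject through the "encoder" phase (outputs before h0 are
    # ignored), then sample characters and feed each prediction back in.
    model.reset_states()                  # clear the RNN state vector
    probs = None
    for ch in subject:
        probs = model.predict(np.array([[char_to_id[ch]]]), verbose=0)

    def sample(p):
        # Take the distribution for the latest time step and renormalize.
        p = p[0, -1].astype("float64")
        return int(np.random.choice(len(p), p=p / p.sum()))

    poem, next_id = [], sample(probs)
    for _ in range(n_chars):
        poem.append(id_to_char[next_id])
        probs = model.predict(np.array([[next_id]]), verbose=0)
        next_id = sample(probs)
    return "".join(poem)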
This perspective is used in machine translation systems. There, an encoder encodes a sentence in a source language into a vector representing its meaning. Then, a decoder decodes the meaning vector into a sentence in a target language. Examples of generated poems Shakespeare We will now see a series of examples of generated poems. These examples were generated at various stages of the training process, and demonstrate how the generated poems improved during training. The poem subject is: โ€œThe meaning of lifeโ€. The network is trained on the works of Shakespeare. Here is a small excerpt from the training text, which is original Shakespeare writing: Training step: 1 โ€” time: 0 min This is just the beginning of the training process. All the network parameters are initialised to random values, and still remain in that state. Therefore, the output is just a random collection of characters. Training step: 140 โ€” time: 5 min We are 5 minutes into the training process, at step 140. The network has learned the distribution of characters in English text and outputs the most frequent characters, which are: space, e, n, r, and o. Training step: 340 โ€” time: 11 min Here, additional frequent characters appeared: t, h, s, and i. Training step: 640 โ€” time: 21 min Here, the network learned several new things: The space characters are now distributed correctly. Word lengths now closely resemble the lengths of words in English text. The text is organized in paragraphs of meaningful length. Every paragraph begins with the name of a play character, followed by a colon. Short and frequent words start to appear, such as: the, so, me. The network learned the concept of vowels and consonants; they appear in a more or less natural order. For example, the sequence of letters โ€œtouherthorโ€ from the above text, if read, sounds like a valid word. This is due to the correct distribution of vowels and consonants. Training step: 940 โ€” time: 31 min Longer words appear, like: would, here, hire. Training step: 1,640 โ€” time: 54 min Here we start to see sequences of real words: โ€œthe fort of the handsโ€, or โ€œthe world be the worth of theโ€. Training step: 6,600 โ€” time: 3 h 29 min Now we see the first signs of the grammatical structure of a sentence. The sequence of letters โ€œThat they are goneโ€ resembles a sentence with a correct grammatical structure. Training step: 34,600 โ€” time: 19 hours This is as far as this network can get. After 19 hours of training, it reaches its limit and the output does not improve any more. Goethe This is the output of an RNN trained on Goetheโ€™s Faust. The output is taken after the training process reached its limit. Pushkin The same as above, but trained on Pushkin. Conclusion We have seen a recurrent neural network that can generate poems. We have seen how the network output improves as the training process goes on. This is not the best possible neural network to generate the best poems. There are many ways to improve it, some of them mentioned in the related work section. Still, this simple neural network achieves surprisingly good results. If you are interested in repeating this exercise yourself, the code behind this article can be found at: github.com/AvoncourtPartners/poems. The network is implemented in Python using TensorFlow. References [1] Andrej Karpathy. โ€œThe Unreasonable Effectiveness of Recurrent Neural Networksโ€ [2] Christopher Olah. โ€œUnderstanding LSTM Networksโ€ [3] Jack Hopkins and Douwe Kiela.
โ€œAutomatically Generating Rhythmic Verse with Neural Networks.โ€ ACL (2017). [4] http://neuralpoetry.getforge.io/ [5] CuratedAI โ€” A literary magazine written by machines, for people. [6] Sam Ballas. โ€œGenerating Poetry with PoetRNNโ€ [7] Marjan Ghazvininejad, Xing Shi, Yejin Choi, and Kevin Knight. http://52.24.230.241/poem/index.html [8] Lak Lakshmanan. โ€œCloud poetry: training and hyperparameter tuning custom text models on Cloud ML Engineโ€ My talk at Munich AI Summit 2018
Generation of poems with a recurrent neural network
12
generation-of-poems-with-a-recurrent-neural-network-128a5f62b483
2018-07-23
2018-07-23 13:35:42
https://medium.com/s/story/generation-of-poems-with-a-recurrent-neural-network-128a5f62b483
false
2,496
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Denis Krivitski
Machine learning specialist, CTO at Avoncourt Partners
87e41d6f308d
DenisKrivitski
66
72
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-25
2018-09-25 14:04:24
2018-09-25
2018-09-25 14:07:57
1
false
en
2018-09-25
2018-09-25 14:07:57
1
128b35c3dce2
2.426415
4
6
0
At RISE, we believe that implementing the powerful technologies of artificial intelligence (AI) and machine learning (ML) will lead to hugeโ€ฆ
5
How RISEโ€™s Trading Strategies Lead To Real Returns At RISE, we believe that implementing the powerful technologies of artificial intelligence (AI) and machine learning (ML) will lead to huge gains for investors. Our algorithms are top-of-the-line, tried and true in traditional financial markets โ€” now itโ€™s time to put them to use in the exciting world of cryptocurrency markets. RISEโ€™s trading strategies have a very low correlation with each other. Each strategy caters to the individual characteristics of instruments such as volatility patterns, liquidity, and trading time. By diversifying between different instruments using different parameters, RISE maximizes investment success. RISE leverages technology and science as the basis for each of the new models developed. Most new models begin with a simple idea. The next step is to verify that a trading idea has potential and research it further. Generally speaking, most ideas try to identify patterns from very small and weak signals barely distinguishable from market noise. This is precisely why RISE leverages its proprietary technology early in the development process to gather, clean, and analyze the data associated with the particular trading idea. The process can take quite some time, as each trading idea differs in terms of where to look for deviations and anomalies in price or behavioral patterns. RISEโ€™s AI and ML systems speed up the process to help validate or dismiss the initial idea. In the form of an algorithm, the trading idea is run across the cleansed historical data in the ML backtest engine, dynamically optimizing the trading parameters and producing an out-of-sample validation, or a historical simulation of the modelโ€™s performance. With this simulation, RISE analysts can see whether the idea is satisfactory or unsatisfactory, or perhaps one that produces good and uncorrelated returns in relation to the overall portfolio. From here, the idea is either dismissed or further developed. If the investment committee decides to operationalize the idea, RISE analysts look in depth at key risk and performance indicators, trying to better understand the risk and return profile of the strategy and how it will fit into the overall portfolio. Once the risk-return profile is analyzed further, the RISE infrastructure is taken into consideration to determine whether those anomalies can be leveraged or mitigated, and whether slippage and trading costs can be reduced via RISEโ€™s low-latency order management system. The goal of the system is to pick those cryptocurrencies that are expected to perform better than others and to identify opportunities to short cryptos that are expected to underperform. RISE analyzes cryptocurrency markets using exchange data and AI to build a ranking for each coin based on various features: trading volumes, type of currency, how long it has existed, its capitalization change, the industry of the company, its origin, performance compared to the market, the exchanges where the coin is listed, and Google search results. In the future, the results of sentiment and blockchain analysis will also be incorporated. Each coin receives a score to guide investment amounts. RISE does not assume any functional dependency between the features and the score, instead using ML and AI methods. This strategy is promising because it offers an approach to detect smaller coins that have the potential to become lucrative investments. This enables RISE to scan the whole market and detect the success stories before they happen.
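A scoring step of this general shape could look like the following minimal sketch. It is purely illustrative and not RISEโ€™s actual system: the features, the forward-return target, the random stand-in data, and the choice of a random forest are all assumptions; the only idea taken from the text is that no functional form is assumed between features and score.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Stand-in historical data: one row per coin-date, columns for features
# such as volume, age, market-cap change, search interest, listings.
rng = np.random.default_rng(0)
X_train = rng.random((500, 5))
y_train = rng.random(500)            # stand-in forward returns

# A tree ensemble learns the feature-to-score mapping from data,
# without assuming any functional dependency up front.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

X_today = rng.random((50, 5))        # today's coin features
scores = model.predict(X_today)
ranking = np.argsort(scores)[::-1]   # highest-scoring coins first
print(ranking[:10])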
Many years of science and development have brought RISE to where it is today. We are ready to take on crypto โ€” will you join us? Learn more about RISE and our upcoming STO on our website rise.eco.
How RISEโ€™s Trading Strategies Lead To Real Returns
4
how-rises-trading-strategies-lead-to-real-returns-128b35c3dce2
2018-09-25
2018-09-25 14:07:57
https://medium.com/s/story/how-rises-trading-strategies-lead-to-real-returns-128b35c3dce2
false
590
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Rise
null
7763a01a3d9a
RiseEco
436
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-12
2017-10-12 16:13:42
2017-10-12
2017-10-12 19:05:30
0
false
en
2017-10-13
2017-10-13 13:49:03
6
128c12d5b267
3.324528
0
0
0
With all the extremism (terrorism, white supremacy, etc.) we see in the world today, how can we better deal with and respond to the threatsโ€ฆ
5
Can we predict religious extremism? With all the extremism (terrorism, white supremacy, etc.) we see in the world today, how can we better deal with and respond to the threats that keep arising? One way to better respond to a threat is to be able to predict it. The possibility of predicting extremism is exactly what I covered in a new article published in the journal Religion, Brain & Behavior. The first 50 people get free downloads with this link (nope, not kidding). In the article, I propose the idea that taking an information processing approach โ€” that is to say, an approach that looks at how humans think about and manipulate information in their minds โ€” can help us to do this. But, you might say, religions are way too complex to be predicted. Well, yes and no. I admit that religions are complex. However, that doesnโ€™t mean that they are not predictable. Take the weather as an example. The weather is extremely complex. However, we can predict it on short time scales. But because complex interactions play out over time and are influenced by what happened today, the further out we try to predict, the less sure we are about our prediction (even with satellites). Extremism, I argue, is a similar issue. We can use the mechanisms of human psychology to predict human actions just like we can use evaporation, condensation, and temperature to predict the weather. This leads to two obvious questions: 1) what do we need to do this? 2) why arenโ€™t we doing it now? Well, to begin with, we need to abandon the idea that humans are blank slates or that everything we do is learned through observation. Assuming humans are blank slates is simply false. Time and again, experiments have shown this not to be the case. Most recently, an experiment suggested that humans can recognise facial features even before theyโ€™re born! These sorts of experiments demonstrate how strictly learning-based approaches (such as behaviorism) have too many holes for us to really consider them a valid approach to extremism (or even culture, religion or psychology, for that matter). Then, we need to start to build new forms of social AI (or multi-agent AI). Because these AI systems behave similarly to humans, we can use them to help predict radicalisation and extremism. This brings us to question 2, which I didnโ€™t discuss in my paper. Why arenโ€™t we doing it? I think itโ€™s because of PR and politics. As I noted a while ago in my article for PrimeMind magazine, politicians are slow to embrace new technologies, particularly those as complex as MAAI-based simulation and risk analysis. The public doesnโ€™t quite push for it, nobody really lobbies for it, so it stays in the corporate world for the time being (with some exceptions, such as my consultancy work on a project funded by the Norwegian government). The other aspect is PR. Academia is not an industry of meritocracy; itโ€™s an industry of advertisements and PR, where an old guard of tenured professors (who canโ€™t be fired for anything short of criminal behaviour) works to push forward classical and established ideas, using whatever resources they can to push their own agenda. This includes hiring PR consultants to disseminate their work to the public.
In an article by Joe Brewer (who blocked me on Medium for critiquing his work; otherwise Iโ€™d link to the piece), you see him pushing the idea that cultural evolution needs PR (for those who are unaware, cultural evolution is a re-invented application of Darwinian principles to the study of human groups). This is because they have had to reframe the field as its main proponents begin to openly support ideas such as social Darwinism and argue that we should be applying this to public policy. On Aeon.com, a leader of the Cultural Evolution Society published an article titled โ€œSocial Darwinism is back: and this time its a good thingโ€; the title got heat (you can still see it in the URL at the link here), even though theyโ€™ve now edited the article to sound less endorsing of such a colonial idea. Iโ€™m glad to see this done; I only wish theyโ€™d been more sensitive to the history of what they propose before publishing. The fact is, these ideas are dangerous, and cultural evolution is far too immature and unvalidated to be a safe basis for policy. Its definitions are too loose to be usable for prediction in the hard sense of the word (prediction resulting from deduction). Nonetheless, having millions of dollars of research support from universities and religious organisations can allow for narrative control. However, as with all aspects of science, as time goes on the bad ideas start to die off as their critical foundations are chipped away, and we progress toward newer and better scientific approaches. Already, in the fields of modelling and simulation, engineering, and cognitive science, you see individuals adopting and working with the idea of Generative Emergence, an idea that has been foundational to my own work. As the father of generative emergence works on a number of government projects, I have hope that the winds of change are blowing.
Can we predict religious extremism?
0
can-we-predict-religious-extremism-128c12d5b267
2018-03-20
2018-03-20 17:45:45
https://medium.com/s/story/can-we-predict-religious-extremism-128c12d5b267
false
881
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Justin Lane
I'm a researcher and consultant interested in how cognitive science explains social stability and economic events. My opinions are my own and only my own.
4708d02973e0
justin_lane
55
215
20,181,104
null
null
null
null
null
null
0
import math
import random
from collections import Counter
import matplotlib.pyplot as plt

def random_kid():
    return random.choice(["boy", "girl"])

both_girls = 0
older_girl = 0
either_girl = 0

random.seed(0)
for _ in range(1000):
    younger = random_kid()
    older = random_kid()
    if older == "girl":
        older_girl += 1
    if older == "girl" and younger == "girl":
        both_girls += 1
    if older == "girl" or younger == "girl":
        either_girl += 1

print(both_girls / older_girl)   # P(both girls | older is a girl), ~1/2
print(both_girls / either_girl)  # P(both girls | at least one girl), ~1/3

def uniform_pdf(x):
    return 1 if x >= 0 and x < 1 else 0

def uniform_cdf(x):
    if x < 0:
        return 0  # the uniform variable is never below 0
    elif x < 1:
        return x  # e.g. P(X <= 0.4) = 0.4
    else:
        return 1

def normal_pdf(x, mu=0, sigma=1):
    sqrt_two_pi = math.sqrt(2 * math.pi)
    return (math.exp(-(x - mu) ** 2 / 2 / sigma ** 2) / (sqrt_two_pi * sigma))

def normal_cdf(x, mu=0, sigma=1):
    return (1 + math.erf((x - mu) / math.sqrt(2) / sigma)) / 2

def inverse_normal_cdf(p, mu=0, sigma=1, tolerance=0.00001):
    # Compute an approximate inverse using binary search.
    # If not standard normal, rescale from the standard normal inverse.
    if mu != 0 or sigma != 1:
        return mu + sigma * inverse_normal_cdf(p, tolerance=tolerance)
    low_z, low_p = -10.0, 0  # normal_cdf(-10) is very close to 0
    hi_z, hi_p = 10.0, 1     # normal_cdf(10) is very close to 1
    while hi_z - low_z > tolerance:
        mid_z = (low_z + hi_z) / 2   # the midpoint, and
        mid_p = normal_cdf(mid_z)    # the cdf value there
        if mid_p < p:
            low_z, low_p = mid_z, mid_p  # midpoint is still too low; search above
        elif mid_p > p:
            hi_z, hi_p = mid_z, mid_p    # midpoint is still too high; search below
        else:
            break
    return mid_z

def bernoulli_trial(p):
    return 1 if random.random() < p else 0

def binomial(n, p):
    return sum(bernoulli_trial(p) for _ in range(n))

def make_hist(p, n, num_points):
    data = [binomial(n, p) for _ in range(num_points)]
    histogram = Counter(data)
    plt.bar([x - 0.4 for x in histogram.keys()],
            [v / num_points for v in histogram.values()],
            0.8, color='0.75')
    mu = p * n
    sigma = math.sqrt(n * p * (1 - p))
    # Plot the normal approximation as a line graph.
    xs = range(min(data), max(data) + 1)
    ys = [normal_cdf(i + 0.5, mu, sigma) - normal_cdf(i - 0.5, mu, sigma)
          for i in xs]
    plt.plot(xs, ys)
    plt.title("Binomial Distribution vs. Normal Approximation")
    plt.show()
15
null
2017-10-21
2017-10-21 04:19:45
2017-10-21
2017-10-21 12:05:57
4
false
ja
2017-10-21
2017-10-21 12:05:57
2
128c20480c3f
13.372
4
0
0
็ขบ็Ž‡ใจใใฎๆ•ฐๅญฆ็š„ๅŸบ็คŽใซๅฏพใ™ใ‚‹ใ‚ใ‚‹็จฎใฎ็†่งฃใ‚’ๆฌ ใ„ใŸใพใพใงใ€ใƒ‡ใƒผใ‚ฟใ‚ตใ‚คใ‚จใƒณใ‚นใ‚’่กŒใ†ใฎใฏๅ›ฐ้›ฃใงใ™ใ€‚ ็›ฎๆจ™ใฎใŸใ‚ใซใ€็พๅฎŸใฎไบ‹่ฑกใฎไธ็ขบๅฎŸใ•ใ‚’ๅฎš้‡ๅŒ–ใ™ใ‚‹ๆ–นๆณ•ใฎ๏ผ‘ใคใจใ—ใฆ็ขบ็Ž‡ใ‚’่€ƒใˆใ‚‹ใ“ใจใฏ้‡่ฆใงใ™ใ€‚่กจ่จ˜ใฏP(E)ใงใ€ใ€Œไบ‹่ฑกEใฎ็™บ็”Ÿใ™ใ‚‹็ขบ็Ž‡ใ€ใ‚’่กจใ—ใพใ™ใ€‚โ€ฆ
3
Probability It is difficult to do data science without some understanding of probability and its mathematical foundations. For our purposes, it is important to think of probability as one way of quantifying the uncertainty of real-world events. The notation is P(E), meaning โ€œthe probability that event E occursโ€. We use probability theory to build models, and we use it to evaluate them; probability comes up everywhere. Dependence and independence Roughly speaking, two events E and F are dependent if knowing something about whether E happens gives us information about whether F happens. For example, suppose we flip a coin twice. Even if we know whether the first flip came up heads, that tells us nothing about the outcome of the second flip, so those events are independent. On the other hand, knowing that the first flip came up heads clearly gives us information about the event that both flips come up tails (if the first is heads, both cannot be tails); those events are dependent. Mathematically, when events E and F are independent, the probability that both occur is the product of the probabilities that each occurs: P(E, F) = P(E)P(F). In this example, the probability of heads on the first flip is 1/2, and the probability of tails on both flips is 1/4, but the probability of heads on the first flip and tails on both flips is 0. Conditional probability When two events are independent, by definition P(E, F) = P(E)P(F). If they are not necessarily independent (and F has nonzero probability), the conditional probability of E given F is defined as: P(E|F) = P(E, F)/P(F). This can be thought of as the probability that E occurs, given that we know F has occurred. It is often rewritten as: P(E, F) = P(E|F)P(F). When E and F are independent, this gives: P(E|F) = P(E), which is the mathematical way of saying that knowing F occurred gives no additional information about whether E occurred. A well-known example involves a family with two children of unknown sexes. If we assume that each child is equally likely to be a boy or a girl, and that the sex of the second child is independent of the sex of the first, then the probability of no girls is 1/4, the probability of one girl and one boy is 1/2, and the probability of two girls is 1/4. What is the probability that both children are girls (event B), given that the older child is a girl (event G)? Since the event โ€œB and Gโ€ is the same as the event B, the definition of conditional probability gives: P(B|G) = P(B, G)/P(G) = P(B)/P(G) = 1/2. This probably matches your intuition. Similarly, we can ask for the probability that both children are girls given that at least one of them is a girl (event L); the value is different. As before, the event โ€œB and Lโ€ is the same as the event B, so: P(B|L) = P(B, L)/P(L) = P(B)/P(L) = 1/3. What does this mean? If all we know is that at least one child is a girl, then it is twice as likely that the family has one boy and one girl as that it has two girls. We can check this by generating many family configurations, as in the random_kid simulation in the code block above.
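The same two conditional probabilities can also be checked exactly rather than by simulation. A minimal sketch, enumerating the four equally likely (older, younger) combinations:

from fractions import Fraction
from itertools import product

# The four equally likely (older, younger) combinations.
families = list(product(["boy", "girl"], repeat=2))

both = [f for f in families if f == ("girl", "girl")]
older = [f for f in families if f[0] == "girl"]
either = [f for f in families if "girl" in f]

print(Fraction(len(both), len(older)))   # P(both | older is a girl)   = 1/2
print(Fraction(len(both), len(either)))  # P(both | at least one girl) = 1/3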
ๅฎถๆ—ๆง‹ๆˆใ‚’ๅคง้‡ใซ็”Ÿๆˆใ—ใฆใ€ใ“ใฎ็Šถๆณใ‚’็ขบ่ชใ—ใพใ™ใ€‚ ใƒ™ใ‚คใ‚บใฎๅฎš็† ใƒ‡ใƒผใ‚ฟใ‚ตใ‚คใ‚จใƒณใ‚นใงๆœ€่‰ฏใฎใƒ‘ใƒผใƒˆใƒŠใƒผใฎ1ใคใŒใƒ™ใ‚คใ‚บใฎๅฎš็†ใงใ™ใ€‚ ใ“ใ‚Œใฏใ€ๆกไปถไป˜ใ็ขบ็Ž‡ใ‚’่ฃ่ฟ”ใ—ใซใ™ใ‚‹ๆ‰‹ๆณ•ใงใ™ใ€‚ ไบ‹่ฑกFใŒ็™บ็”Ÿใ—ใŸ็Šถๆณใงใ€ใใ‚Œใจใฏ็‹ฌ็ซ‹ใ—ใŸไบ‹่ฑกEใŒ่ตทใใ‚‹็ขบ็Ž‡ใ‚’ๆฑ‚ใ‚ใ‚‹ใจใ—ใพใ—ใ‚‡ใ†ใ€‚ใ—ใ‹ใ—ใ€ไบ‹่ฑกEใŒ็™บ็”Ÿใ—ใŸ็Šถๆณใงใ€ไบ‹่ฑกFใฎ็™บ็”Ÿใ™ใ‚‹็ขบ็Ž‡ใ ใ‘ใŒๆ—ข็Ÿฅใงใ‚ใ‚‹ใจใ—ใพใ™ใ€‚ๆฌกใฎๅผใซใชใ‚Šใพใ™ใ€‚ P(E|F) = P(E, F)/P(F) = P(F) = P(F|E)P(E)/P(F) ไบ‹่ฑกFใฏใ€็›ธไบ’ใซๆŽ’ไป–็š„ใช2ใคใฎไบ‹่ฑกใ€ŒFใ‹ใคEใ€ใจใ€ŒFใ‹ใคnotEใ€ใซๅˆ†ๅ‰ฒใงใใพใ™ใ€‚ใ€Œnot Eใ€๏ผˆใคใพใ‚Šใ€ใ€ŒEใŒ็™บ็”Ÿใ—ใชใ„ใ€๏ผ‰ใ‚’ยฌEใจ่กจ่จ˜ใ—ใฆๆฌกใฎๅผใซใชใ‚Šใพใ™ใ€‚ P(F) =P(F, E)+P(F, ยฌE๏ผ‰ ไปฅไธŠใ‚ˆใ‚Šใ€ๆฌกใฎๅผใŒๅฐŽใๅ‡บใ•ใ‚Œใพใ™ใ€‚ P(E|F) = P(F|E)P(E)/[P(F|E)P(E)+P(F|ยฌE)P(ยฌE)] ใ“ใ‚ŒใŒใƒ™ใ‚คใ‚บใฎๅฎš็†ใงใ™ใ€‚ ไพ‹ใˆใฐใ€10,000ไบบใ‚ใŸใ‚Š1ไบบใŒ็™บ็—‡ใ™ใ‚‹็–พๆ‚ฃใŒใ‚ใ‚‹ใจใ—ใพใ—ใ‚‡ใ†ใ€‚ใใ—ใฆใ“ใฎ็–พๆ‚ฃใ‚’99%ใฎๆญฃ็ขบใ•ใงๆคœๅ‡บใงใใ‚‹ๆคœๆŸปใŒใ‚ใ‚‹ใจใ—ใพใ™ใ€‚๏ผˆ้™ฝๆ€งใจ้™ฐๆ€ง๏ผ‰ ๆคœๆŸปใฎ้™ฝๆ€งใŒๆ„ๅ‘ณใ™ใ‚‹ใ“ใจใฏไฝ•ใงใ—ใ‚‡ใ†ใ€‚ใ“ใ“ใงใ€ใ€ŒๆคœๆŸปใŒ้™ฝๆ€งใงใ‚ใ‚‹ใ€ไบ‹่ฑกใ‚’Tใจใ€ใ€Œ็–พๆ‚ฃใ‚’ๆŒใฃใฆใ„ใ‚‹ใ€ไบ‹่ฑกใ‚’Dใจใ—ใพใ™ใ€‚ใƒ™ใ‚คใ‚บใฎๅฎš็†ใงใฏๆคœๆŸปใŒ้™ฝๆ€งใงใ‚ใฃใŸๅ ดๅˆใซใ€็–พๆ‚ฃใ‚’ๆŒใฃใฆใ„ใ‚‹็ขบ็Ž‡ใฏไปฅไธ‹ใซใชใ‚Šใพใ™ใ€‚ P(D|T) = P(T|D)P(D) / [P(T|D)P(T) + P(T|ยฌD)P(ยฌD)] ใ“ใ“ใงใ€P(T|D)ใคใพใ‚Šใ€Œ็–พๆ‚ฃใ‚’ๆŒใคไบบใŒๆคœๆŸปใง้™ฝๆ€งใซใชใ‚‹็ขบ็Ž‡ใ€ใฏ0.99%ใงใ‚ใ‚‹ใ“ใจใŒๅˆ†ใ‹ใฃใฆใ„ใพใ™ใ€‚ใ‚ใ‚‹ไบบใŒ็–พๆ‚ฃใ‚’ๆŒใค็ขบ็Ž‡P(D)ใฏใ€1/10,000 = 0.00001ใ€‚ ็–พๆ‚ฃใ‚’ๆŒใฃใฆใ„ใชใ„ใŒใ€ใƒ†ใ‚นใƒˆใง้™ฝๆ€งใจใชใ‚‹็ขบ็Ž‡P(T|ยฌD)ใฏ0.01ใ€‚ใใ—ใฆใ‚ใ‚‹ไบบใŒ็–พๆ‚ฃใ‚’ๆŒใŸใชใ„็ขบ็Ž‡P(ยฌD)ใฏใ€0.9999ใงใ™ใ€‚ใ“ใ‚Œใ‚‰ใ‚’ไปฃๅ…ฅใ™ใ‚‹ใจใ€ P(D|T) = 0.98% ใคใพใ‚Šใ€ๆคœๆŸปใง้™ฝๆ€งใŒๅ‡บใŸไบบใŒๅฎŸ้š›ใซ็–พๆ‚ฃใ‚’ๆŒใฃใฆใ„ใ‚‹็ขบ็Ž‡ใฏ1%ไปฅไธ‹ใงใ‚ใ‚‹ใ“ใจใซใชใ‚Šใพใ™ใ€‚ ็ขบ็Ž‡ๅค‰ๆ•ฐ ็ขบ็Ž‡ๅค‰ๆ•ฐใจใฏใ€็ขบ็Ž‡ๅˆ†ๅธƒใซ้–ข้€ฃใฅใ„ใŸๅ€คใ‚’ๆŒใคๅค‰ๆ•ฐใงใ™ใ€‚ใ‚ณใ‚คใƒณใฎใ‚ชใƒขใƒ†ใŒใงใŸใ‚‰1ใ€่ฃใŒๅ‡บใŸใ‚‰0ใจใชใ‚‹ใ‚ˆใ†ใช็ขบ็Ž‡ๅค‰ๆ•ฐใŒ้žๅธธใซๅ˜็ด”ใชไพ‹ใงใ™ใ€‚ ใ‚ˆใ‚Š่ค‡้›‘ใชใ‚‚ใฎใฏใ€ใ‚ณใ‚คใƒณใ‚’10ๅ›žๆŠ•ใ’ใŸ้š›ใซใ‚ชใƒขใƒ†ใŒๅ‡บใŸๅ›žๆ•ฐใ‚’ใจใ‚‹ใ‚‚ใฎใ‚„ใ€range(10)ใ‹ใ‚‰ๅ‡็ญ‰ใซ็ขบใ‹ใ‚‰ใ—ใ•ใงๅ€คใ‚’ๅ–ใ‚Šๅ‡บใ™ใ‚‚ใฎใชใฉใŒ่€ƒใˆใ‚‰ใ‚Œใพใ™ใ€‚ ้–ข้€ฃใ™ใ‚‹ๅˆ†ๅธƒใฏใ€็ขบ็Ž‡ๅค‰ๆ•ฐใŒๅ–ใ‚Šใ†ใ‚‹ๅ€คใใ‚Œใžใ‚Œใฎ่ตทใ“ใ‚Šใ‚„ใ™ใ•ใ‚’ไธŽใˆใพใ™ใ€‚ใ‚ณใ‚คใƒณๆŠ•ใ’ใงใฏใ€ๅ€คใŒ0ใจใชใ‚‹็ขบ็Ž‡ใŒ0.5ใ€ๅ€คใŒ1ใจใชใ‚‹็ขบ็Ž‡ใŒ0.5ใงใ™ใ€‚range(10)ใฏ0ใ‹ใ‚‰9ใพใงใใ‚Œใžใ‚Œใฎๆญฃ็ขบใ•ใŒ0.1ใจใชใ‚‹ๅˆ†ๅธƒใ‚’ๆŒใกใพใ™ใ€‚ ็ขบ็Ž‡ๅค‰ๆ•ฐใฎๅ€คใ‚’็ขบ็Ž‡ใฎ้‡ใฟไป˜ใๅนณๅ‡ใง่จˆ็ฎ—ใ•ใ‚Œใ‚‹ๆœŸๅพ…ๅ€คใ‚’่ฉฑ้กŒใซใ™ใ‚‹ใ“ใจใŒใ‚ใ‚‹ใ€‚ไพ‹ใˆใฐใ€ใ‚ณใ‚คใƒณๆŠ•ใ’ใฎๆœŸๅพ…ๅ€คใฏ1/2๏ผˆ0*1/2 + 1*1/2๏ผ‰ใงใ‚ใ‚Šใ€range(10)ใฎๆœŸๅพ…ๅ€คใฏ4.5ใซใชใ‚Šใพใ™ใ€‚ ไป–ใฎไบ‹่ฑกใจๅŒๆง˜ใซๆกไปถไป˜ใใฎไบ‹่ฑกใซใคใ„ใฆใ‚‚ๅฎš็พฉใงใใพใ™ใ€‚ๅ…ˆ็จ‹ใฎๅญไพ›ใฎใƒฌใ‚’ไฝฟใ†ใจใ€Xใ‚’ๅฅณใฎๅญใฎๆ•ฐใ‚’่กจใ™็ขบ็Ž‡ๅค‰ๆ•ฐใจใ™ใ‚‹ใจใ€XใŒ0ใฎๅ ดๅˆใฏ1/4 ใ€1ใฎๅ ดๅˆใฏ1/2ใ€2ใฎๅ ดๅˆใฏ1/4ใ€‚2ไบบใฎใ†ใกๆ•ฐใชใใจใ‚‚1ไบบใŒๅฅณใฎๅญใงใ‚ใฃใŸๅ ดๅˆใฎๅฅณใฎๅญใฎๆ•ฐใ‚’ๆ–ฐใ—ใ„็ขบ็Ž‡ๅค‰ๆ•ฐYใจใ—ใฆๅฎš็พฉใงใใพใ™ใ€‚YใŒ1ใฎๅ ดๅˆใฎ็ขบ็Ž‡ใฏ2/3ใ€YใŒ2ใฎๅ ดๅˆใฏ1/3ใงใ™ใ€‚1ไบบ็›ฎใฎๅญไพ›ใŒๅฅณใฎๅญๅ‡บไผšใฃใŸๅ ดๅˆใฎใ€ๅฅณใฎๅญใฎๆ•ฐใ‚’็ขบ็Ž‡ๅค‰ๆ•ฐZใจใ™ใ‚‹ใจใ€ZใŒ๏ผ‘ใฎๅ ดๅˆใฎ็ขบ็Ž‡ใฏ1/2ใ€ZใŒ2ใฎๅ ดๅˆใฏ1/2ใจใชใ‚Šใพใ™ใ€‚ ใ“ใฎใ‚ใจใ€ๅคšใใฎๅ 
ด้ขใง็‰นๅˆฅใช้…ๆ…ฎใ‚’ๆ‰•ใ‚ใšใซ็ขบ็Ž‡ๅค‰ๆ•ฐใ‚’ไฝฟใ†ใ“ใจใซใชใ‚Šใพใ™ใ€‚ใ—ใ‹ใ—ใ€ๆณจๆ„ๆทฑใๆŽ˜ใ‚Šไธ‹ใ’ใฆใฟใ‚Œใฐใ€ใใ“ใซ็ขบ็Ž‡ๅค‰ๆ•ฐใŒไฝฟใ‚ใ‚Œใฆใ„ใ‚‹ใ“ใจใŒๅˆ†ใ‹ใ‚‹ใงใ—ใ‚‡ใ†ใ€‚ ้€ฃ็ถš็ขบ็Ž‡ๅˆ†ๅธƒ ใ‚ณใ‚คใƒณใชใ’ใฏ้›ขๆ•ฃๅž‹ๅˆ†ๅธƒใซ็›ธๅฝ“ใ—ใ€้›ขๆ•ฃ็š„ใช็ตๆžœใซๅฏพใ™ใ‚‹็”Ÿใฎ็ขบ็Ž‡ใซ้–ข้€ฃไป˜ใ‘ใ‚‰ใ‚Œใพใ™ใ€‚ใจใใซ้€ฃ็ถšใ—ใŸ็ตๆžœใฎๅˆ†ๅธƒใ‚’ใƒขใƒ‡ใƒซๅŒ–ใ™ใ‚‹ๅฟ…่ฆๆ€งใŒ็”Ÿใ˜ใพใ™ใ€‚๏ผˆใ“ใฎ็ตๆžœใฏๅธธใซๅฎŸๆ•ฐใงๅพ—ใ‚‰ใ‚Œใพใ™ใŒใ€ใ™ในใฆๅฎŸ็”ŸๆดปไธŠใฎไบ‹่ฑกใ‚’่กจใ—ใฆใ„ใ‚‹ใ‚ใ‘ใงใฏใ‚ใ‚Šใพใ›ใ‚“๏ผ‰ไพ‹ใˆใฐใ€ไธ€ๆง˜ๅˆ†ๅธƒๆดพใ€0ใ‹ใ‚‰1ใฎใ™ในใฆใฎๆ•ฐใซๅฏพใ—ใฆ็ญ‰ใ—ใ„้‡ใฟใ‚’ไธŽใˆใพใ™ใ€‚ 0ใ‹ใ‚‰1ใฎ้–“ใซใฏ็„ก้™ใฎๆ•ฐใŒๅญ˜ๅœจใ™ใ‚‹ใ“ใจใ‚’่€ƒใˆใ‚‹ใจใ€ใ“ใ“ใฎๆ•ฐใซๅฏพใ™ใ‚‹้‡ใฟใฏ0ใจใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ใ“ใฎใŸใ‚ใ€้€ฃ็ถšๅˆ†ๅธƒใฏ็ขบ็Ž‡ๅฏ†ๅบฆ้–ขๆ•ฐ๏ผˆprobability density function:pdf๏ผ‰ใง่กจ็พใ—ใ€็ขบ็Ž‡ๅค‰ๆ•ฐใŒใ‚ใ‚‹็ฏ„ๅ›ฒใฎๅ€คใ‚’ใจใ‚‹็ขบ็Ž‡ใฏใ€ใใฎ็ฏ„ๅ›ฒใง็ขบ็Ž‡ๅฏ†ๅบฆ้–ขๆ•ฐใ‚’็ฉๅˆ†ใ™ใ‚‹ใ“ใจใงๅพ—ใ‚‰ใ‚Œใพใ™ใ€‚ ไธ€ๆง˜ๅˆ†ๅธƒใฎๅฏ†ๅบฆ้–ขๆ•ฐใฏๆฌกใฎใ‚ˆใ†ใซ่กจใ—ใพใ™ใ€‚ ใ“ใฎๅˆ†ๅธƒใซๅพ“ใ†็ขบ็Ž‡ๅค‰ๆ•ฐใŒ0.2ใ‹ใ‚‰0.3ใฎๅ€คใ‚’ๅ–ใ‚‹็ขบ็Ž‡ใฏ1/10ใจใชใ‚Šใพใ™ใ€‚Pythonใฎrandom.random()ใฏไธ€ๆง˜ๅˆ†ๅธƒใง๏ผˆ็–‘ไผผ๏ผ‰ไนฑๆ•ฐใงใ™ใ€‚ ็ขบ็Ž‡ๅค‰ๆ•ฐใฎๅ€คใŒใ‚ใ‚‹ๅ€คไปฅไธ‹ใจใชใ‚‹็ขบ็Ž‡ใ‚’่กจใ™็ดฏ็ฉๅˆ†ๅธƒ้–ขๆ•ฐ๏ผˆcumulative distribution function:cdf๏ผ‰ใฎๆ–นใ‚’ไฝฟใ†ใ“ใจใ‚‚ใŸใพใซใ‚ใ‚Šใพใ™ใ€‚ ๆญฃ่ฆๅˆ†ๅธƒ ๆญฃ่ฆๅˆ†ๅธƒใฏใ€ใ‚ใ‚‰ใ‚†ใ‚‹ๅˆ†ๅธƒใฎไธญใงๆœ€ใ‚‚้‡่ฆใชๅญ˜ๅœจใงใ™ใ€‚ใ“ใฎ้‡ฃ้˜ๅž‹ใฎๅˆ†ๅธƒใฏ2ใคใฎใƒ‘ใƒฉใƒกใƒผใ‚ฟใ€ๅนณๅ‡ยตใจๆจ™ๆบ–ๅๅทฎฯƒใงๅฎš็พฉใ•ใ‚Œใพใ™ใ€‚ๅนณๅ‡ใฏ้‡ฃ้˜ใฎไธญๅฟƒใ‚’่กจใ—ใ€ๆจ™ๆบ–ๅๅทฎใฏ้‡ฃใ‚Š้˜ใฎๆจชๅน…ใ‚’่กจใ—ใพใ™ใ€‚ ็ขบ็Ž‡ๅฏ†ๅบฆ้–ขๆ•ฐใฏๆฌกใฎๅผใงไธŽใˆใ‚‰ใ‚Œใพใ™ใ€‚ ๅฎŸ่ฃ…ใฏๆฌกใฎใ‚ˆใ†ใซใชใ‚Šใพใ™ใ€‚ ใ‚ฐใƒฉใƒ•ใซใ™ใ‚‹ใจใ“ใฎใ‚ˆใ†ใซใชใ‚Šใพใ™ใ€‚ Various Normal pdfs(ๆญฃ่ฆๅˆ†ๅธƒใฎ็ขบ็Ž‡ๅฏ†ๅบฆ้–ขๆ•ฐ) ยต=0, ฯƒ=1ใฎๅ ดๅˆใ‚’ใ€ๆจ™ๆบ–ๆญฃ่ฆๅˆ†ๅธƒใจใ„ใ„ใพใ™ใ€‚ ZใŒๆจ™ๆบ–ๆญฃ่ฆๅˆ†ๅธƒใซๅพ“ใ†็ขบ็Ž‡ๅค‰ๆ•ฐใงใ‚ใฃใŸๅ ดๅˆใ€ X = ฯƒZ + ยต ็ขบ็Ž‡ๅค‰ๆ•ฐXใฏๅนณๅ‡ยตใ€ๆจ™ๆบ–ๅๅทฎฯƒใฎๆญฃ่ฆๅˆ†ๅธƒใจใชใ‚Šใพใ™ใ€‚ ้€†ใซXใŒๅนณๅ‡ยตๆจ™ๆบ–ๅๅทฎฯƒใฎๆญฃ่ฆๅˆ†ๅธƒใซๅพ“ใ†็ขบ็Ž‡ๅค‰ๆ•ฐใงใ‚ใ‚‹ใชใ‚‰ใ€ Z = (X โ€” ยต) / ฯƒ ็ขบ็Ž‡ๅค‰ๆ•ฐZใฏๆจ™ๆบ–ๆญฃ่ฆๅˆ†ๅธƒใซๅพ“ใ„ใพใ™ใ€‚ ๆญฃ่ฆๅˆ†ๅธƒใฎ็ดฏ็ฉๅˆ†ๅธƒ้–ขๆ•ฐใฏๅˆๆญฉ็š„ใชๆ–นๆณ•ใงใฏๅฎŸ่ฃ…ใงใใพใ›ใ‚“ใŒใ€Pythonใฎmath.erfใ‚’ไฝฟใˆใฐๆฌกใฎใ‚ˆใ†ใซใชใ‚Šใพใ™ใ€‚ Various Normal cdfs๏ผˆๆญฃ่ฆๅˆ†ๅธƒใฎ็ดฏ็ฉๅˆ†ๅธƒ้–ขๆ•ฐ๏ผ‰ ็‰นๅฎšใฎ็ขบ็Ž‡ใจใชใ‚‹ๅ€คใ‚’่ฆ‹ใคใ‘ใ‚‹ใŸใ‚ใซnormal_cdfใฎ้€†้–ขๆ•ฐใŒๅฟ…่ฆใจใชใ‚‹ๅ ดๅˆใŒใ‚ใ‚Šใพใ™ใ€‚้€†้–ขๆ•ฐใ‚’ๅ˜็ด”ใซ่จˆ็ฎ—ใ™ใ‚‹ๆ–นๆณ•ใฏใ‚ใ‚Šใพใ›ใ‚“ใŒใ€nomal_cdfใฏ้€ฃ็ถšใงๅ˜่ชฟๅข—ๅŠ ใงใ‚ใ‚‹ใŸใ‚ใ€ไบŒๅˆ†ๆŽข็ดขใŒไฝฟใˆใพใ™ใ€‚ ใ“ใฎ้–ขๆ•ฐใฏใ€็›ฎ็š„ใฎ็ขบ็Ž‡ใซๅๅˆ†่ฟ‘ใฅใใพใงZใฎๅŒบ้–“ใฎไบŒ็ญ‰ๅˆ†ใ‚’็นฐใ‚Š่ฟ”ใ—ใพใ™ใ€‚ ไธญๅฟƒๆฅต้™ๅฎš็† ๆญฃ่ฆๅˆ†ๅธƒใŒๆœ‰็”จใงใ‚ใ‚‹1ใคใฎ็†็”ฑใŒไธญๅฟƒๆฅต้™ๅฎš็†ใงใ™ใ€‚ ็ฐกๆฝ”ใซ่ชฌๆ˜Žใ™ใ‚‹ใจใ€ๅธธใซๅคšๆ•ฐใฎ็‹ฌ็ซ‹ใงๅŒไธ€ใฎๅˆ†ๅธƒใซๅพ“ใ†็ขบ็Ž‡ๅค‰ๆ•ฐใฎๅนณๅ‡ใจใ—ใฆๅฎš็พฉใ•ใ‚Œใ‚‹็ขบ็Ž‡ๅค‰ๆ•ฐใŒใ€ใŠใ‚ˆใๆญฃ่ฆๅˆ†ๅธƒใจใชใ‚‹ใจใ„ใ†ใฎใŒไธญๅฟƒๆฅต้™ๅฎš็†ใงใ™ใ€‚ ไพ‹ใˆใฐใ€ๅนณๅ‡ใŒยตใ€ๆจ™ๆบ–ๅๅทฎใŒฯƒใฎ็ขบ็Ž‡ๅค‰ๆ•ฐ x1, โ€ฆ , xn ใŒใ‚ใ‚‹ใจใ—ใพใ™ใ€‚๏ฝŽใฏๅๅˆ†ใซๅคงใใ„ใ‚‚ใฎใจใ—ใพใ™ใ€‚ใ“ใฎๆ™‚ใ€1/n(x1 + โ€ฆ + xn)ใฏใŠใ‚ˆใๅนณๅ‡ฮผใ€ๆจ™ๆบ–ๅๅทฎฯƒ/โˆšnใฎๆญฃ่ฆๅˆ†ๅธƒใจใชใ‚Šใพใ™ใ€‚ๆฌกใฎๅผใฏๅŒๆง˜ใซๅนณๅ‡0ใ€ๆจ™ๆบ–ๅๅทฎ1ใฎๆญฃ่ฆๅˆ†ๅธƒใงใ™ใ€‚ ((x1 + โ€ฆ + xn) / ยตn) / ฯƒโˆšn 
ใ“ใ‚Œใ‚’็ฐกๅ˜ใซ่ชฌๆ˜Žใ™ใ‚‹ใซใฏใ€nใจpใง่กจใ•ใ‚Œใ‚‹ไบŒ้ …็ขบ็Ž‡ๅค‰ๆ•ฐใ‚’่ฆ‹ใพใ—ใ‚‡ใ†ใ€‚็ขบ็Ž‡pใง1ใ€็ขบ็Ž‡(1-p)ใง0ใจใชใ‚‹nๅ€‹ใฎ็‹ฌ็ซ‹ใ—ใŸ็ขบ็Ž‡ๅค‰ๆ•ฐBernoulli(p)ใ‚’ๅˆ่จˆใ—ใŸใ‚‚ใฎใŒBinomial(n, p)็ขบ็Ž‡ๅค‰ๆ•ฐใงใ™ใ€‚ Bernoulli(p)ใฎๅนณๅ‡ใฏp,ๆจ™ๆบ–ๅๅทฎใฏ(p(1-p)**0.5ใงใ™ใ€‚ไธญๅฟƒๆฅต้™ๅฎš็†ใซๅคœใจใ€nใŒๅคงใใ‘ใ‚ŒใฐBinomial(n, p)ใฏใŠใŠใ‚ˆใๅนณๅ‡ ยต=np,ๆจ™ๆบ–ๅๅทฎฯƒ = (np(1-p)**0.5ใฎๆญฃ่ฆๅˆ†ๅธƒใจใชใ‚Šใพใ™ใ€‚ไธฆในใฆใƒ—ใƒญใƒƒใƒˆใ™ใ‚Œใฐใ€ใใฎ้กžไผผๆ€งใŒๆŠŠๆกใงใใ‚‹ใงใใพใ™ใ€‚ ๅ‡บๅ…ธ๏ผšไบŒ้ …ๅˆ†ๅธƒใจๆญฃ่ฆๅˆ†ๅธƒ ใ“ใฎ่ฟ‘ไผผใ‹ใ‚‰ๅพ—ใ‚‰ใ‚Œใ‚‹ๆ•™่จ“ใฏใ€ๆญชใฟใŒใชใ„ใจใ•ใ‚Œใฆใ„ใ‚‹ใ‚ณใ‚คใƒณใ‚’๏ผ‘๏ผ๏ผๅ›žๆŠ•ใ’ใŸ้š›ใซใ€ใ‚ชใƒขใƒ†ใฏ๏ผ–๏ผๅ›žไปฅไธŠๅ‡บใ‚‹็ขบ็Ž‡ใ‚’ๆฑ‚ใ‚ใ‚‹ใซใฏใ€Nomal(50, 5)ใŒ๏ผ–๏ผๅธ‚ๅ ดใจใชใ‚‹็ขบ็Ž‡ใ‚’ๆฑ‚ใ‚ใ‚Œใฐ่‰ฏใ„ใ“ใจใซใชใ‚Šใพใ™ใ€‚ใ“ใ‚ŒใฏBinomial(100, 0.5)ใฎ็ดฏ็ฉๅˆ†ๅธƒ้–ขๆ•ฐใ‚’่จˆ็ฎ—ใ™ใ‚‹ใ‚ˆใ‚Šใ‚‚็ฐกๅ˜ใงใ™ใ€‚ ่ฃœ่ถณ scipy.statsใฏๅคงๆŠตใฎ็ขบ็Ž‡ๅˆ†ๅธƒใซๅฏพใ™ใ‚‹PDF, CDFใ‚’ใ‚‚ใจใ‚ใ‚‹้–ขๆ•ฐใ‚’ๆไพ›ใ—ใฆใ„ใพใ™ใ€‚ 1.5. Scipy: ้ซ˜ๆฐดๆบ–ใฎ็ง‘ๅญฆๆŠ€่ก“่จˆ็ฎ— โ€” Scipy lecture notes ใฏ GSL (GNU Scientific Library for C and C++) ใ‚„ Matlab ใฎใƒ„ใƒผใƒซใƒœใƒƒใ‚ฏใ‚นใฎใ‚ˆใ†ใชไป–ใฎๆจ™ๆบ–็š„ใช็ง‘ๅญฆๆŠ€่ก“่จˆ็ฎ—ใƒฉใ‚คใƒ–ใƒฉใƒชใจๆฏ”่ผƒใ•ใ‚Œใพใ™ใ€‚ ใฏ Python ใงใฎ็ง‘ๅญฆๆŠ€่ก“่จˆ็ฎ—ใƒซใƒผใƒใƒณใฎไธญๆ ธใจใชโ€ฆwww.turbare.net
Probability
5
็ขบ็Ž‡-128c20480c3f
2018-06-02
2018-06-02 09:05:23
https://medium.com/s/story/็ขบ็Ž‡-128c20480c3f
false
509
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Okazawa Ryusuke
I wanna be a DataScientist&Psychologist @Saga University economics.
2f57c3ad8306
SEKAINOOKAZAWA
80
65
20,181,104
null
null
null
null
null
null
0
null
0
aa31de4e789
2018-04-06
2018-04-06 16:28:08
2018-04-12
2018-04-12 19:38:14
1
false
en
2018-04-12
2018-04-12 19:38:14
2
128d83722a73
1.437736
1
0
0
The United States Food and Drug Administration has recently approved the sale of the first medical device that uses artificial intelligenceโ€ฆ
5
FDA approves artificial intelligence device to detect diabetic eye disease Image courtesy of Amanda Dalbjörn/Unsplash The United States Food and Drug Administration has recently approved the sale of the first medical device that uses artificial intelligence (AI) to detect the most common cause of vision loss among patients with diabetes. The device is called IDx-DR, and it is produced in Iowa City, Iowa, by a company called IDx LLC. It is able to detect diabetic retinopathy, a condition in which high blood sugar levels lead to damage in the blood vessels of the retina and ultimately vision loss. The IDx-DR program works by using AI software to analyze images of a patient’s eye taken with a retinal camera, and then tells the doctor whether or not the patient has more than mild diabetic retinopathy. If the result is positive, the patient should be referred to an eye care professional for possible treatment; if negative, the patient should be re-screened in 12 months. This device approval will make a big impact because it provides a screening decision without the need for a specialist to interpret the results. Because of this, the device is usable by healthcare providers who are not specialized in eye care, such as primary care physicians, who interact with diabetes patients far more frequently. The use of this new AI device may potentially take away the need for a referral to a specialist, reduce costs for patients and insurance companies, and improve the turnaround time to start treatment after a diagnosis. Since most diabetes patients are not adequately screened for diabetic retinopathy each year, this early detection will be a crucial part of their care and may prevent vision loss in the long run. For additional information, please see Reuters. Questions: Are you aware of other biotech companies exploring artificial intelligence for medical devices? How practical do you think it will be to implement this device into standard evaluation for diabetes?
FDA approves artificial intelligence device to detect diabetic eye disease
5
fda-approves-artificial-intelligence-device-to-detect-diabetic-eye-disease-128d83722a73
2018-04-25
2018-04-25 15:13:04
https://medium.com/s/story/fda-approves-artificial-intelligence-device-to-detect-diabetic-eye-disease-128d83722a73
false
328
This is a publication for the Center for Drug Information and Natural Products at MCPHS University
null
medicationhealthnews
null
Medication Health News
null
sustainable-health
HEALTH,WELLNESS,SAFETY,MEDICATION,FEATURED STORIES
MedHealthNws
Health And Wellness
health-and-wellness
Health And Wellness
1,653
Shandel
Sixth year PharmD student in Boston, MA. Lover of cats, coffee, and the NE Patriots.
85451008dd67
shandelcorin
0
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-21
2018-05-21 01:44:14
2018-05-23
2018-05-23 22:18:21
9
false
en
2018-05-23
2018-05-23 22:18:21
0
128f9bb5e4b3
2.811321
0
0
0
Abstract โ€” Multivariate, Time Series analysis is a very common statistical application in many fields. Such analysis is also applied inโ€ฆ
1
Financial analysis with plots Abstract — Multivariate time-series analysis is a very common statistical application in many fields. Such analysis is also applied in large-scale industrial processes like batch processing, which is heavily used in the chemical, textile, pharmaceutical, food-processing, and many other industrial domains. The goals of this project are: 1. Explore the use of supervised learning techniques like logistic regression and linear SVM to simplify and convert the dimensionality of a multivariate dependent variable into a confidence score predicting the sale. 2. Instead of using classical multivariate classification techniques like PCA/PLS or nearest-neighbour methods, explore the application of the novel concept of time-series shapelets to predicting the qualitative outcome of a batch. In this project we work with a categorical time-series dataset consisting of daily sales data; the objective was to predict total sales for every product and store in the next month. Summary of the given features used during the course of the project: below is the description of the data set and of each criterion. Data wrangling and manipulation: First we looked for missing data, which fortunately was not present here. The most important step in a time-series analysis is then to turn the date column into a DateTime data type and make it the index of the DataFrame. Exploratory data analysis: Pair plot: the plot shows the pairwise relationships and correlation between the banks. Interactive visualization of the correlation plot. Morgan Stanley normality plot: the figure shows the distribution plot for Morgan Stanley for the fiscal year 2015. Citibank distribution plot: Citibank had a major crash during the period, so its probability plot does not resemble a normal distribution; we explore this further below. Stock prices across the period: the unusual behavior of Citibank noted in the previous plot is evident in this graph. After the election, Citibank's stock price crashed dramatically and never recovered, whereas Goldman Sachs also saw a drop but recovered within a short time. Interactive plot of closing prices. Here we developed a 30-day moving average and used it to anticipate future scenarios, as in the sketch below. Closing price of Bank of America stock vs. the 30-day moving average: since the period studied is highly volatile, the data cannot be trusted for future insights. Bollinger band: about 90% of the readings lie within the band's upper and lower limits. Around August 2015 there was a major Chinese market crash, which pushed the data out of the range. I will continue working on financial problems; do read my stories on time series and forecasting.
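As a concrete illustration of the 30-day moving average and Bollinger band described above, here is a hedged sketch; the synthetic random-walk series stands in for the bank's closing prices (the original analysis used downloaded stock data), and all names are ours.

import numpy as np
import pandas as pd

# Synthetic business-day price series standing in for real closing prices.
idx = pd.date_range("2015-01-01", periods=500, freq="B")
close = pd.Series(15 + 0.2 * np.random.randn(len(idx)).cumsum(),
                  index=idx, name="Close")

ma30 = close.rolling(window=30).mean()   # 30-day moving average
std30 = close.rolling(window=30).std()

# Conventional Bollinger band: moving average ± 2 rolling standard
# deviations; roughly 90% or more of readings fall between the limits.
band = pd.DataFrame({"close": close, "30d MA": ma30,
                     "upper": ma30 + 2 * std30,
                     "lower": ma30 - 2 * std30})
band.plot(title="Closing price with 30-day MA and Bollinger band")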
Financial analysis with plots
0
financial-analysis-with-plots-128f9bb5e4b3
2018-05-23
2018-05-23 22:24:22
https://medium.com/s/story/financial-analysis-with-plots-128f9bb5e4b3
false
427
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Vikas Mishra
null
a0d8f6b527fa
m.vkumar89
3
4
20,181,104
null
null
null
null
null
null
0
%matplotlib inline
import scipy, numpy as np, pandas as pd, sklearn, vincent, matplotlib as mp, matplotlib.pyplot as plt, datetime as dt
import geopy

# Read the Consumer Complaint Database into a Pandas dataframe.
df = pd.read_csv("Consumer_Complaints.csv", low_memory=False, parse_dates=True)

# Sort the dataframe by the complaint ID.
df = df.sort_values(by='Complaint ID')

# Convert dates into date format.
for col in df.columns:
    if 'Date' in col:
        df[col] = pd.to_datetime(df[col], format="%m/%d/%Y")

# Add some useful additional columns.
df['year'] = df['Date received'].apply(lambda d: d.year)
df['year-month'] = df['Date received'].apply(lambda d: dt.datetime(d.year, d.month, 1))

# This function counts unique elements in columns.
def count_unique(dataframe, header_name):
    z = dataframe[header_name].unique()
    return sum(1 for v in z if pd.notnull(v))

# This function sorts a dictionary of counts by the count in descending order.
def sort_by_count(dictionary):
    ordered_list = sorted(dictionary.items(), key=lambda x: x[1])
    ordered_list.reverse()
    return ordered_list

# This function computes the percentage of missing data per header name
# for a sub-frame of the CCD.
def missing_percentage(dataframe, header_name):
    count = 0
    for x in dataframe[header_name]:
        try:
            if np.isnan(x):
                count += 1
        except TypeError:
            pass
    return (count / dataframe.shape[0]) * 100

# This function computes the dispute rate for a sub-frame of the CCD.
def dispute_rate(dataframe):
    count = 0
    try:
        for x in dataframe['Consumer disputed?']:
            if x == 'Yes':
                count += 1
        return 100 * (count / dataframe.shape[0])
    except ZeroDivisionError:
        return 0

r = []
for col in df.columns:
    r.append([col, missing_percentage(df, col), count_unique(df, col)])
missing_data = pd.DataFrame(r, columns=['Column Name', 'Missing (%)', 'Unique'])
print(missing_data)

Missing and Unique Data in Consumer Complaint Database

    Column Name                    Missing (%)   Unique
0   Date received                  0.000000      2434
1   Product                        0.000000      18
2   Sub-product                    21.592543     76
3   Issue                          0.000000      166
4   Sub-issue                      46.407578     218
5   Consumer complaint narrative   71.734584     296475
6   Company public response        67.777925     10
7   Company                        0.000000      4914
8   State                          1.287822      63
9   ZIP code                       1.714586      29109
10  Tags                           86.260726     3
11  Consumer consent provided?     49.991369     4
12  Submitted via                  0.000000      6
13  Date sent to company           0.000000      2383
14  Company response to consumer   0.000459      8
15  Timely response?               0.000000      2
16  Consumer disputed?             29.434427     2
17  Complaint ID                   0.000000      1089126
18  year                           0.000000      8
19  year-month                     0.000000      80

fig, axis = plt.subplots(1)
df['year-month'].value_counts().sort_index().plot(ax=axis, color='r')
maxdate = df['Date received'].max().strftime('%Y-%m-%d')
axis.set_title('Complaints over time (monthly) - through %s' % maxdate)
axis.set_ylim(0, axis.get_ylim()[1])

top_twenty_complaints = df['Company'].value_counts().iloc[:20]
print(top_twenty_complaints)

EQUIFAX, INC.                             90537
Experian Information Solutions Inc.       80169
BANK OF AMERICA, NATIONAL ASSOCIATION     76758
TRANSUNION INTERMEDIATE HOLDINGS, INC.    73049
WELLS FARGO & COMPANY                     64795
JPMORGAN CHASE & CO.                      53670
CITIBANK, N.A.                            43513
CAPITAL ONE FINANCIAL CORPORATION         29123
OCWEN LOAN SERVICING LLC                  26734
Navient Solutions, LLC.                   25622
NATIONSTAR MORTGAGE                       19066
SYNCHRONY FINANCIAL                       18326
U.S. BANCORP                              15360
Ditech Financial LLC                      13220
AMERICAN EXPRESS COMPANY                  11417
PNC Bank N.A.                             10586
ENCORE CAPITAL GROUP INC.                 10134
DISCOVER BANK                             8849
PORTFOLIO RECOVERY ASSOCIATES INC         8436
TD BANK US HOLDING COMPANY                8343

for x in df['Company response to consumer'].unique():
    print(x, dispute_rate(df[df['Company response to consumer'] == x]))

Dispute Rates per Response:
Closed with relief              13.382408456021139
Closed with non-monetary relief 11.879490707375135
Closed with explanation         21.66699718182761
Closed without relief           26.943962380339247
Closed with monetary relief     10.815735660342977
Closed                          20.865678663843802
Untimely response               0.050352467270896276
In progress                     0.0

r = []
for i in range(0, 19):
    x = missing_data.iloc[i]
    if int(x['Unique']) <= 100:
        unique_cols = df[x['Column Name']].unique()
        for y in unique_cols:
            df1 = df[df[x['Column Name']] == y]
            count = dispute_rate(df1)
            if abs(count - 19.544) >= 3 and df1.shape[0] / df.shape[0] >= 0.01:
                r.append([x['Column Name'], y, count])
pd.DataFrame(r, columns=['Column Name', 'Column Value', 'Dispute Rate'])
15
null
2018-08-20
2018-08-20 03:24:42
2018-08-20
2018-08-20 04:56:09
1
false
en
2018-08-20
2018-08-20 04:56:09
2
1295d03cf83b
4.535849
1
0
0
This is a simple exploratory data analysis for the Consumer Complaint Database of the CFPB (Consumer Financial Protection Bureau). Thisโ€ฆ
4
Dispute Rates for Consumer Data This is a simple exploratory data analysis of the Consumer Complaint Database of the CFPB (Consumer Financial Protection Bureau). This publicly available database tracks complaints made to the agency regarding financial products, from the date of the complaint and the issue at stake, to the company’s response (timely or otherwise), and finally to the consumer’s reaction to that response — whether or not they chose to dispute it. The data can be found at https://www.consumerfinance.gov/data-research/consumer-complaints/#download-the-data. We’ll end up using the helper functions defined in the accompanying code in the analysis. Let’s begin by asking some basic questions: Is the data complete? How much of the data (if any) is missing? How many unique options are there per column? Running the script presents us with a table answering these questions. The table lists the missing data and the unique options for the columns in our dataframe. We can see that a big chunk of the missing data concerns the narrative; in other words, most people prefer to fill out a form made of check boxes rather than write out a detailed narrative. The company’s public response is also notably lacking. Let’s plot the complaints over time. The graph shows a general rise in complaints since the creation of the CFPB, which could indicate one of two things: A growing acceptance/awareness of the CFPB and its purpose. A growing tendency towards consumer failure in the marketplace. It’s interesting to note the curious spike in 2017, which raises several questions about politics since the election of Donald Trump and how consumers reacted to a new administration unfriendly towards consumer protection. That is not the main focus of this essay, however, so I won’t dwell on the matter. Since the ratio of the total number of complaints to the number of companies is 1089126 : 4914, or roughly 222 : 1, let’s figure out the top twenty companies with the most complaints against them. The resulting list is quite interesting. We see several famous banks and credit card companies, but at the top of the list is Equifax, whose complaints have drawn attention in the news. An interesting quantity to consider is the dispute rate: the percentage of times consumers dispute the responses given by companies. There are 8 unique responses given by companies to consumers (’Company response to consumer’). Let’s compute the dispute rates for each of them. We see that whenever there’s relief (monetary or otherwise), consumers dispute the response at significantly lower rates. Are there any other factors that influence this? To find out, we first look at the overall dispute rate using dispute_rate(df), which turns out to be 19.544659584375644. Let’s say that a significant deviation from this dispute rate is 3%. Now, we compute the dispute rates when columns are set to particular values (restricted to values that make up at least 1% of their column, for statistical significance). The final table contains the results of that code snippet. We can draw some conclusions from this data. People tend to dispute the company response more when it’s their home on the line — mortgages and home loans being the key examples. People tend to dispute the company response on credit scores less. They also tend to dispute things less when the company has a certain public response (see row 21 above).
When there’s an issue about communication (rows 17–20), it appears that the company response is satisfactory enough not to warrant as many disputes. Interestingly, when the complaint was submitted via referral, postal mail, or telephone, dispute rates are significantly lower.
Dispute Rates for Consumer Data
2
dispute-rates-for-consumer-data-1295d03cf83b
2018-08-20
2018-08-20 04:56:09
https://medium.com/s/story/dispute-rates-for-consumer-data-1295d03cf83b
false
1,149
null
null
null
null
null
null
null
null
null
Python
python
Python
20,142
Ashwath Rabindranath
Writing about what interests me.
77173c7f52b7
ashwathrabindranath
47
63
20,181,104
null
null
null
null
null
null
0
X:1
T:sooranbushi
M:2/4
L:1/8
K:F
C2 DF | A2 GF | A2 GF | G2 FC | D2 FC | D2 D2 | z2 z2 ||
zG AA | GA AA | GA AA | GF D2 | zA, CA, | CD GF | zG AF | DC FD |
D z CC | DF A2 | A3 c | G F2 C | D2 D2 | D2 z2 ||

(59, False) (56, False) (52, False) (47, False) |||
(59, True) (56, True) (52, True) (47, True) |||
2
null
2017-11-21
2017-11-21 01:08:12
2017-11-21
2017-11-21 10:57:00
28
false
ja
2018-04-18
2018-04-18 08:22:59
68
1298d29f8101
46.802
48
1
0
็›ฎ็š„ใ€ใƒขใƒ‡ใƒซใฎใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใ€ๅญฆ็ฟ’ใฎๆˆฆ็•ฅใชใฉ
4
A Survey of Music Generation Techniques Using Deep Learning

AI DJ Project — http://qosmo.jp/aidj/ Visualization: Shoya Dozono

Overview: The dream of having computers generate music automatically goes back about as far as computers themselves. Music generation using artificial intelligence (AI) in the broad sense has likewise long been attempted under the names algorithmic or automatic composition. Yet examples that apply deep learning — the technology that triggered much of the current attention to AI — to music generation are in fact still not that numerous. In this article I classify studies that use deep learning for the generation of music by technique, input/output data, model architecture, and training strategy, and summarize them together with the music they produced.

This article is my (Tokui's) interpretation and summary based on the excellent survey below. I have added a few examples the survey does not cover, and omitted some others. The survey not only captures the state of this field concisely and comprehensively, it also scatters hints about future research directions throughout, so I recommend anyone interested in this area read it.

Briot, J.-P., Hadjeres, G., & Pachet, F. (2017). Deep Learning Techniques for Music Generation — A Survey. [1709.01620] Deep Learning Techniques for Music Generation - A Survey — Abstract: This book is a survey and an analysis of different ways of using deep learning (deep artificial neural… arxiv.org

Since the aim here is a rough overview of the field, I have left out some of the technical detail in the original survey; if you want the details, consult the survey or the original papers. I am not very well versed in music theory, so some of my terminology may be off — if you spot something strange, please point it out!

Incidentally, for deep learning applied to the analysis of music, the following tutorial is a good summary, for reference:

Choi, K., Fazekas, G., Cho, K., & Sandler, M. (2017). A Tutorial on Deep Learning for Music Information Retrieval.
Retrieved from http://arxiv.org/abs/1709.04396

Evaluation axes

Objective: First, let us organize the evaluation axes that need to be considered when classifying the studies, starting with the target, or objective, of generation. Even though we speak of generating "music" in one breath, several targets are possible: melody; harmony; accompaniment / chord progression; lead sheet (the notation used in jazz and elsewhere that writes down only the melody, chords, and lyrics); rhythm. Most research so far targets the generation of melody or harmony — areas with established theory that, being easy to handle on a computer, have traditionally been well studied.

The inputs and outputs of a generation system also vary. Some systems take a melodic phrase as input, others a single note, and some no input at all. Output is typically MIDI or audio, but some systems output a score for humans to perform.

The "autonomy" of generation is another point: does the system accept user input while generating, or does it generate fully autonomously and automatically? One can imagine systems in which the user steers (navigates) the generation, and a system aiming at jazz-style interplay would treat the partner's performance as an input.

Representation: Whenever a computer handles music, with or without AI, how the information is "represented" is a key point. For AI-based music generation there are three kinds of data to represent: the data used for training; the data input at generation time; and the data that is generated and output. Variations arise from how each of these three is represented, and these representations are closely tied to the objectives above.

1. Audio signal: Since we are dealing with music, handling the audio signal itself seems the most natural choice. As you know, a CD represents sound by sampling the sound pressure 44,100 times per second at 16 bits (65,536 levels). While audio represents the music faithfully, the computational cost is (at least for now) too high, so it is not yet widely used. In one of the few examples, Google/DeepMind's WaveNet, generating one second of audio (and only 16 kHz, 8-bit audio at that) took several minutes even on Google's fast machines (though this has reportedly been sped up dramatically of late).

If future research brings a great leap forward, it may well be in this area. Beyond generating waveforms from scratch as WaveNet does, directions such as combining existing sound materials are also conceivable. https://deepmind.com/blog/wavenet-generative-model-raw-audio/
่จ˜ๅท Symbolic 2.1 โ€” MIDI ใ‚ณใƒณใƒ”ใƒฅใƒผใ‚ฟไธŠใงๆฅฝ่ญœใฎๆƒ…ๅ ฑใ‚’ๆ‰ฑใ†ใŸใ‚ใซ้–‹็™บใ•ใ‚ŒใŸMIDIไฟกๅทใฏใ€ๅคใใ‹ใ‚‰ใ‚ณใƒณใƒ”ใƒฅใƒผใ‚ฟ้Ÿณๆฅฝใฎไธ–็•Œใงๆ‰ฑใ‚ใ‚Œใฆใใพใ—ใŸใ€‚้Ÿณใฎ้ซ˜ใ•(pitch), ๅผทใ•(velocity)ใ€้•ทใ•(duration)ใฎๆƒ…ๅ ฑใจใ—ใฆใ€ๅ„้Ÿณ็ฌฆใŒๆ‰ฑใ‚ใ‚Œใพใ™ใ€‚ๆฅฝ่ญœใ‚’ๆฏ”่ผƒ็š„ๅฟ ๅฎŸใซ็ฐกๆฝ”ใซ่กจ็พใงใใ‚‹ใจใ„ใ†ๅˆฉ็‚นใŒใ‚ใ‚Šใ€ๆœฌใ‚ตใƒผใƒ™ใ‚คใฎ็ ”็ฉถใงใ‚‚MIDIใ‚’ๆ‰ฑใ†็ ”็ฉถใฏๅฐ‘ใชใใ‚ใ‚Šใพใ›ใ‚“ใ€‚ MIDIใฏ้žๅธธใซไฝฟใ„ๅ‹ๆ‰‹ใฎใ‚ˆใ„้Ÿณๆฅฝ่กจ็พใงใ™ใŒใ€ๆฅฝ่ญœใซใชใ‚‰ใชใ„ใƒป่จ˜่ฟฐใงใใชใ„้Ÿณใ‚„้Ÿณๆฅฝ(ไพ‹. ๅพฎๅˆ†้Ÿณใชใฉ)ใŒๅคšๆ•ฐใ‚ใ‚‹ใ“ใจใ‚’ๅฟต้ ญใซใŠใ„ใฆไฝฟใ†ในใใงใ—ใ‚‡ใ†ใ€‚ 2.2. โ€” ใƒ”ใ‚ขใƒŽใƒญใƒผใƒซ Piano Roll ๆœฌใ‚ตใƒผใƒ™ใ‚คใฎใชใ‹ใงไธ€็•ชๅคšใไฝฟใ‚ใ‚Œใฆใ„ใŸ่กจ็พใงใ™ใ€‚ๅ…ƒใฏใ‚ชใƒซใ‚ดใƒผใƒซใฎใŸใ‚ใฎ่จ˜่ฟฐๆณ•ใงใ™ใŒใ€ใ„ใ‚ใ‚†ใ‚‹DAW(Logicใ‚„Ableton Liveใฎใ‚ˆใ†ใช้Ÿณๆฅฝๅˆถไฝœ็”จใฎใ‚ฝใƒ•ใƒˆใ‚ฆใ‚งใ‚ข)ใงMIDIใ‚’่จ˜่ฟฐใ™ใ‚‹ๆ–นๆณ•ใจใ—ใฆๅฎš็€ใ—ใŸ่จ˜่ฟฐๆณ•ใงใ™ใ€‚x่ปธๆ–นๅ‘ใซๆ™‚้–“ใ€y่ปธๆ–นๅ‘ใซใจใ‚Šใ†ใ‚‹ใ™ในใฆใฎ้Ÿณใฎใƒ”ใƒƒใƒใ‚’ใƒžใƒƒใƒ”ใƒณใ‚ฐใ—ใŸใ‚ฐใƒชใƒƒใƒ‰ใจใ—ใฆ้Ÿณๆฅฝใ‚’ๆ‰ฑใ„ใพใ™ใ€‚้Ÿณๆฅฝใ‚’่กŒๅˆ—(ใƒžใƒˆใƒชใ‚ฏใ‚น)ใฎใ‚ˆใ†ใซๆ‰ฑใ†ใ“ใจใŒใงใใ‚‹ใฎใง้ƒฝๅˆใŒใ„ใ„ใ€ใจใ„ใ†ใฎใฏใ€ใƒ—ใƒญใ‚ฐใƒฉใƒŸใƒณใ‚ฐใง้…ๅˆ—ใ‚’ๆ‰ฑใฃใŸใ“ใจใŒใ‚ใ‚‹ไบบใงใ‚ใ‚Œใฐใ€ใชใ‚“ใจใชใๆƒณๅƒใŒใคใใจๆ€ใ„ใพใ™ใ€‚ ใŸใ ใ—ใ€้€ฃ็ถšใ™ใ‚‹ๅ…ซๅˆ†้Ÿณ็ฌฆใตใŸใคใจๅ››ๅˆ†้Ÿณ็ฌฆไธ€ใคใฎๅŒบๅˆฅใŒใคใ‹ใชใ„ใจใ„ใฃใŸใƒžใ‚คใƒŠใ‚น้ขใ‚‚ใ‚ใ‚Šใ€ใใ‚Œใซๅฏพใ™ใ‚‹ๆ”นๅ–„็ญ–ใ‚‚ๆๆกˆใ•ใ‚Œใฆใ„ใพใ™ใ€‚ https://en.wikipedia.org/wiki/Piano_roll#/media/File:Computer_music_piano_roll.png 2.3. โ€” ใƒ†ใ‚ญใ‚นใƒˆ Text ใ™ใ“ใ—ๆ„ๅค–ใงใ™ใŒใ€ใƒ†ใ‚ญใ‚นใƒˆใง้Ÿณๆฅฝใ‚’ๆ‰ฑใ†ใจใ„ใ†็ ”็ฉถใ‚‚ใฟใ‚‰ใ‚Œใพใ—ใŸใ€‚ใใฎๅคšใใฏABC่จ˜ๆณ•ใจใ„ใ†่จ˜ๆณ•ใ‚’ไฝฟใฃใฆใ„ใพใ™ใ€‚๏ผˆABC่จ˜ๆณ•ใซใฏๅ˜ๆ—‹ๅพ‹ใฎ้Ÿณๆฅฝใ—ใ‹ๆ‰ฑใˆใชใ„ใจใ„ใ†้™็•ŒใŒใ‚ใ‚Šใพใ™) ใ€‚ไธ‹่จ˜ใฏใ‚ฝใƒผใƒฉใƒณ็ฏ€ใ‚’ABC่จ˜ๆณ•ใงๆ›ธใ„ใŸใ‚‚ใฎใ€‚ 4. โ€” ใ‚ณใƒผใƒ‰ Chord ใ‚ณใƒผใƒ‰้€ฒ่กŒใ‚’็”Ÿๆˆใ™ใ‚‹ใ‚ทใ‚นใƒ†ใƒ ใฎๅ ดๅˆใ€ใ‚ณใƒผใƒ‰ใ‚’ใใฎใพใพๆ–‡ๅญ—ใ‚„ๆ•ฐๅญ—ใง็›ดๆŽฅ่กจ็พ(C, D-, E7 etc)ใ™ใ‚‹ๆ‰‹ๆณ•ใ‚ใ‚‹ใ„ใฏ่ค‡ๆ•ฐใฎ้Ÿณใฎ็ต„ใฟๅˆใ‚ใ›ใจใ—ใฆ่กจ็พใ™ใ‚‹ๆ‰‹ๆณ•ใฎไบŒใคใŒ่€ƒใˆใ‚‰ใ‚Œใพใ™ใ€‚ ไบŒใคใ‚ใฎๆ‰‹ๆณ•ใฎไพ‹ใจใ—ใฆๅพŒ่ฟฐใ™ใ‚‹DeepBachใ‚ทใ‚นใƒ†ใƒ ใงใฏๆฌกใฎใ‚ˆใ†ใช่กจ็พใŒๅ–ใ‚‰ใ‚Œใฆใ„ใพใ™. ๅŒบๅˆ‡ใ‚Šๆ–‡ๅญ—|||ใฎ้–“ใซๆŒŸใพใ‚Œใฆใ„ใ‚‹ใฎใŒไธ€ใคใฎๅ’Œ้Ÿณใงใ€ใƒ”ใƒƒใƒใจใƒ•ใ‚งใƒซใƒžใƒผใ‚ฟใฎๆœ‰็„กใฎไบŒๅ€คใง่กจ็พใ•ใ‚Œใพใ™ใ€‚ 5. โ€” ใƒชใƒผใƒ‰ใƒปใ‚ทใƒผใƒˆ Lead Sheat Jazzใ‚„Popsใฎไธ–็•Œใงๅบƒใใคใ‹ใ‚ใ‚Œใ‚‹่จ˜ๆณ•ใงใ™ใŒใ€็ ”็ฉถใงไฝฟใ‚ใ‚Œใฆใ„ใ‚‹ไพ‹ใฏใใ‚Œใปใฉๅคšใใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ https://en.wikipedia.org/wiki/Lead_sheet#/media/File:Lead-sheet-wikipedia.svg 6. โ€” ใƒชใ‚บใƒ  Rhythm ใƒชใ‚บใƒ ใ‚’ๆ‰ฑใ†็ ”็ฉถใ‚‚ใใ‚Œใปใฉๅคšใใฏใชใ„ใ‚ˆใ†ใงใ™ใ€‚ใƒชใ‚บใƒ ใฎ่กจ็พใจใ—ใฆใฏใ€ๆฅฝๅ™จใฎ็จฎ้กž(ใŸใจใˆใฐใ‚ญใƒƒใ‚ฏใ€ใ‚นใƒใ‚ขใ€ใ‚ฟใƒ ใ€ใƒใ‚คใƒใƒƒใƒˆใ€ใ‚ทใƒณใƒใƒซ)ใชใฉใ‚’้™ๅฎšใ—ใŸไธŠใงใ€ใใฎ้Ÿณใฎๆœ‰็„กใจใ—ใฆ่กจ็พใ™ใ‚‹ๅ ดๅˆใŒๅคšใ„ใ‚ˆใ†ใงใ™ใ€‚ ๆ™‚้–“ใฎ่กจ็พ ้Ÿณๆฅฝใ‚’ๆ‰ฑใ†ไปฅไธŠใ€ๆ™‚้–“ใ‚’ใฉใ†ๆ‰ฑใ†ใ‹ใจใ„ใ†ใฎใ‚‚ๅคงไบ‹ใชใƒใ‚คใƒณใƒˆใซใชใ‚Šใพใ™ใ€‚ๅคงใใๅˆ†ใ‘ใฆใ€ใ‚นใƒ†ใƒƒใƒ—ใƒใ‚คใ‚นใƒ†ใƒƒใƒ—ใง็”Ÿๆˆใ™ใ‚‹ๆ–นๆณ•ใจๅ…จไฝ“ใ‚’ไธ€ๆฐ—ใซ็”Ÿๆˆใ™ใ‚‹ๆ–นๆณ•ใŒ่€ƒใˆใ‚‰ใ‚Œใพใ™ใ€‚ ๆ™‚้–“ใฎ่กจ็พ ใ‚ฐใƒญใƒผใƒใƒซ global ไธ€ใคใฎๆ›ฒ/ใƒ•ใƒฌใƒผใ‚บๅ…จไฝ“ใ‚’ไธ€ๆฐ—ใซๅ‡บๅŠ›ใ™ใ‚‹ๆ–นๆณ•. 
(ใ“ใฎๅ ดๅˆใ€ใƒขใƒ‡ใƒซใฎใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใจใ—ใฆใฏใ€ใ‚ทใƒณใƒ—ใƒซใชFeedforward Networkใ‚„AutoencoderใŒไฝฟใ‚ใ‚Œใ‚‹ใ“ใจใŒๅคšใใ€Recurrent Neural Networkใฏไฝฟใ‚ใ‚Œใพใ›ใ‚“) ๆ™‚้–“ใ‚’็ญ‰ๅˆ†ใ™ใ‚‹ๆ–นๆณ• time step ๆ™‚้–“ใ‚’็ญ‰ๅˆ†ใ™ใ‚‹ๆ–นๆณ•ใงใ™. ใƒ”ใ‚ขใƒŽใƒญใƒผใƒซใฎ่กจ็พใŒใใ‚Œใซใ‚ใŸใ‚Šใพใ™. ้€šๅธธใฏๅญฆ็ฟ’ใƒ‡ใƒผใ‚ฟใฎใชใ‹ใงๆœ€ๅฐใฎๆ™‚้–“ๅ˜ไฝ(ใŸใจใˆใฐ16ๅˆ†้Ÿณ็ฌฆ)ใ‚’ๅ˜ไฝใจใ—ใฆใ€ใใฎๆ•ดๆ•ฐๅ€ใจใ—ใฆๆ™‚้–“ใ‚’ๆ‰ฑใ„ใพใ™ใ€‚ ้Ÿณ็ฌฆใ”ใจใซๆ‰ฑใ†ๆ–นๆณ• note step ไธ€่ˆฌ็š„ใงใฏใชใ„ใงใ™ใŒใ€ๆฑบใพใฃใŸ้•ทใ•ใฎๆ™‚้–“ใฎๅ˜ไฝใ‚’ๆŒใŸใชใ„ๆ–นๆณ•ใ‚‚ใ‚ใ‚Šใพใ™ใ€‚ใŸใจใˆใฐๆœ€่ฟ‘ใฎGoogleใฎPerformance RNNใงใฏใ€้Ÿณใ”ใจใซใƒ”ใƒƒใƒใจๅผทใ•ใซใ‚ใ‚ใ›ใฆใใฎ้•ทใ•ใ‚’็ดฐใ‹ใ(16ๅˆ†้Ÿณ็ฌฆใ€8ๅˆ†้Ÿณ็ฌฆโ€ฆใงใฏใชใ)ๆŒ‡ๅฎšใ—ใฆใ‚จใƒณใ‚ณใƒผใƒ‰ใ™ใ‚‹ใ‚ˆใ†ใซใชใฃใฆใ„ใพใ™(ๆœ‰้™ใฎ้ธๆŠž่‚ขใฎไธญใ‹ใ‚‰ใงใฏใ‚ใ‚Šใพใ™ใŒโ€ฆ)ใ€‚ ๅ…ฅๅŠ›ใฎใ‚จใƒณใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐ Input Encoding ๅ…ฅๅŠ›ใฎ่กจ็พใŒๆฑบใพใฃใŸใจใ—ใฆใ€ใคใŽใซใใ‚Œใ‚’ๅฎŸ้š›ใซใƒ‹ใƒฅใƒผใƒฉใƒซใƒใƒƒใƒˆใƒฏใƒผใ‚ฏใงๆ‰ฑใˆใ‚‹ๅฝขๅผใซใ‚จใƒณใ‚ณใƒผใƒ‰ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ใ“ใ“ใงใ‚‚ใ„ใใคใ‹ใฎๆ–นๅผใŒ่€ƒใˆใ‚‰ใ‚Œใพใ™ใŒใ€ๆฌกใฎไบŒใคใŒไธปๆตใงใ™. ๆ•ฐๅ€คใจใ—ใฆๅ…ฅๅŠ›ใ™ใ‚‹ๆ–นๆณ• one-hot ใƒ™ใ‚ฏใƒˆใƒซใจใ—ใฆๅ…ฅๅŠ›ใ™ใ‚‹ๆ–นๆณ• one-hot vector โ€” MIDIใง52โ€“76ใพใงใฎ2ใ‚ชใ‚ฏใ‚ฟใƒผใƒ–ๅผทใ‚’ๆ‰ฑใ†ใจใ—ใฆใƒŽใƒผใƒˆใƒŠใƒณใƒใƒผ64ใ‚’่กจ็พใ™ใ‚‹ๅ ดๅˆ ๆ•ฐๅ€คใฏใ‚ใ‹ใ‚Šใ‚„ใ™ใ„ใงใ™ใญใ€‚ใƒ”ใƒƒใƒใงใ‚ใ‚Œใฐใ€MIDIใฎใƒ”ใƒƒใƒใฎๆ•ฐๅ€คใ‚’ใใฎใพใพใคใ‹ใฃใŸใ‚Šใ€ใ‚ใ‚‹ใ„ใฏใใ‚Œใ‚’0ใ‹ใ‚‰1ใฎ้–“ใงๆญฃ่ฆๅŒ–ใ—ใฆๆ‰ฑใ„ใพใ™ใ€‚ไบŒใค็›ฎใฎone-hotใƒ™ใ‚ฏใƒˆใƒซใฏๆฉŸๆขฐๅญฆ็ฟ’ใฎไธ–็•Œใงไธ€่ˆฌ็š„ใซไฝฟใ‚ใ‚Œใ‚‹ๆ–นๆณ•ใงใ€ใƒ”ใ‚ขใƒŽใƒญใƒผใƒซใฎ่€ƒใˆๆ–นใ‚’ใใฎใพใพใƒ‡ใƒผใ‚ฟใซใ—ใŸใ‚‚ใฎใจ่€ƒใˆใฆใใ ใ•ใ„ใ€‚ใจใ‚Šใ†ใ‚‹ๅ€คใŒไพ‹ใˆใฐ0ใ‹ใ‚‰9ใพใงใฎ10ๅ€‹ใ‚ใ‚‹ใจใ—ใŸใ‚‰ใ€10ๆฌกๅ…ƒใฎใƒใ‚คใƒŠใƒชใฎใƒ™ใ‚ฏใƒˆใƒซใจใ—ใฆ่€ƒใˆใ€0ใฏ[1,0,0,0,0,โ€ฆ], 1ใฏ[0,1,0,0,0โ€ฆ], 2ใฏ[0,0,1,0,0โ€ฆ] ใจ่จ€ใ†ๅ…ทๅˆใซๅฏพๅฟœใ™ใ‚‹ๆฌกๅ…ƒใฎใฟ1ใ€ใใ‚Œไปฅๅค–ใŒ0(ใฒใจใคใ ใ‘1ใชใฎใงone-hotใจใ„ใ†ๅๅ‰ใŒใ‚ใ‚Šใพใ™.่ค‡ๆ•ฐใฎ1ใ‚’่จฑใ™k-hotใ‚จใƒณใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐใ‚’ไฝฟใฃใฆใ„ใ‚‹่ซ–ๆ–‡ใ‚‚ใ‚ใ‚Šใพใ™)ใงใ‚ใ‚‹ใ‚ˆใ†ใชใƒ™ใ‚ฏใƒˆใƒซใจใ—ใฆ่€ƒใˆใพใ™ใ€‚ ๆ•ฐๅ€คใ‚’็›ดๆŽฅไฝฟใ†ๆ‰‹ๆณ•ใฏ้Ÿณๅฃฐไฟกๅทใ‚’ๆ‰ฑใ†ๅ ดๅˆใ‚’้™คใ„ใฆใ‚ใšใ‚‰ใ—ใใ€ใปใจใ‚“ใฉใŒone-hotใƒ™ใ‚ฏใƒˆใƒซใ‚’ไฝฟใฃใŸๅฎŸ่ฃ…ใซใชใฃใฆใ„ใพใ™ใ€‚ๆ•ฐๅ€คใ‚’็›ดๆŽฅๆ‰ฑใ†ๆ‰‹ๆณ•ใŒใ‚ใพใ‚Šไฝฟใ‚ใ‚Œใชใ„ใฎใฏใฏใ€็นฐใ‚Š่ฟ”ใ—่กŒใ†ๆ•ฐๅ€ค่จˆ็ฎ—ใฎ็ตๆžœใ€็ฒพๅบฆใŒ่ฝใกใ‚‹ = ใƒŽใ‚คใ‚บใŒไน—ใ‚Šใ‚„ใ™ใ„ใฎใŒไธ€ๅ› ใงใ™(ใ‚ขใƒŠใƒญใ‚ฐไฟกๅทใจใƒ‡ใ‚ธใ‚ฟใƒซไฟกๅทใฎ้•ใ„ใ‚’ๆƒณๅƒใ—ใฆใใ ใ•ใ„)ใ€‚ใพใŸใ€one-hotใƒ™ใ‚ฏใƒˆใƒซใ‚’ไฝฟใ†ใ“ใจใงใ€้Ÿณๆฅฝใฎ็”Ÿๆˆใ‚’ใ‚ใ‚‹็จฎใฎใ‚ฏใƒฉใ‚ทใƒ•ใ‚ฃใ‚ฑใƒผใ‚ทใƒงใƒณใจใ—ใฆๆ‰ฑใˆใ‚‹ใ“ใจใซใชใ‚Šใพใ™ใ€‚ๆฌกใซๆฅใ‚‹้Ÿณ็ฌฆใจใ—ใฆไธ€็•ชใ‚‚ใฃใจใ‚‚ใ‚‰ใ—ใ„ใ‚‚ใฎใ‚’้ธๆŠž่‚ขใฎไธญใ‹ใ‚‰้ธใถใจใ„ใ†ๅ•้กŒใซ็ฝฎๆ›ใงใใพใ™ใ€‚ ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆ Datasets Deep Learningใฎใƒขใƒ‡ใƒซใฎๅญฆ็ฟ’ใซใฏๅฝ“็„ถๅคง้‡ใฎใƒ‡ใƒผใ‚ฟใŒๅฟ…่ฆใงใ™ใ€‚่‡ช็”ฑใซๅญฆ็ฟ’ใซไฝฟใˆใ‚‹้Ÿณๆฅฝใƒ‡ใƒผใ‚ฟใฎๆฌ ๅฆ‚ใŒใ“ใฎๅˆ†้‡Žใฎ็™บๅฑ•ใ‚’้˜ปๅฎณใ—ใฆใใŸ้ขใŒใ‚ใ‚Šใพใ™ใŒใ€ใ“ใ“ใซใใฆๅฐ‘ใ—ใšใคใƒ‘ใƒ–ใƒชใƒƒใ‚ฏใชใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใŒๅข—ใˆใฆใใฆใ„ใ‚‹ใ‚ˆใ†ใงใ™ใ€‚ ใจใ„ใฃใฆใ‚‚โ€ฆ 
็ญ†่€…(ๅพณไบ•)ใฎๆ‰€ๆ„Ÿใจใ—ใฆใ€ใใ‚Œใ‚‰ใฎ้Ÿณๆฅฝใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใฎๅคšใใฏใ€ใ‚ฏใƒฉใ‚ทใƒƒใ‚ฏ้Ÿณๆฅฝ็ญ‰ใซๅใฃใฆใ„ใ‚‹ใ‚ˆใ†ใซๆ„Ÿใ˜ใพใ™๏ผˆ่‘—ไฝœๆจฉใŒๅˆ‡ใ‚Œใฆใ„ใ‚‹ใ‹ใ‚‰ใจใ„ใ†็†็”ฑใŒๅคงใใ„ใจใฏๆ€ใ†ใฎใงใ™ใŒ)ใ€‚ใพใŸใใฎ่ฆๆจกใ‚‚ใ€ๆ•ฐ็™พๆ›ฒ็จ‹ๅบฆใจใ„ใฃใŸใ‚‚ใฎใŒใปใจใ‚“ใฉใงใ€ไฝ•็™พไธ‡ใจใ„ใฃใŸๅ˜ไฝใฎใƒ‡ใƒผใ‚ฟใŒใ‚ใ‚‹็”ปๅƒ่ช่ญ˜็”จใฎใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใซๆฏ”ในใฆ้‡ใซใŠใ„ใฆใ‚‚่ฆ‹ๅŠฃใ‚Šใ—ใพใ™ใ€‚่‰ฏ่ณชใงๅคง่ฆๆจกใช้Ÿณๆฅฝใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ‚’ใฉใ†ๆง‹็ฏ‰ใ™ใ‚‹ใ‹ใ€ไปŠๅพŒๆœŸๅพ…ใ•ใ‚Œใ‚‹็ ”็ฉถ้ ˜ๅŸŸใฎไธ€ใคใงใฏใชใ„ใงใ—ใ‚‡ใ†ใ‹ใ€‚(ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใฎใƒชใƒณใ‚ฏ้›†ใ‚’ใ‚ทใ‚งใ‚ขใ—ใฆใŠใใพใ™) Audio Content Analysis This is yet another attempt of maintaining a list of datasets directly related to MIR. Other lists that I have foundโ€ฆwww.audiocontentanalysis.org ็งป่ชฟ Transposition ่ฆๆจกใŒๅฐใ•ใ„้Ÿณๆฅฝ็”จใฎใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใฎๆฌ ็‚นใ‚’่ฃœใ†ใŸใ‚ใซใ‚ˆใ็”จใ„ใ‚‰ใ‚Œใ‚‹ใฎใŒใ€็งป่ชฟ Transpositionใงใ™(MIDIใ‚„ใƒ”ใ‚ขใƒŽใƒญใƒผใƒซใฎใ‚ˆใ†ใซ่จ˜ๅทใง่กจใ•ใ‚ŒใŸ้Ÿณๆฅฝใฎๅ ดๅˆใซ้™ใ‚‹) ใ€‚ ็”ปๅƒ่ช่ญ˜ใƒขใƒ‡ใƒซใฎๅญฆ็ฟ’ๆ™‚ใซใ€ๅญฆ็ฟ’ใƒ‡ใƒผใ‚ฟใฎ็”ปๅƒใ‚’ใ™ใ“ใ—ๅ›ž่ปขใƒปๆ‹กๅคงใ€ใ‚ใ‚‹ใ„ใฏๅ่ปขใ—ใŸใ‚Šใ€ใƒŽใ‚คใ‚บใ‚’ๆ„ๅ›ณ็š„ใซ่ผ‰ใ›ใ‚‹ใ“ใจใงใƒ‡ใƒผใ‚ฟใ‚’ใ€Œๆฐดๅข—ใ—ใ€ใ™ใ‚‹ๆ‰‹ๆณ•ใ€Data AugmentationใŒใ‚ˆใ็”จใ„ใ‚‰ใ‚Œใพใ™ใ€‚ใใ‚ŒใจๅŒๆง˜ใซใ€ๅญฆ็ฟ’ใƒ‡ใƒผใ‚ฟใซใ‚ใ‚‹ๆ›ฒใ‚’ๅˆฅใฎใ‚ญใƒผใซ็งป่ชฟใ™ใ‚‹ใ“ใจใงใ€ใƒ‡ใƒผใ‚ฟ้‡ใ‚’ๅข—ใ‚„ใ™ใจใ„ใ†ๆ–นๆณ•ใŒใ‚ˆใใจใ‚‰ใ‚Œใฆใ„ใพใ™ใ€‚ใ“ใ†ใ™ใ‚‹ใ“ใจใงใƒ‡ใƒผใ‚ฟ้‡ใ‚’ๅข—ใ‚„ใ™ใ ใ‘ใงใชใใ€็‰นๅฎšใฎใ‚ญใƒผใฎใฟใซใจใ‚‰ใ‚ใ‚Œใšๅน…ๅบƒใๅญฆ็ฟ’ใงใใ‚‹ใจใ„ใฃใŸใƒกใƒชใƒƒใƒˆใŒใ‚ใ‚Šใพใ™ใ€‚ ใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃ Architecture ๅ…ฅๅ‡บๅŠ›ใซใคใ„ใฆ่€ƒๆ…ฎๆœซในใ็‚นใŒใ‚ใ‹ใฃใŸใจใ“ใ‚ใงใ€่‚ๅฟƒใฎๅญฆ็ฟ’ใฎไธญ่บซใ‚’ใฟใฆใ„ใใพใ—ใ‚‡ใ†ใ€‚ๅฎŸ้š›ใซใฏใฉใ†ใ„ใฃใŸDeep Learningใฎใƒขใƒ‡ใƒซใŒ็”จใ„ใ‚‰ใ‚Œใฆใ„ใ‚‹ใฎใงใ—ใ‚‡ใ†ใ‹ใ€‚ 1. Multilayer Neural Network / Feedforward Neural Network ไธ€็•ชไธ€่ˆฌ็š„ใชใƒ‹ใƒฅใƒผใƒฉใƒซใƒใƒƒใƒˆใƒฏใƒผใ‚ฏใงใ™ใญใ€‚ๅพŒใซ่ฟฐในใ‚‹RNNใจใใ‚‰ในใฆๆ™‚้–“ใฎ็ตŒ้Ž(ๆ™‚็ณปๅˆ—)ใ‚’่€ƒๆ…ฎใ—ใชใ„ใฎใงใ€ไธ€้Ÿณไธ€้Ÿณ็”Ÿๆˆใ™ใ‚‹ใฎใงใฏใชใใ‚ใ‚‹ๅ…ฅๅŠ›ใซๅฏพใ—ใฆๅ…จไฝ“ใ‚’ๅ‡บๅŠ›ใ™ใ‚‹ใ‚ˆใ†ใชไฝฟใ„ๆ–นใซใชใ‚Šใพใ™ใ€‚ http://ufldl.stanford.edu/tutorial/supervised/MultiLayerNeuralNetworks/ 2. Recurrent Neural Network (RNN) ๅ†ๅธฐๅž‹ใƒ‹ใƒฅใƒผใƒฉใƒซใƒใƒƒใƒˆใƒฏใƒผใ‚ฏ. ๅ‡บๅŠ›ใ‚’ๅ…ฅๅŠ›ใซๆˆปใ™ๅ†ๅธฐ็š„ใชใ‚ณใƒใ‚ฏใ‚ทใƒงใƒณใ‚’่จญใ‘ใ‚‹ใ“ใจใงใ€็ตๆžœ็š„ใซๆ™‚็ณปๅˆ—ใƒ‡ใƒผใ‚ฟใ‚’ๆ‰ฑใˆใ‚‹ใ‚ˆใ†ใซใ—ใŸใƒ‹ใƒฅใƒผใƒฉใƒซใƒใƒƒใƒˆใƒฏใƒผใ‚ฏ. 1ใจใฏ้€†ใซไธ€้Ÿณไธ€้Ÿณใ‚นใƒ†ใƒƒใƒ—ใƒใ‚คใ‚นใƒ†ใƒƒใƒ—ใง็”Ÿๆˆใ™ใ‚‹ใ‚ˆใ†ใชไฝฟใ‚ใ‚Œๆ–นใ‚’ใ—ใพใ™ใ€‚ใใฎ็™บๅฑ•็ณปใงใ‚ใ‚‹LSTM(Long Short-Term Memory)ใจใ—ใฆใ€RNNใฏ้Ÿณๆฅฝ็”Ÿๆˆใซๅบƒใ็”จใ„ใ‚‰ใ‚Œใฆใ„ใพใ™ใ€‚ https://qiita.com/kiminaka/items/87afd4a433dc655d8cfd 3. 
3. Autoencoder: As the Japanese translation "self-decoder" suggests, a neural network trained to produce output equal to its input — to mimic the input. Usually a hidden layer of smaller dimension than the input and output is used (Layer 2 in the middle of the figure). This encourages the network to represent the training data in fewer dimensions — to learn the essential factors of variation within the data. Autoencoders are often used in musical contexts as well, where they can automatically extract musical features that people used to craft by hand. As architectures close in spirit to the autoencoder, the Restricted Boltzmann Machine (RBM) and the Variational Autoencoder (VAE) are also used. http://ufldl.stanford.edu/tutorial/unsupervised/Autoencoders/

4. Convolutional neural network (CNN): An architecture widely used for tasks such as image recognition, but until now not used much for music generation. By convolving along the time axis it can handle time-series data, but compared with an LSTM, which can learn long-range dependencies, the temporal information a CNN can convolve over is limited. However, WaveNet succeeded in learning long-range dependencies with a technique called dilated convolution (described later), so CNNs look likely to become more commonly used. The concept of "convolution" — http://ufldl.stanford.edu/tutorial/supervised/FeatureExtractionUsingConvolution/ CNN architecture — https://devblogs.nvidia.com/parallelforall/deep-learning-nutshell-core-concepts/

5. Generative adversarial networks (GAN): The acronym GAN is probably more familiar than the Japanese translation. No discussion of today's hot architectures can avoid GANs (although, unlike CNNs or RNNs, a GAN is more of a meta-level architecture). The groundbreaking idea is to pit two networks against each other: a Generator that learns to produce data close to the training data, and a Discriminator that learns to tell the generated "fakes" from the real data (the training data).

In the image domain GANs are widely used for tasks such as generating convincingly real face photographs or translating images as in pix2pix, but applications to music are still few. One noted reason is that GANs are hard to train stably (one of the Generator/Discriminator pair becomes too good and the other can no longer fool it). As research on stabilizing GAN training advances, music generation with GANs looks set to become a hot topic. GAN architecture — https://www.slideshare.net/xavigiro/deep-learning-for-computer-vision-generative-models-and-adversarial-training-upc-2016
6. Reinforcement learning (RL): A technique in which a model (agent) learns by repeated trial and error based on feedback (reward) from its environment. It has been used for a long time, for example to learn robot motions, but recently research on deep-learning-based reinforcement learning has advanced — AlphaGo being the most famous example. Examples of applying RL to music are still few, but applications are expected: one can imagine the generated music being gradually brushed up, or navigated, while receiving feedback from listeners or users (for example, a player performing together with the AI). Reinforcement learning — https://en.wikipedia.org/wiki/Reinforcement_learning#/media/File:Reinforcement_learning_diagram.svg

7. Combinations of the above: Much actual research uses combinations of the architectures above. Some examples: RNN + Autoencoder; RNN-RBM; Convolutional GAN; RNN GAN.

Systems: From here on, let us look at concrete studies and systems, starting with those using the simplest feedforward networks.

MiniBach System — 2017. Hadjeres, G., Pachet, F., & Nielsen, F. (2016). DeepBach: a Steerable Model for Bach Chorales Generation. Retrieved from http://arxiv.org/abs/1612.01010

A system that learns Bach's multi-voice chorales and generates a Bach-chorale-style accompaniment for a given melody, using a plain feedforward network. Inputs and outputs are piano rolls coded as one-hot vectors, with a sixteenth note as the smallest time unit. It was trained on a dataset of 356 pieces. The network is simple: around 2,800 input and output nodes and a hidden layer of 200 nodes. Feed in a sequence of notes, and the accompanying melody is output — a simple structure. This is a simplified version of the DeepBach system described later. (I suppose the name is a pun on mini-batch…) (Left) MiniBach architecture / (right) an example score generated by MiniBach.

Blues Melody Generation — 2002. Eck, D., & Schmidhuber, J. (n.d.). A First Look at Music Composition using LSTM Recurrent Neural Networks. Retrieved from http://people.idsia.ch/~juergen/blues/IDSIA-07-02.pdf

This one uses LSTMs to output melody and chords. The architecture takes as input and output 25 nodes — 13 pitches for the melody plus 12 possible chords — with a hidden layer of 8 LSTM units in total, 4 for melody and 4 for chords. (As a preliminary stage, the paper also tests a model that generates only melodies.)

The melody and chord input/output nodes are fully connected, and in addition only the chord LSTMs are also connected to the melody output layer; the authors argue this is what makes chord-based melody generation possible. The training data was blues.
็”Ÿๆˆๆ™‚ใซใฏใ€ใ‚ทใƒผใƒ‰ใจใชใ‚‹ใƒ”ใƒƒใƒ/ใ‚ณใƒผใƒ‰ใ‚’ไธ€ใคๅ…ฅๅŠ›ใ™ใ‚‹ใจใ“ใ‚ใ‹ใ‚‰ใ€ใ‚ใจใ‚’็ถšใ‘ใ•ใ›ใ‚‹ๅ ดๅˆใจใ€ๆ•ฐๅฐ็ฏ€ใฎๅ…ฅๅŠ›ใ‚’ๅ…ฅๅŠ›ใ™ใ‚‹ๆ–นๆณ•ใ€ใใ‚Œใžใ‚Œใ‚’่ฉฆใ—ใฆใ„ใพใ™ใ€‚ใ„ใšใ‚Œใ‚‚ใƒ–ใƒซใƒผใ‚นใ‚‰ใ—ใ„ใƒ•ใ‚ฃใƒผใƒชใƒณใ‚ฐใŒๅ†็พใงใใŸใจใ—ใฆใ„ใพใ™ใ€‚ไธ€ๆ–นใงใ€ใ‚ณใƒผใƒ‰ใฎๆƒ…ๅ ฑใชใฉใ‚’ๆฅฝ่ญœใ‹ใ‚‰่ชญใฟ่งฃใๆ‰‹ไฝœๆฅญใŒๅฟ…่ฆใซใชใ‚‹ใ€ๅŒใ˜ๅ…ฅๅŠ›ใซๅฏพใ—ใฆใฏๅธธใซๅŒใ˜ๅ‡บๅŠ›ใซใ—ใ‹ใชใ‚‰ใชใ„(deterministic)ใจใ„ใฃใŸๆฌ ็‚นใ‚‚ๆŒ‡ๆ‘˜ใ•ใ‚Œใฆใ„ใพใ™. Composing Music with LSTM Recurrent Networks -็”Ÿๆˆไพ‹ Composing Music with LSTM Recurrent Networks Here are some multimedia files related to the LSTM music composition project. The files are in MP3 (hi-resolutionโ€ฆpeople.idsia.ch DeepHear 2016 Felix Sun. (n.d.). DeepHear โ€” Composing and harmonizing music with neural networks. Retrieved November 21, 2017, from https://fephsun.github.io/2015/09/01/neural-music.html ๆฌกใซAutoencoderใจFeedforward Networkใ‚’ใคใ‹ใฃใŸใ‚ทใ‚นใƒ†ใƒ ใ€DeepHearใ‚’็ดนไป‹ใ—ใพใ™. Autoencoderใซใ‚ˆใ‚‹้Ÿณๆฅฝใฎ้ซ˜ๆฌกใฎ็‰นๅพดใ‚’ๆŠฝๅ‡บใ€‚Autoencoderใฎ้š ใ‚Œๅฑคใฎๅ‡บๅŠ›ใซใ‚ใŸใ‚‹ไนฑๆ•ฐใ‹ใ‚‰ใ‚’ใ€ใƒ‡ใ‚ณใƒผใƒ€ใƒผใซใ‚ใŸใ‚‹Feedforward Networkใซๅ…ฅๅŠ›ใ™ใ‚‹ใ“ใจใงใ€ๆ–ฐใŸใช้Ÿณๆฅฝใ‚’็”Ÿๆˆใ—ใพใ™ใ€‚ ๅญฆ็ฟ’ใซไฝฟใฃใŸใƒ‡ใƒผใ‚ฟใฏใƒฉใ‚ฐใ‚ฟใ‚คใƒ ใจๅ‘ผใฐใ‚Œใ‚‹ใ‚ธใƒฃใ‚บใฎๆบๆตใซใชใฃใŸ้Ÿณๆฅฝใ‚’600ๅฐ็ฏ€ใปใฉใ€‚16ๅˆ†้Ÿณ็ฌฆใ‚’ๆœ€ๅฐๅ˜ไฝใจใ—ใฆใ€4ๅฐ็ฏ€ๅ˜ไฝ(ใ—ใŸใŒใฃใฆ64ใ‚ฟใ‚คใƒ ใ‚นใƒ†ใƒƒใƒ—)ใงๅญฆ็ฟ’ใ—ใพใ—ใŸใ€‚80ใปใฉใฎใƒ”ใƒƒใƒใ‚’ๅ–ใ‚Šใ†ใ‚‹ๅ€คใจใ—ใฆๅˆถ้™ใ™ใ‚‹ใ“ใจใงใ€64x80ใง็ด„5000ๆฌกๅ…ƒใฎone-hotใƒ™ใ‚ฏใƒˆใƒซใงๅ…ฅๅ‡บๅŠ›ใ‚’ๆ‰ฑใ„ใพใ™ใ€‚autoencoderใฎๅญฆ็ฟ’ใฏใ€ใ„ใ‚ใ‚†ใ‚‹pre-trainingๅŒๆง˜ใซใƒฌใ‚คใƒคใƒผใ”ใจใซ่กŒใฃใŸใใ†ใงใ™ใ€‚ DeepHear Architecture ๅญฆ็ฟ’ใŒ็ต‚ใ‚ใ‚‹ใจใ€ไธญๅคฎใฎๅฑค(ใ‚จใƒณใ‚ณใƒผใƒ€ใซใ‚ˆใฃใฆ16ๆฌกๅ…ƒใซใพใงๆฌกๅ…ƒใŒๅ‰Šๆธ›ใ•ใ‚Œใฆใ„ใพใ™)ใซใƒฉใƒณใƒ€ใƒ ใชใ‚ทใƒผใƒ‰ใ‚’ๅ…ฅๅŠ›ใ™ใ‚‹ใ“ใจใงใƒกใƒญใƒ‡ใ‚ฃใƒผใŒ็”Ÿๆˆใ•ใ‚Œใพใ™ใ€‚ ็”Ÿๆˆใ•ใ‚ŒใŸใƒกใƒญใƒ‡ใ‚ฃใƒผใซ้–ขใ—ใฆใฏใ€ไธญๅคฎใฎใƒฌใ‚คใƒคใƒผใฎๆฌกๅ…ƒใŒๅฐใ•ใ™ใŽใŸ(16ๆฌกๅ…ƒ)ใŒๆ•…?ใซ็”Ÿๆˆใ•ใ‚Œใ‚‹้ŸณๆฅฝใŒใ‚‚ใจใฎๅญฆ็ฟ’ๅ…ƒใฎใƒ‡ใƒผใ‚ฟใซไผผ้€šใฃใฆใ—ใพใฃใŸใ€ใจ่‘—่€…ใ‚‰ใฏๅˆ†ๆžใ—ใฆใ„ใพใ™ใ€‚ https://soundcloud.com/deeplearning-music/deephear DeepHear - Composing and harmonizing music with neural networks As you can tell, it's not perfect at making beautiful music yet, but it clearly understands basic chords and rhythms asโ€ฆfephsun.github.io deepAutoController 2014 Sarroff, A. M., & Casey, M. (2014). Musical Audio Synthesis Using Autoencoding Neural Nets. Proceedings of the International Computer Music Conference, (September), 14โ€“20. http://www.cs.dartmouth.edu/sarroff/papers/sarroff2014a.pdf. ไธŠ่จ˜ใฎDeepHearใซไผผ้€šใฃใŸใ‚ทใ‚นใƒ†ใƒ ใงใ™ใŒใ€ใ‚นใƒšใ‚ฏใƒˆใƒซใ‚’้€šใ—ใฆใ‚ชใƒผใƒ‡ใ‚ฃใ‚ชไฟกๅทใ‚’็›ดๆŽฅๆ‰ฑใฃใฆใ„ใ‚‹ใฎใŒ็‰นๅพดใงใ™ใ€‚ใƒฉใƒณใƒ€ใƒ ใชใ‚ทใƒผใƒ‰ใ‹ใ‚‰ใ ใ‘ใงใชใใ€ใƒฆใƒผใ‚ถใ‚คใƒณใ‚ฟใƒ•ใ‚งใƒผใ‚นใ‚’้€šใ—ใฆใƒฆใƒผใ‚ถใŒ็›ดๆŽฅๅ…ฅๅŠ›ใ•ใ‚Œใ‚‹ใ‚ทใƒผใƒ‰ใฎๅ€คใ‚’ใ‚ณใƒณใƒˆใƒญใƒผใƒซใงใใ‚‹ใ‚ˆใ†ใซใ—ใฆใ„ใพใ™ใ€‚ๅญฆ็ฟ’ใซไฝฟใฃใŸๆ›ฒใฏ10ใฎใ‚ธใƒฃใƒณใƒซใฎ8000ๆ›ฒใปใฉใงใ™ใ€‚ไธ‹่จ˜ใฎใƒ‡ใƒขใ‚’่ฆ‹ใ‚‹้™ใ‚Šๅ‡บ้Ÿณใฏใ‹ใชใ‚ŠๅฎŸ้จ“็š„?!ใงใ™ใญใ€‚ AutoController ใƒ‡ใƒข Musical Audio Synthesis Using Autoencoding Neural Nets With an optimal network topology and tuning of hyperparameters, artificial neural networks (ANNs) may be trained toโ€ฆwww.cs.dartmouth.edu DeepBach โ€” 2016 Hadjeres, G., Pachet, F., & Nielsen, F. (2016). DeepBach: a Steerable Model for Bach Chorales Generation. 
Retrieved from http://arxiv.org/abs/1612.01010

An extended version of the MiniBach system already introduced. It likewise handles Bach chorales, but is distinguished by combining feedforward networks with LSTMs. Moreover, two LSTMs (200 units each) are prepared: for a given time point, one reads the notes/chords immediately before it, and the other reads ahead — taking in the information after that time point in the reverse direction. The outputs of these two are merged, and the next note is chosen through a feedforward network. (Since the target is four-voice chorales, there are four of these networks.) DeepBach architecture.

When 1,200 listeners were asked to judge the generated music, the result was that it was hard to distinguish from genuine Bach. Polyphonic music generation in the style of Bach — We developed a model of polyphonic music generation, which learns to compose chorales in the style of Bach. This model… www.flow-machines.com

Celtic Melody Generation — 2016. Bob L. Sturm and Joao Felipe Santos. The endless traditional music session, accessed on 21/12/2016. http://www.eecs.qmul.ac.uk/%7Esturm/research/RNNIrishTrad/

Another study using RNNs: a system that generates Celtic-style folk melodies. LSTMs with 512 nodes per layer learn scores expressed in ABC notation. Inputs and outputs are encoded as one-hot vectors whose dimension equals the number of characters needed to represent the target tunes in ABC notation. The Endless Traditional Music Session — www.eecs.qmul.ac.uk

Hexahedria Polyphony Generation — 2015. Daniel Johnson. (n.d.). Composing Music With Recurrent Neural Networks · hexahedria. Retrieved November 21, 2017, from http://www.hexahedria.com/2015/08/03/composing-music-with-recurrent-neural-networks/

A study that generates polyphonic melodies with a unique architecture combining feedforward networks and RNNs. It has four hidden layers: the first two are recurrent layers (300 nodes each), the remaining two are feedforward layers (100 and 50 nodes). The aim is for the RNN to learn the relationships along the time axis, and for the feedforward layers to learn the relationships among notes sounding at the same time — that is, chords. Hexahedria Polyphony Generation — architecture.

Inputs and outputs are in piano-roll form using MIDI pitch values, with supplementary information added to the input, such as whether that pitch class (C, D, etc.) was just played and the position within the bar. Training used the MIDI data of the Classical Piano Midi Page. Example of a generated piece: Composing Music With Recurrent Neural Networks — (Update: A paper based on this work has been accepted at EvoMusArt 2017! See here for more details.) It's hard not to… www.hexahedria.com

VRAE Video Game Melody Generation — 2014. Fabius, O., & van Amersfoort, J. R. (2014). Variational Recurrent Auto-Encoders. Retrieved from https://arxiv.org/pdf/1412.6581.pdf

An attempt to generate TV-game-style music with VRAE, an original architecture that builds RNNs (LSTMs) into a variational autoencoder.
RNNใ‚’ไบŒใค็”จๆ„ใ—ใ€ใใ‚Œใžใ‚Œใ‚’ใ‚จใƒณใ‚ณใƒผใƒ€ใ€ใƒ‡ใ‚ณใƒผใƒ€ใจใ—ใฆๆ‰ฑใ„ใพใ™ใ€‚ๅ…ฅๅŠ›ใฎใƒ•ใƒฌใƒผใ‚บใ‚’ไธ€้€šใ‚Šๅ…ฅๅŠ›ใ—ใŸใจใใฎใ‚จใƒณใ‚ณใƒผใƒ€ใฎใ‚ขใ‚ฆใƒˆใƒ—ใƒƒใƒˆใจ้š ใ‚Œๅฑคใฎ็Šถๆ…‹ใ‚’ใƒ‡ใ‚ณใƒผใƒ€ใซๅ…ฅๅŠ›ใ€ใใฎๅ‡บๅŠ›ใŒๅ…ฅๅŠ›ใจ่ฟ‘ใใชใ‚‹ใ‚ˆใ†ใซCross-Entropyใฎๆœ€ๅฐๅŒ–ใ‚’ๅ›ณใ‚Šใพใ™ใ€‚ใ“ใ“ใงใ‚จใƒณใ‚ณใƒผใƒ€ใฎๅ‡บๅŠ›ใ‚’VAEใฎ็ขบ็Ž‡ๅˆ†ๅธƒใฎใƒ‘ใƒฉใƒกใƒผใ‚ฟ(ฮผ, ฯƒ)ใจใ—ใฆๆ‰ฑใ†ใฎใŒใƒใ‚คใƒณใƒˆใงใ™(ไธ‹ใฎๅ›ณใฎz)ใ€‚ VRAE ใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃ โ€” https://www.slideshare.net/KentaOono/vaetype-deep-generative-models ๅญฆ็ฟ’ใซไฝฟใฃใŸใฎใฏ8ใค(!!)ใฎๆœ‰ๅใชใ‚ฒใƒผใƒ ้ŸณๆฅฝใฎMIDI. 49ใฎใƒ”ใƒƒใƒใฎใฟใ‚’ใ‚ใคใ‹ใฃใฆใ„ใ‚‹ใใ†ใงใ™ใ€‚็”Ÿๆˆใ•ใ‚ŒใŸใ‚‚ใฎใ‚’่žใใจใ€ใ‹ใชใ‚Š้Žๅญฆ็ฟ’ใ—ใฆใ„ใ‚‹ใ‚ˆใ†ใซใ‚‚ๆ€ใˆใ‚‹ใฎใงใ™ใŒโ€ฆไฝ•ใ‚’ๅญฆ็ฟ’ใ—ใŸใฎใ‹ใ™ใใซใ‚ใ‹ใฃใฆใ—ใพใ„ใพใ™ใญใ€‚ VRAE Video Game Melody Generation โ€” ใ‚ตใƒณใƒ—ใƒซ Maris et alโ€™s โ€” Rhythm Generation system โ€” 2017 Makris, D., Kaliakatsos-papakostas, M., & Karydis, I. (n.d.). Combining LSTM and Feed Forward Neural Networks for Conditional Rhythm Composition. ใ“ใฎใ‚ตใƒผใƒ™ใ‚คใฎไธญใงๅ”ฏไธ€ใฎใƒชใ‚บใƒ ใซใƒ•ใ‚ฉใƒผใ‚ซใ‚นใ—ใŸ็ ”็ฉถใ€‚ ใƒ‡ใƒผใ‚ฟใฏ45ใฎใƒญใƒƒใ‚ฏใฎใƒ‰ใƒฉใƒ ใจใƒ™ใƒผใ‚นใฎใƒ‘ใ‚ฟใƒผใƒณใ‚’ใใ‚Œใžใ‚Œ16ๅฐ็ฏ€(http://www.911tabs.com/ใ‹ใ‚‰ใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰)ใ€‚ใƒ‰ใƒฉใƒ ใฏใ‚ญใƒƒใ‚ฏใ€ใ‚นใƒใ‚ขใ€ใ‚ฟใƒ ใ€ใƒใ‚คใƒใƒƒใƒˆใ€ใ‚ทใƒณใƒใƒซใฎ5็จฎ้กžใ‚’ใƒ”ใ‚ขใƒŽใƒญใƒผใƒซๅฝขๅผใงๆ‰ฑใ„ใพใ™๏ผˆใƒ”ใ‚ขใƒŽใƒญใƒผใƒซใฏใƒใ‚คใƒŠใƒชใƒผใฎๆ–‡ๅญ—ๅˆ—ใง่กจ็พ)ใ€‚ใƒ™ใƒผใ‚นใƒฉใ‚คใƒณใ‚‚ๅญฆ็ฟ’ใƒ‡ใƒผใ‚ฟใซๅ…ฅใฃใฆใ„ใ‚‹ใฎใงใ™ใŒใ€ใพใš้Ÿณ็ฌฆใŒใ‚ใ‚‹ใ‹ใฉใ†ใ‹ใจใใ“ใงใƒ”ใƒƒใƒใŒๅ‰ใฎ้Ÿณใ‹ใ‚‰ไธŠใŒใฃใฆใ„ใ‚‹ใ‹ใ€ๅŒใ˜ใ‹ใ€ไธ‹ใŒใฃใฆใ„ใ‚‹ใฎใ‹ไธ‰ใคใฎๅ€คใ€ใ“ใฎๅˆ่จˆ4ๅ€‹ใฎใƒใ‚คใƒŠใƒชใง่กจ็พใ€‚ใ•ใ‚‰ใซๅฐ็ฏ€ๅ†…ใงใฎไฝ็ฝฎใ‚’3ใคใฎใƒใ‚คใƒŠใƒชใง่กจ็พใ—ใŸใ‚‚ใฎใ‚‚ๅ…ฅๅŠ›ใซใคใ„่ฒทใ„ใพใ™ใ€‚ 128ใ€512ใฎใƒŽใƒผใƒ‰ใ‚’ๆŒใฃใŸใƒฌใ‚คใƒคใƒผใ‚’ไบŒใค้‡ใญใŸLSTMใ‚’ไบŒใค็”จๆ„ใ—ใ€ไธ€ใคใซใฏใƒ‰ใƒฉใƒ ใฎๆƒ…ๅ ฑ(5ๅ€‹ใฎใƒใ‚คใƒŠใƒช)ใ€ใ‚‚ใ†ไธ€ใคใซใƒ™ใƒผใ‚นใƒฉใ‚คใƒณใจๅฐ็ฏ€ๅ†…ใงใฎไฝ็ฝฎใฎๆƒ…ๅ ฑ(่จˆ7ๅ€‹ใฎใƒใ‚คใƒŠใƒช)ใ‚’ๅ…ฅๅŠ›ใ—ใพใ™ใ€‚ใใ‚Œใžใ‚Œใฎๅ‡บๅŠ›ใ‚’ใƒžใƒผใ‚ธใ—ใŸใ‚‚ใฎใ‚’ๆœ€ๅพŒใฎFeedfowardๅฑคใซๅ…ฅๅŠ›ใ—ๆœ€็ต‚็š„ใซSoftmaxใ‚’้€šใ—ใฆใƒ‰ใƒฉใƒ ใฎๆƒ…ๅ ฑใŒๅ‡บๅŠ›ใ•ใ‚Œใพใ™ใ€‚ ๅทฆ โ€” ใ‚ทใ‚นใƒ†ใƒ ๆง‹ๆˆ / ๅณ โ€” ็”Ÿๆˆใ•ใ‚ŒใŸใƒชใ‚บใƒ ใฎใƒ”ใ‚ขใƒŽใƒญใƒผใƒซ ใƒ™ใƒผใ‚นใ‚„ๅฐ็ฏ€ๅ†…ใงใฎไฝ็ฝฎใฎๆƒ…ๅ ฑใฏใƒ‰ใƒฉใƒ ใฎ็”Ÿๆˆใƒขใƒ‡ใƒซใซๅฏพใ™ใ‚‹ๆกไปถไป˜ใ‘(conditioning)ใฎๅ…ฅๅŠ›ใจใฟใชใ™ใ“ใจใŒใงใใพใ™ใ€‚่‘—่€…ใ‚‰ใฏใ“ใฎๆƒ…ๅ ฑใซใ‚ˆใฃใฆใ€ๅญฆ็ฟ’ๅŠน็Ž‡ใจ็”Ÿๆˆใ•ใ‚Œใ‚‹ใƒชใ‚บใƒ ใฎ่ณชใŒๅ‘ไธŠใ—ใŸใจ่ฟฐในใฆใพใ™ใ€‚ ็”Ÿๆˆใ•ใ‚ŒใŸใƒชใ‚บใƒ ใฎ้ŸณๆบใŒ่ฆ‹ใคใ‹ใ‚‰ใชใ‹ใฃใŸใฎใงใ™ใŒใ€ไธŠใฎๅ›ณใง่ฆ‹ใ‚‹้™ใ‚Šใฏใ‹ใชใ‚Šใ‚ทใƒณใƒ—ใƒซใชใ‚‚ใฎใซใจใฉใพใฃใฆใ„ใ‚‹ใ‚ˆใ†ใงใ™ใ€‚ WaveNet โ€” 2015 Oord, A. van den, Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., โ€ฆ Kavukcuoglu, K. (2016). WaveNet: A Generative Model for Raw Audio. 
Retrieved from http://arxiv.org/abs/1609.03499

A model developed by the Google/DeepMind team that handles audio signals directly. It was originally developed to raise the quality of speech synthesis, but they also test it on music. The audio data is μ-law encoded into 8 bits of information, and the model outputs a probability distribution over the 256 possible values, saying which value is most plausible at the next time step. Dilated convolution — https://deepmind.com/blog/wavenet-generative-model-raw-audio/

As mentioned earlier, the breakthrough is that by introducing dilated convolution (more precisely, an asymmetric dilated causal convolution that convolves only over information at or before a given time point) it sidesteps the usual problem that CNNs cannot learn long-range temporal dependencies; a toy sketch of the idea follows below. In a dilated convolution, each successive layer skips some of its inputs, so the higher you go, the wider the range that is covered. The idea is similar to pooling or strides, but dilated convolution has the advantage that the input and output keep the same size. For training they used more than 60 hours of solo piano music from the MagnaTagATune dataset. Example of a piano piece generated with WaveNet (unofficial).

The algorithm has since been sped up, and the WaveNet algorithm is reportedly now used for the English and Japanese speech synthesis of the Google Assistant. NSynth, a project that applies WaveNet's technology to create new timbres, is also interesting. WaveNet: A Generative Model for Raw Audio | DeepMind — This post presents WaveNet, a deep generative model of raw audio waveforms. We show that WaveNets are able to generate… deepmind.com
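This is a toy sketch of the dilated causal convolution idea, not DeepMind's implementation: each layer doubles its dilation, so the receptive field grows exponentially with depth while input and output keep the same length. All names are ours.

import numpy as np

def causal_dilated_conv(x, w, dilation):
    # 1-D causal convolution: output[t] depends only on
    # x[t], x[t - d], x[t - 2d], ... (never on the future).
    y = np.zeros_like(x, dtype=float)
    for t in range(len(x)):
        acc = 0.0
        for i in range(len(w)):
            j = t - i * dilation   # look only into the past
            if j >= 0:
                acc += w[i] * x[j]
        y[t] = acc
    return y

x = np.random.randn(32)
w = np.array([0.5, 0.5])           # kernel size 2, as in WaveNet
for dilation in (1, 2, 4, 8):      # dilation doubles per layer
    x = causal_dilated_conv(x, w, dilation)

# With kernel size 2 and dilations 1, 2, 4, 8 the receptive field is
# 1 + (2 - 1) * (1 + 2 + 4 + 8) = 16 samples, yet every layer's output
# has the same length as its input.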
C-RNN-GAN Classical Polyphony Generation — 2016. Mogren, O. (2016). C-RNN-GAN: Continuous recurrent neural networks with adversarial training. Retrieved from http://arxiv.org/abs/1611.09904

At last, a system using a GAN appears. It is a project aiming at the generation of polyphonic classical music, using a music representation that closely resembles MIDI. It differs from MIDI in that each event carries a time parameter relative to the previous event. About 3,700 classical MIDI files collected from the web serve as training data. C-RNN-GAN — system architecture.

The GAN's Generator and Discriminator are both LSTMs with two layers of 350 units each. Only the Discriminator receives information in both the forward and the backward (look-ahead) direction. The author reports that feature matching, a technique said to stabilize GAN training, was effective here as well. C-RNN-GAN: Continuous recurrent neural networks with adversarial training — Generative adversarial networks have been proposed as a way of efficiently training deep generative neural networks. We… mogren.one

MidiNet — 2017. Yang, L.-C., Chou, S.-Y., & Yang, Y.-H. (2017). MidiNet: A Convolutional Generative Adversarial Network for Symbolic-domain Music Generation using 1D and 2D Conditions. Retrieved from http://arxiv.org/abs/1703.10847

A GAN structure like the above, but using CNNs rather than RNNs. It learns 1,022 pop melodies, represented in piano-roll format over a two-octave range with a sixteenth note as the smallest time unit. Another feature is that chord information, expressed as one-hot vectors, is used as a conditioning input (shown in blue at the upper left of the figure below).

The Generator and Discriminator are both CNNs, structured almost symmetrically: the Generator consists of two fully connected layers followed by four convolutional layers, while the Discriminator has two convolutional layers followed by a fully connected layer. MidiNet — system architecture (left: Generator / right: Discriminator).

The paper verifies that conditioning on the chord information works effectively. It also proposes that by controlling the feature-matching parameters one could control how close the generated output stays to the original training data — or, turned around, how much creative freedom the generation is given. This proposal touches on discussions about the essence of computational creativity, like the research on "Creative" Adversarial Networks. RichardYang - The Blog — richardyang40148.github.io

RL-Tuner Melody Generation System — 2016. Jaques, N., Gu, S., Turner, R. E., & Eck, D. (n.d.). Tuning Recurrent Neural Networks with Reinforcement Learning. Retrieved from https://arxiv.org/pdf/1611.02796v2.pdf

A study that frames music generation as a reinforcement learning (RL) problem. The system is somewhat complex, consisting of two RNNs and two Deep Q-Networks. First, as in the blues melody generation system described above, an RNN (the Note RNN) is trained to predict the next note. A copy of it is used as the network that gives the RL agent its reward (the Reward RNN). The Q-Network's task is likewise to select the next note from the notes so far, and this Q-Network can learn because another Q-Network (the Target Q Network) predicts the gain based on what the Note RNN has already learned. RL-Tuner system architecture.

The Q-Network's reward is computed not only from how close the note is to what the Reward RNN predicts, but also taking into account constraints added by the user (for example, specifying music theory or chords). The Q-Network therefore follows not only what was learned by the RNN but also the user's control. Tuning Recurrent Neural Networks with Reinforcement Learning — We are excited to announce our new RL Tuner algorithm, a method for enchancing the performance of an LSTM trained on… magenta.tensorflow.org

Unit Selection & Concatenation Melody Generation — 2016. Bretan, M., Weinberg, G., & Heck, L. (2016). A Unit Selection Methodology for Music Generation Using Deep Neural Networks. Retrieved from http://arxiv.org/abs/1612.03789

A rare type of study asking whether new melodies can be generated by concatenating fragments of existing melodies.
้Ÿณ็ด ใ‚’ใคใชใ’ใฆใ„ใใ“ใจใงไธ€้€ฃใฎ้Ÿณใ‚’ใคใใ‚‹ใจใ„ใ†ใ€ไปŠใ‚‚ๅบƒใไฝฟใ‚ใ‚Œใฆใ„ใ‚‹้Ÿณๅฃฐๅˆๆˆใฎๆ‰‹ๆณ•ใ‹ใ‚‰็™บๆƒณใ‚’ๅพ—ใฆใ„ใ‚‹ใจๆ€ใ‚ใ‚Œใพใ™ใ€‚ ใ“ใ“ใงใคใชใ’ใ‚‹ใƒกใƒญใƒ‡ใ‚ฃใƒผใฎๅ˜ไฝใฏไธ€ๅฐ็ฏ€ใ€‚ใ‚ธใƒฃใ‚บใ‚„ใƒญใƒƒใ‚ฏใ€ใ‚ฏใƒฉใ‚ทใƒƒใ‚ฏใชใฉ4200็จ‹ๅบฆใฎใƒชใƒผใƒ‰ใ‚ทใƒผใƒˆใจ120ใฎใ‚ธใƒฃใ‚บใฎใ‚ฝใƒญใฎๆƒ…ๅ ฑใ‚’ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใจใ—ใฆ็”จใ„ใพใ™ใ€‚ใ‚ใ‚‰ใ‹ใ˜ใ‚็งป่ชฟใ—ใฆใŠใใ“ใจใง็‰นๅฎšใฎ้ŸณๅŸŸใซๅใ‚‰ใชใ„ใ‚ˆใ†ใซใ—ใพใ™ใ€‚ ใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃใƒผใจใ—ใฆใฏใพใšใฏRNNใ‚’ใ‚‚ใกใ„ใŸAutoencoderใ‚’ไฝฟใ„ใพใ™ใ€‚ๆ‰‹ไฝœๆฅญใงใ‚ใ‚‰ใ‹ใ˜ใ‚ใใ‚ใฆใŠใ„ใŸ10ๅ€‹ใฎ็‰นๅพด้‡ใ‚’ใ€ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆๅ†…ใฎๅ„ๅฐ็ฏ€ใซใŸใ„ใ—ใฆbags-of-words(BOW)ใฎๆ‰‹ๆณ•ใงๆ•ฐใˆใฆใŠใใพใ™ใ€‚็‰นๅพด้‡ใจใ—ใฆใฏใ€้Ÿณใฎใ‚ฏใƒฉใ‚น(Cใ‚„D, etc)ใฎๆ•ฐใ‚„ใ€ๆœ€ๅˆใฎ้ŸณใŒ็›ดๅ‰ใฎๅฐ็ฏ€ใจใ‚นใƒฉใƒผใงใคใชใŒใฃใฆใ„ใ‚‹ใ‹..ใชใฉใชใฉใ€‚็ตๆžœ็š„ใซใฐ9675ใฎใƒใ‚คใƒŠใƒชใฎ็‰นๅพด้‡ใซใชใ‚Šใพใ™ใ€‚Autoencoderใฏใ“ใฎๅ…ฅๅŠ›ใ‹ใ‚‰ใ€500ๆฌกๅ…ƒใซใพใงๅ‰Šๆธ›ใ—ใŸใ†ใˆใงใ€ๅ†ๅบฆ9675ๆฌกๅ…ƒใงๅ‡บๅŠ›ใ—ใพใ™ใ€‚ใ“ใ‚Œใซใ‚ˆใฃใฆๅ„ๅฐ็ฏ€ใŒ500ๆฌกๅ…ƒใฎembeddingใง่กจ็พใ•ใ‚Œใ‚‹ใ“ใจใซใชใ‚Šใพใ™(ๅทฆไธ‹ใฎๅ›ณใฎ็ท‘ใฎ้ƒจๅˆ†)ใ€‚ ๅทฆ Autoencoderใซใ‚ˆใ‚‹Embedding / ๅณ ็”Ÿๆˆใฎใƒ—ใƒญใ‚ปใ‚น ใ“ใฎๅญฆ็ฟ’ใŒใ™ใ‚“ใ ใจใ“ใ‚ใงใ€ๅฎŸ้š›ใฎ็”Ÿๆˆใซใฏใ„ใ‚‹ใ‚ใ‘ใงใ™ใŒใ€ใ“ใ“ใงใฏไบŒใคใฎ่ฆ็ด ใŒ่€ƒๆ…ฎใ•ใ‚Œใฆใ„ใพใ™ใ€‚ ้€ฃ็ถšใ™ใ‚‹ๅฐ็ฏ€ใฎ้–ข้€ฃๆ€งใฎ้ซ˜ใ• (semantic relevance) LSTMใฎใƒใƒƒใƒˆใƒฏใƒผใ‚ฏใ‚’ๅˆฅ้€”็”จๆ„ใ—ใฆใ€ไธŠ่จ˜ใฎ500ๆฌกๅ…ƒใง่กจ็พใ•ใ‚ŒใŸใƒ™ใ‚ฏใƒˆใƒซใฎ้€ฃ็ถšๆ€งใ‚’ๅญฆ็ฟ’ใ—ใพใ™. 128ใƒฆใƒ‹ใƒƒใƒˆใฎLSTMใƒฌใ‚คใƒคใƒผใŒไบŒๅฑคใ€ๅ…ฅๅ‡บๅŠ›ใฏ512ใƒŽใƒผใƒ‰(ใ‚ใ‚Œ500ใ˜ใ‚ƒใชใ„ใฎ๏ผŸ๏ผŸ)ใ€‚ๅ…ฅๅŠ›ใซๅฏพใ—ใฆๆฌกใฎๅฐ็ฏ€ใŒๆŒใฃใฆใ„ใ‚‹ในใ็‰นๅพดใ‚’ๅ‡บๅŠ›ใ™ใ‚‹ใ“ใจใซใชใ‚Šใพใ™ใ€‚ ้€ฃ็ถšใ™ใ‚‹้Ÿณใฎใ‚‚ใฃใจใ‚‚ใ‚‰ใ—ใ• (concatenation cost) ๅ‰ใฎๅฐ็ฏ€ใฎๆœ€ๅพŒใฎ้Ÿณใจๆฌกใฎๅฐ็ฏ€ใฎๆœ€ๅˆใฎ้Ÿณใฎ้–ขไฟ‚ใฎใ‚‚ใฃใจใ‚‚ใ‚‰ใ—ใ•ใ€‚ๅ„้Ÿณใฏใจใ‚Šใ†ใ‚‹ใƒ”ใƒƒใƒใจ้•ทใ•ใฎๆƒ…ๅ ฑใ‚’ๅ…ƒใซ3000ๆฌกๅ…ƒใฎone-hotใƒ™ใ‚ฏใƒˆใƒซใจใ—ใฆ่กจ็พใ•ใ‚Œใ€LSTMใซใ‚ˆใฃใฆใใฎ้–ขไฟ‚ใ‚’ๅญฆ็ฟ’ใ—ใพใ™. 
ใ“ใ†ใ—ใฆ่จˆ็ฎ—ใ•ใ‚ŒใŸไบŒใคใฎใ‚ณใ‚นใƒˆใ‹ใ‚‰ใ€ๅ‰ใฎๅฐ็ฏ€ใซ็ถšใๆฌกใฎๅฐ็ฏ€ใจใ—ใฆไธ€็•ชใ‚‚ใฃใจใ‚‚ใ‚‰ใ—ใ„ๅฐ่ชฌใŒ้ธใฐใ‚Œใ€ใคใชใ’ใ‚‰ใ‚Œใฆใ„ใใพใ™ใ€‚ ่€ƒๅฏŸใƒปไปŠๅพŒใฎๅฑ•้–‹ ๆœ€ๅพŒใซใ“ใ“ใพใง่ฆ‹ใฆใใŸใ‚ทใ‚นใƒ†ใƒ ใฎๅฎŸไพ‹ใ‚’ใ‚‚ใจใซใ€Deep Learningใ‚’็”จใ„ใŸ้Ÿณๆฅฝใฎ็”Ÿๆˆๆ‰‹ๆณ•ใฎ็พ็Šถใจ็ญ†่€…(ๅพณไบ•)ใŒ่€ƒใˆใ‚‹ไปŠๅพŒใฎๅฑ•้–‹ใซใคใ„ใฆ็ทๆ‹ฌใ—ใŸใ„ใจๆ€ใ„ใพใ™ใ€‚ ็พ็Šถ ๆ—ฅ้€ฒๆœˆๆญฉใง็ ”็ฉถใŒ้€ฒใ‚“ใงใ„ใ‚‹็”ปๅƒใฎ็”Ÿๆˆใƒขใƒ‡ใƒซใฎ็ ”็ฉถใซๆฏ”ในใฆใ€้Ÿณๆฅฝใฎ็”Ÿๆˆใƒขใƒ‡ใƒซใฎ็ ”็ฉถใฏ่ณชใƒป้‡ใจใ‚‚ใซใ ใ„ใถ่ฆ‹ๅŠฃใ‚ŠใŒใ™ใ‚‹ใจใ„ใ†ๅฐ่ฑกใ‚’ใ†ใ‘ใพใ—ใŸใ€‚ใ‚‚ใกใ‚ใ‚“ๅธ‚ๅ ดใฎใƒ‹ใƒผใ‚บใฎๅฝฑ้ŸฟใŒๅคงใใ„ใฎใฏ้–“้•ใ„็„กใ„ใจๆ€ใ†ใฎใงใ™ใŒใ€ๅญฆ็ฟ’ใซไฝฟใˆใ‚‹ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใŒใพใ ใพใ ๆœชๆ•ดๅ‚™ใงใ‚ใ‚‹ใจใ„ใ†ใฎใ‚‚็†็”ฑใฎไธ€ใคใงใฏใชใ„ใงใ—ใ‚‡ใ†ใ‹ใ€‚ใพใŸๅ€‹ไบบๅ€‹ไบบใฎๅฅฝใฟใŒๅคงใใ็•ฐใชใ‚‹ใ€ๆ˜ ๅƒไปฅไธŠใซไธๅ”ๅ’Œใชใ‚‚ใฎใซๅฏพใ—ใฆๆ•ๆ„Ÿใงใ‚ใ‚‹ใจใ„ใฃใŸ้Ÿณๆฅฝใชใ‚‰ใงใฏ็‰นๅพดใ‚‚ๅฝฑ้Ÿฟใ—ใฆใ„ใ‚‹ใฎใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ใ€‚ ใพใŸ็พ็Šถใฎ็ ”็ฉถใฎๅคšใใŒใ€ๅพ“ๆฅใฎใ‚ณใƒณใƒ”ใƒฅใƒผใ‚ฟ้Ÿณๆฅฝใฎ็ ”็ฉถ้ ˜ๅŸŸใงๆ‰ฑใ‚ใ‚ŒใฆใใŸๆฅฝ่ญœใจ้Ÿณๆฅฝ็†่ซ–ใ‚’ใƒ™ใƒผใ‚นใซใ—ใŸ็ ”็ฉถใฎๅปถ้•ท็ทšไธŠใซใ‚ใ‚Šใ€ใใ‚Œใ‚‰ใซใ‚ˆใฃใฆๆ‰ฑใ„ใ‚„ใ™ใ„ใ‚ฏใƒฉใ‚ทใƒƒใ‚ฏใ‚„ใ‚ธใƒฃใ‚บใ‚’ๅฏพ่ฑกใซใ—ใฆใ„ใ‚‹ใ‚‚ใฎใŒๅคšใ„ใ‚ˆใ†ใซใ‚‚ๆ„Ÿใ˜ใฆใ„ใพใ™ใ€‚ไปŠๅพŒใ€ใใฎไป–ใฎใ‚ธใƒฃใƒณใƒซใซใ‚‚็ ”็ฉถใŒๅบƒใพใ‚‹ใŸใ‚ใซใ‚‚ใ€ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใฎๆ‹กๅ……ใฏๅฟ…้ ˆใงใ‚ใ‚‹ใ‚ˆใ†ใซใ‚‚ๆ€ใ„ใพใ™ใ€‚ ็ ”็ฉถใ‚’้€ฒใ‚ใ‚‹ใŸใ‚ใซใฏใ€็คพไผšใฎใƒ‹ใƒผใ‚บใจใ„ใ†่ฆณ็‚นใฏ้ฟใ‘ใฆ้€šใ‚Œใพใ›ใ‚“(ใ‚‚ใกใ‚ใ‚“ใ€ๅƒ•่‡ช่บซใฏๆ–ฐใ—ใ„้Ÿณๆฅฝใ‚’ไฝœใ‚ŠใŸใ„ใจใ„ใ†็ด”็ฒ‹ใชๆฌฒๆฑ‚ใ“ใใŒใ“ใ†ใ„ใฃใŸ็ ”็ฉถใ‚’ๆŽจใ—้€ฒใ‚ใ‚‹็ฉถๆฅต็š„ใชๅŽŸๅ‹•ๅŠ›ใ ใจใฏๆ€ใฃใฆใ„ใ‚‹ใฎใงใ™ใŒโ€ฆ)ใ€‚AIใŒ็”Ÿๆˆใ•ใ‚ŒใŸ้Ÿณๆฅฝใซๅฏพใ™ใ‚‹ใƒ‹ใƒผใ‚บใจใ„ใ†่ฆณ็‚นใ‹ใ‚‰่€ƒใˆใฆใฟใ‚‹ใฎใ‚‚่‰ฏใ„ใฎใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ใ€‚ ใ™ใ“ใ—่„ฑ็ทšใ—ใพใ™ใŒโ€ฆ Spotifyใงไธ€็•ช่žใ‹ใ‚Œใฆใ„ใ‚‹ใƒ—ใƒฌใ‚คใƒชใ‚นใƒˆใฎใ†ใกใฎใ„ใใคใ‹ใฏใ€ใ€Œๅฏใ‚‹ใŸใ‚ใฎ้Ÿณๆฅฝใ€ใจใ‹ใ€Œ้›†ไธญใ—ใŸใ„ใจใใฎ้Ÿณๆฅฝใ€ใฎใ‚ˆใ†ใช็‰นๅฎšใฎๆฉŸ่ƒฝๆ€งใ‚’ใ†ใŸใ†ใƒ—ใƒฌใ‚คใƒชใ‚นใƒˆใชใ‚“ใ ใใ†ใงใ™ใ€‚ๅŒฟๅๆ€งใŒ้ซ˜ใ(่ชฐใŒไฝœใฃใŸใ‹ใฏๅ•ใ‚ใชใ„)ใ€ใใฎใ€ŒๅŠน่ƒฝใ€ใซใ‚ˆใฃใฆ่ฉ•ไพกใ—ใ‚„ใ™ใ„ใ“ใ‚Œใ‚‰ใฎ้ŸณๆฅฝใŒใ€AIใจ็›ธๆ€งใŒใ‚ˆใ„ใ“ใจใฏใ™ใใซๆƒณๅƒใŒใคใใพใ™ใ€‚ๅฎŸใฏไปŠๅ›žๅ‚่€ƒใซใ—ใŸใ‚ตใƒผใƒ™ใ‚คใ‚’ๆ›ธใ‹ใ‚ŒใŸF.Pachetใ•ใ‚“ใฏใ€ๆœ€่ฟ‘Spotifyใฎ็ ”็ฉถๆ‰€ใซ็งป็ฑใ•ใ‚ŒใŸใใ†ใงใ™ใ€‚่ฟ‘ใ„ๅฐ†ๆฅใ€ๆฐ—ใฅใ‹ใชใ„้–“ใซAIใซใ‚ˆใฃใฆ็”Ÿๆˆใ•ใ‚ŒใŸ้Ÿณๆฅฝใ‚’่žใ‹ใ•ใ‚Œใฆใ‚‹...ใชใ‚“ใฆใ“ใจใซใชใฃใฆใ„ใ‚‹ใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ใ€‚ ใ‚‚ใ†ไธ€ใคใฎ่ปธใฏใ€ไบบใŒ่žใ้Ÿณๆฅฝใงใฏใชใใ€ใ€ŒๆฉŸๆขฐใŒ่žใใ€้Ÿณๆฅฝใ‚’็”Ÿๆˆใ™ใ‚‹ใจใ„ใ†่ฆณ็‚นใงใ™ใ€‚็”ปๅƒ็”Ÿๆˆใƒขใƒ‡ใƒซใ‚‚ใ€็”ปๅƒ่ช่ญ˜ใ‚จใƒณใ‚ธใƒณใฎ็ฒพๅบฆใ‚’ไธŠใ’ใ‚‹ใŸใ‚ใซ็”Ÿๆˆใƒขใƒ‡ใƒซใซใ‚ˆใฃใฆ็”Ÿๆˆใ—ใŸ็”ปๅƒใ‚’ๅญฆ็ฟ’ใซไฝฟใ†ใ“ใจใŒๆๆกˆใ•ใ‚Œใฆใ„ใพใ™ใ€‚ๅŒๆง˜ใซ้Ÿณๆฅฝใฎ่ญ˜ๅˆฅใƒขใƒ‡ใƒซใฎ็ฒพๅบฆใ‚’้ซ˜ใ‚ใ‚‹ใŸใ‚ใซใ€็”Ÿๆˆใƒขใƒ‡ใƒซใŒใคใใฃใŸๆ›ฒใ‚’ไฝฟใ†ใจใ„ใ†ใฎใ‚‚้œ€่ฆใŒใ‚ใ‚Šใใ†ใงใ™ใ€‚ ไปŠๅพŒใฎ็ ”็ฉถใฎใƒ•ใƒญใƒณใƒ†ใ‚ฃใ‚ข ไปŠๅ›žใฎใ‚ตใƒผใƒ™ใ‚คใงไธ€็•ชๆ„ๅค–ใ ใฃใŸใฎใฏใ€GANใ‚’ใคใ‹ใฃใŸ็ ”็ฉถใŒใ‹ใชใ‚Šๅฐ‘ใชใ„ใจใ„ใ†ใ“ใจใงใ—ใŸใ€‚2016โ€“2017ๅนดใฏGANไธ€่‰ฒใจใ„ใฃใฆใ‚‚้Ž่จ€ใงใฏใชใ„ใใ‚‰ใ„ใ€ไธปใซ็”ปๅƒใฎ็”Ÿๆˆใ‚’ไธญๅฟƒใซใ•ใพใ–ใพใช็ ”็ฉถใŒ่ฉฑ้กŒใซใชใ‚Šใพใ—ใŸใŒใ€้Ÿณๆฅฝใฎๆ–นใธใฎๅฟœ็”จใฏใพใ ใพใ 
ๅฐ‘ใชใ„ใ‚ˆใ†ใงใ™ใ€‚ใฒใจใคใซใฏGANใชใ‚‰ใงใฏใฎๅญฆ็ฟ’ใฎ้›ฃใ—ใ•ใจใ„ใ†ใฎใ‚‚ใ‚ใ‚‹ใฎใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ (่‡ชๅˆ†ใงใ‚‚GANใงใฎใƒชใ‚บใƒ ใฎ็”Ÿๆˆใ‚’่ฉฆใ—ใฆ่ฆ‹ใŸใฎใงใ™ใŒๅญฆ็ฟ’ใŒใพใฃใŸใๅฎ‰ๅฎšใ—ใพใ›ใ‚“ใงใ—ใŸโ€ฆ)ใ€‚ใพใŸๆฅฝ่ญœใง้Ÿณๆฅฝใ‚’ๆ‰ฑใฃใฆใ„ใ‚‹้™ใ‚Šใฏใ€่งฃ็ฉบ้–“ใŒ(็”ปๅƒไปฅไธŠใซ)้›ขๆ•ฃ็š„ใซใชใ‚‰ใ–ใ‚‹ใ‚’ใˆใชใ„ใจใ„ใ†ใฎใ‚‚ๅญฆ็ฟ’ใ‚’้›ฃใ—ใใ—ใฆใ„ใ‚‹ใฎใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ใ€‚ ๅŒๆง˜ใซใ€WaveNetใฎใ‚ˆใ†ใซ้Ÿณๅฃฐไฟกๅทใ‚’็›ดๆŽฅๆ‰ฑใ†ๆ‰‹ๆณ•ใซใคใ„ใฆใ‚‚ไปŠๅพŒใฎ็ ”็ฉถใŒๆœŸๅพ…ใ•ใ‚Œใพใ™ใ€‚่จˆ็ฎ—ใฎๅ‡ฆ็†ใŒ้‡ใ„ใจใ„ใ†ๅ•้กŒใŒใ‚ใ‚Šใพใ™ใŒใ€ๆฅฝ่ญœใชใฉใฎใ‚ทใƒณใƒœใƒชใƒƒใ‚ฏใช่กจ็พใฎๅˆถ้™ใ‚’ๅ—ใ‘ใชใ„ใ“ใจใงใ€ๆ–ฐใ—ใ„้Ÿณๆฅฝ้ ˜ๅŸŸใƒปใ‚ธใƒฃใƒณใƒซใธใฎๅฟœ็”จใŒๆœŸๅพ…ใ•ใ‚Œใพใ™ใ€‚ใพใŸใ‚ˆใ‚Š้€ฃ็ถš็š„ใช่งฃ็ฉบ้–“ใ‚’ๆŒใคใจใ„ใ†ๅˆฉ็‚นใ‚‚ใ‚ใ‚‹ใฎใงใฏใชใ„ใงใ—ใ‚‡ใ†ใ‹ใ€‚ ใ‚‚ใ†ใฒใจใคไปŠๅพŒ้‡่ฆใซใชใฃใฆใใ‚‹ใฎใฏใ€ใใ‚‚ใใ‚‚ใชใ‚“ใฎใŸใ‚ใซAIใง้Ÿณๆฅฝใ‚’ไฝœใ‚‹ใฎใ‹ใจใ„ใ†่ฆ–็‚นใงใ™ใ€‚ใ€Œใƒใƒƒใƒใฎใ‚ˆใ†ใชใ€ใ‚ใ‚‹ใ„ใฏใ€Œใƒ“ใƒผใƒˆใƒซใ‚บใฎใ‚ˆใ†ใชใ€้Ÿณๆฅฝใ‚’็”Ÿๆˆใ™ใ‚‹ใ“ใจใŒ็›ฎ็š„ใงใ‚ˆใ„ใฎใงใ—ใ‚‡ใ†ใ‹ใ€‚ใใ†ใ—ใŸ่ฆ–็‚นใ‹ใ‚‰ใ‚‚ใ‚ทใ‚นใƒ†ใƒ ใฎ่ฃฝไฝœ่€…ใŒใ‚ใ‚‰ใ‹ใ˜ใ‚ๅฎšใ‚ใŸใ€ใ‚ขใƒ—ใƒชใ‚ชใƒชใช่ฉ•ไพก้–ขๆ•ฐใ‚’ๆŒใŸใชใ„GANใฎใ‚ˆใ†ใชๆ‰‹ๆณ•ใŒใฒใจใคใฎ็ช็ ดๅฃใซใชใ‚Šใใ†ใงใ™ใ€‚ ใ‚ตใƒผใƒ™ใ‚ค่ซ–ๆ–‡ใฎไธญใงใ‚‚ๅ–ใ‚ŠไธŠใ’ใ‚‰ใ‚Œใฆใ„ใ‚‹ใ€Creative Adversarial Networkใฎ็ ”็ฉถใŒใฒใจใคใฎๅ‚่€ƒใซใชใ‚Šใพใ™ใ€‚ไบŒใคใฎDiscriminatorใ‚’ๆŒใคGANใฎใƒใƒชใ‚จใƒผใ‚ทใƒงใƒณใงใ€ๆ–ฐใ—ใ„ๆŠฝ่ฑก็”ปใ‚’็”Ÿๆˆใ—ใ‚ˆใ†ใจใ„ใ†็ ”็ฉถใชใฎใงใ™ใŒใ€็”Ÿๆˆใ•ใ‚ŒใŸ็ตตใŒๅญฆ็ฟ’ๅ…ƒใฎใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใซใ‚ใ‚‹็ตต็”ปใซ่ฟ‘ใ„ใ‹ใฉใ†ใ‹ใ‚’ๅˆคๅฎšใ™ใ‚‹้€šๅธธใฎDiscriminatorใจใฏๅˆฅใซใ€ใใฎ็ตตใ‚’้ŽๅŽปใฎๆญดๅฒไธŠใฎใ‚ธใƒฃใƒณใƒซ(ๅฐ่ฑกๆดพใƒปใ‚ญใƒฅใƒ“ใ‚บใƒ  etc)ใซๅˆ†้กžใ™ใ‚‹Discriminatorใ‚’็”จๆ„ใ—ใพใ™ใ€‚ใใ—ใฆใฉใฎใ‚ธใƒฃใƒณใƒซใซใ‚‚ใ€Œใ‚ใฆใฏใพใ‚‰ใชใ„ใ€ใ‚‚ใฎใ‚’้ซ˜ใ่ฉ•ไพกใ™ใ‚‹ใ“ใจใงใ€ใ€Œใ‚ขใƒผใƒˆใ‚‰ใ—ใ•ใ€ใจใ€Œๆ–ฐ่ฆๆ€งใ€ใฎใƒใƒฉใƒณใ‚นใ‚’ใจใฃใŸใ‚ˆใ‚Šๅ‰ต้€ ็š„ใชไฝœๅ“ใ‚’็”Ÿๆˆใงใใ‚‹ใจใ—ใฆใ„ใพใ™ใ€‚ CAN: Creative Adversarial Networks Generating โ€œArtโ€ by Learning About Styles and Deviating from Style Norms ้ŽๅŽปใฎไฝœๅ“ใ‚’ๅญฆ็ฟ’ใ™ใ‚‹ใ“ใจใงๆœฌๅฝ“ใซๆ–ฐใ—ใ„ไฝœๅ“ใŒไฝœใ‚Œใ‚‹ใฎใ‹?? - CAN: Creative Adversarial Networks Generating "Art" by Learningโ€ฆ ใ“ใฎใ‚ตใ‚คใƒˆใงใ‚‚ใŠใชใ˜ใฟใฎ GANใ‚’็”จใ„ใฆใ€ใ‚ขใƒผใƒˆ(ๆŠฝ่ฑก็ตต็”ป)ใ‚’็”Ÿๆˆใ™ใ‚‹ใจใ„ใ†ๅ–ใ‚Š็ต„ใฟ. ใ€Œ้ŽๅŽปใ€ใฎใ‚ขใƒผใƒˆไฝœๅ“ใ‚’ๅญฆ็ฟ’ใ™ใ‚‹ใ ใ‘ใงใ€็œŸใซๅ‰ต้€ ็š„ใชใชใ€Œๆ–ฐใ—ใ„ใ€ใ‚ขใƒผใƒˆใ‚’ไฝœใ‚Œใ‚‹ใฎใ‹? ใจใ„ใ†ใ‚‚ใฃใจใ‚‚ใชๅ•ใ„ใซๅ‘ใๅˆใฃใŸ่ซ–ๆ–‡ใงใ™. 
ใ“ใฎ็ ”็ฉถใฎ้ข็™ฝใ„ใจใ“โ€ฆcreatewith.ai ๅŒๆง˜ใฎ่€ƒใˆๆ–นใงใ€่ชฐใ‚‚่žใ„ใŸใ“ใจใฎใชใ„ใ‚ˆใ†ใช้Ÿณๆฅฝใ‚’็”Ÿๆˆใ™ใ‚‹ใ‚ทใ‚นใƒ†ใƒ ใ‚’ๆง‹็ฏ‰ใงใใ‚‹ใฎใ‹ใ€‚ใฒใจใคๅคงใใชใƒใƒฃใƒฌใƒณใ‚ธใŒ็›ฎใฎๅ‰ใซๆจชใŸใ‚ใฃใฆใ„ใ‚‹ใ‚ˆใ†ใซๆ€ใ„ใพใ™ใ€‚ ใ‚‚ใ†ไธ€ใคใฎๆ–นๅ‘ๆ€งใจใ—ใฆใฏใ€ๅ…จ่‡ชๅ‹•ใง้Ÿณๆฅฝใ‚’็”Ÿๆˆใ™ใ‚‹ใ‹ใ‚ใ‚Šใซใ€็”Ÿๆˆใฎ้Ž็จ‹ใซไบบ้–“ใ‚’ใ†ใพใ็ตกใ‚ใฆใ„ใใจใ„ใ†่€ƒใˆๆ–นใ‚‚ใ‚ใ‚‹ใงใ—ใ‚‡ใ†ใ€‚ใชใ‚“ใ‚‰ใ‹ใฎใƒฆใƒผใ‚ถใ‚คใƒณใ‚ฟใƒ•ใ‚งใƒผใ‚นใง็”Ÿๆˆใฎๆ–นๅ‘ๆ€งใ‚’ใƒŠใƒ“ใ‚ฒใƒผใƒˆใ™ใ‚‹ใ€ไบบ้–“ใฎไผดๅฅ่€…ใจๆŽ›ใ‘ๅˆใ„ใ‚’่กŒใ†ใ€ใƒชใ‚นใƒŠใƒผใฎ่ฉ•ไพก(็”Ÿไฝ“ไฟกๅท??)ใ‚’็”Ÿๆˆใฎใƒญใ‚ธใƒƒใ‚ฏใซใƒ•ใ‚ฃใƒผใƒ‰ใƒใƒƒใ‚ฏใ™ใ‚‹ใชใฉใชใฉใ€ใ„ใใคใ‹ใฎใƒใƒชใ‚จใƒผใ‚ทใƒงใƒณใŒ่€ƒใˆใ‚‰ใ‚Œใใ†ใงใ™ใ€‚ใ“ใ‚ŒใฏDeep Learningไปฅๅ‰ใฎใ‚ณใƒณใƒ”ใƒฅใƒผใ‚ฟ้Ÿณๆฅฝใฎ็ ”็ฉถใงใ‚‚ๅบƒใ็ ”็ฉถใ•ใ‚ŒใฆใใŸใƒˆใƒ”ใƒƒใ‚ฏใงใ™ใŒใ€Deep Learningใฎ็™ปๅ ดใงใใ†ใ„ใฃใŸๅˆ†้‡Žใซใฉใฎใ‚ˆใ†ใชใ‚ขใƒƒใƒ—ใƒ‡ใƒผใƒˆใŒใชใ•ใ‚Œใ‚‹ใฎใ‹ใ€่ˆˆๅ‘ณใ‚’ๆƒนใ‹ใ‚Œใ‚‹ใจใ“ใ‚ใงใ™ใ€‚ AIใ‚’็”จใ„ใŸ่‡ชๅ‹•ไฝœๆ›ฒใ‚’ๅฃฒใ‚Š็‰ฉใซใ™ใ‚‹ใ‚นใ‚ฟใƒผใƒˆใ‚ขใƒƒใƒ—ใ‚„ใใฎๆŠ€่ก“ใซ้–ขใ™ใ‚‹่จ€ๅŠใŒใพใฃใŸใใชใ‹ใฃใŸใ“ใจใซๆฐ—ใฅใ‹ใ‚ŒใŸๆ–นใ‚‚ใ„ใ‚‰ใฃใ—ใ‚ƒใ‚‹ใงใ—ใ‚‡ใ†ใ‹ใ€‚ๅฝผใ‚‰ใŒAIใงใ€Œ็”Ÿๆˆใ€ใ—ใŸใจ่ชžใ‚‹้Ÿณๆฅฝใฏใ“ใ“ใง็ดนไป‹ใ—ใŸใ‚‚ใฎใ‚ˆใ‚Šใ‚‚ใ‚ˆใ‚ŠๅฎŒๅ…จใชใ€Œ้Ÿณๆฅฝใ‚‰ใ—ใ„ใ€้Ÿณๆฅฝใงใ™ใ€‚ ใ‚นใ‚ฟใƒผใƒˆใ‚ขใƒƒใƒ—ใชใฎใงๆŠ€่ก“็š„ใช่ฉณ็ดฐใŒๆ˜Žใ‚‰ใ‹ใซใชใฃใฆใ„ใชใ„ใจใ„ใ†ใฎใ‚‚ใ‚ใ‚Šใพใ™ใŒใ€้™ใ‚‰ใ‚ŒใŸๆƒ…ๅ ฑใ‚„็”Ÿๆˆใ•ใ‚ŒใŸๆฅฝๆ›ฒใ‹ใ‚‰ๆŽจๆธฌใ™ใ‚‹ใซใ€ๅฝผใ‚‰ใฎไป•็ต„ใฟใฏใ‚ใ‚‹็จฎใฎใƒซใƒผใƒซใƒ™ใƒผใ‚นใซๅŸบใฅใ„ใŸใ‚‚ใฎใงใ€ๅคง้‡ใฎใƒซใƒผใƒ—ใ‚’ใƒซใƒผใƒซใซๅŸบใฅใ„ใฆ็ต„ใฟๅˆใ‚ใ›ใฆใ„ใใ“ใจใงๆฅฝๆ›ฒใ‚’ใ‹ใŸใกใฅใใฃใฆใ„ใ‚‹ใ‚ˆใ†ใงใ™ใ€‚ใใฎไธ€้ƒจใซDeep LearningใŒ็”จใ„ใ‚‰ใ‚Œใฆใ„ใ‚‹ๅฏ่ƒฝๆ€งใ‚‚ใ‚ใ‚Šใพใ™ใŒใ€ไฝฟใ‚ใ‚Œๆ–นใฏ้™ๅฎš็š„ใจ่€ƒใˆใฆใ‚ˆใ„ใงใ—ใ‚‡ใ†ใ€‚AIใซใ‚ˆใ‚‹้Ÿณๆฅฝ็”Ÿๆˆใจใ„ใฃใŸ่จ˜ไบ‹ใ‚’่ฆ‹ใŸใจใใซใ™ใ“ใ—ๅฟต้ ญใซใŠใ„ใฆใŠใใจใ‚ˆใ„ใ‹ใจๆ€ใ„ใพใ™ใ€‚ ๆœ€ๅพŒใซใชใ‚Šใพใ™ใŒใ€ใ™ใฐใ‚‰ใ—ใ„ใ‚ตใƒผใƒ™ใ‚คใ‚’ๆ›ธใ‹ใ‚ŒใŸJean-Pierre Briotๆฐใ€Gaรซtan Hadjeresๆฐใ€Franรงois PachetๆฐใฎใŠไธ‰ๆ–นใซๆ„Ÿ่ฌใฎๆ„ใ‚’่กจใ—ใพใ™ใ€‚
A Survey of Music Generation Methods Using Deep Learning
87
deep-learningใ‚’็”จใ„ใŸ้Ÿณๆฅฝ็”Ÿๆˆๆ‰‹ๆณ•ใฎใพใจใ‚-ใ‚ตใƒผใƒ™ใ‚ค-1298d29f8101
2018-06-05
2018-06-05 05:29:17
https://medium.com/s/story/deep-learningใ‚’็”จใ„ใŸ้Ÿณๆฅฝ็”Ÿๆˆๆ‰‹ๆณ•ใฎใพใจใ‚-ใ‚ตใƒผใƒ™ใ‚ค-1298d29f8101
false
1,147
null
null
null
null
null
null
null
null
null
Deep Learning
deep-learning
Deep Learning
12,189
Nao Tokui (Qosmo)
Nao Tokui (徳井直生) — Qosmo, Inc. AI and expression. http://createwith.ai/ http://naotokui.net/
d9ccd9d1c0fe
naotokui
947
524
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-24
2018-08-24 16:45:03
2018-08-24
2018-08-24 18:03:52
3
false
en
2018-08-24
2018-08-24 18:03:52
3
129942b3470c
1.353774
0
0
0
Anyway, we continue comparing our AI with the one from GOSU!
4
AI Battle for TI bracket predictions, day 3. MoreMMR 2:1 GOSU.AI
How the MoreMMR and GOSU developers expected the AI battle to look:
Reality
Anyway, we continue comparing our AI with the one from GOSU! Predictions from yesterday: MoreMMR, GOSU. Predictions from the first day: MoreMMR, GOSU.
Day 2 of the main event did not bring many surprises, and both AIs managed to guess 3/4 matches correctly, as the only real challenge was to predict VGJ.Storm vs OG, which both AIs got wrong. Well played, BigDaddy!
MoreMMR: 7/10 matches of the main event predicted correctly
GOSU.AI: 6/10 matches of the main event predicted correctly
Here is our AI's vision of how the bracket will look from now on.
Many people in the comments are taking it too seriously, so here is a friendly reminder that this AI was made mostly for entertainment purposes and for our prediction contest, where users can compete with our AI and win prizes. It is not absolute truth, and we never said it is. We literally made it about two weeks ago, and it is simply fun for us to see what predictions our AI makes, how it differs from more experienced machine learning systems, and how wrong (sometimes even right) we are! Our AI is very young and sensitive and it makes mistakes, but it learns from them.
AI Battle for TI bracket predictions, day 3. MoreMMR 2:1 GOSU.AI
0
ai-battle-for-ti-bracket-predictions-day-3-moremmr-2-1-gosu-ai-129942b3470c
2018-08-24
2018-08-24 18:03:52
https://medium.com/s/story/ai-battle-for-ti-bracket-predictions-day-3-moremmr-2-1-gosu-ai-129942b3470c
false
213
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Moremmr.com
International Dota2 e-learning platform. We combine unconventional analytical insights, fascinating video lessons and diverse game tasks.
28ca45ed0e84
inventedd
29
155
20,181,104
null
null
null
null
null
null
0
null
0
f894cb8c470f
2017-09-22
2017-09-22 14:23:38
2017-09-27
2017-09-27 15:57:33
4
false
en
2017-09-27
2017-09-27 16:13:17
3
12994cd00b2f
6.198113
15
0
0
Creating art with technology
5
A Reconciliation of Art and Science
Creating art with technology
Artists are people who see different things, or perhaps artists are people who see things differently. Scientists are people who see different things, or perhaps scientists are people who see things differently. The line between what is and what is not art has been a debate for millennia, and what is considered science has changed so frequently it can make you dizzy.
These fields seem so different. They appear to be in such different realms. Imagine a physicist and a painter sitting at a bar sipping cocktails. What do they talk about? It is hard to imagine that they would have very much to say to each other at all, and the conversation that they would have would likely be forced and awkward. But science and art face similar existential issues. They both carry with them long histories filled with toil and conflict. They both serve as frameworks for people to understand the world and feel meaning. They both have frontiers that brilliant revolutionaries strive every day to broaden. Art and science may be hostile brothers, but they are still brothers. These brothers may not often meet, but recent developments in artificial intelligence, other scientific endeavors, and specifically their use in the creation of artificially designed art will soon force the brothers to embrace. However contentious this meeting might be, in their reconciliation art and science may find that they have more in common than they might have expected.
Artificial intelligence (AI) is exhibited by computers and other machines when they display abilities typically associated with human intelligence: pattern recognition, problem solving, and creativity. At least that is the intention. Artificial intelligence is still in its early days and has not yet manifested the terrifying apocalyptic power that science fiction writers divine. Even though we are not quite living in a sci-fi movie yet, artificial intelligence is doing one thing that might give many people pause: AI is making art.
The prime example of this, as always, is Google. Google has recently developed a program called DeepDream that uses neural nets to identify patterns, extracting them from a large variety of images presented as input. Essentially, DeepDream does exactly what any good artist does: closely observes the world, identifies what is interesting about it, magnifies that aspect, and recreates it in a new medium.
"But wait!" you might object. "DeepDream isn't actually observing the world. It is just programmed to look for something, change it a little, and spit out a new product at the end. There is no creative intention, no motive, and no inspiration!" And you would be right, but here we are not talking about the artist as being the computer. That is the wrong way of thinking about it. We are talking about the artist being the programmer.
Masha Ryskin of the Rhode Island School of Design says on the topic of artificially created art, "I don't consider the robot to be the artist, like how I don't consider the chimp as an artist. The robots can be used by artists to make art, however." Ryskin is absolutely right on this point. We should not think about DeepDream or any other technology that is used to make art as the artist. An artist's intentionality, emotion, perspective, and vision are what separate mere design and visual regurgitation from real art. Perhaps more importantly, inspiration is what separates good art from great art.
In this case the computer programmer is the artist. It is the programmer who decides what patterns the program will look for. The programmer makes the subtle design decisions that result in a beautiful piece of visual or musical composition. Maybe it is not obvious that the images DeepDream produces are beautiful. Below is an example of an image modified and enhanced by DeepDream.
Dreamscope App/dreamscopeapp.com
It is important to note that DeepDream still needs an image to be fed into the program to produce images such as the one above. This means that, at this point, DeepDream is essentially a glorified Snapchat filter. But this does not negate the fact that, at the rate computer technology is progressing, it will not be long before computers won't need to be spoon-fed images to produce spectacularly beautiful artwork.
DeepDream is not the only futuristic art being produced, and some might be even more controversial.
The Mandelbrot Set
Above is a picture of part of the Mandelbrot set. It may be hard to believe, but the picture is a mathematical graph that has had colors applied to it by a computer program. The Mandelbrot set is determined by iterating a simple mathematical function — in effect connecting output to input — which ultimately produces infinite intricacy.
We must ask ourselves: is the Mandelbrot set art? This is where science and art really start to meet. If someone were to present you with the picture of the Mandelbrot set and say that he had just had a stroke of inspiration and painted this breath-taking abstract image from nothing, you certainly would be awestruck. It is an incredible image. But that is not what happened. This image was created when a mathematician spent hours and hours, and days and days, doing tedious calculations and research that most people in the world could not even pretend to understand. Then dozens or hundreds of computer scientists and mathematicians and engineers spent even more countless hours designing and programming computers that could take the information produced by that first mathematician and draw a picture of it. Even then the work was not done. At that point, the picture was still in black and white. Someone still had to figure out a way to add color that would maintain the integrity and mathematical accuracy of the picture, while adding a dimension to the picture's profundity.
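That "simple mathematical function" is the iteration z -> z*z + c over complex numbers. A few lines of Python are enough to decide whether a point belongs to the set, and the escape time doubles as the color information described above; this is a toy sketch, not how any particular renderer works.

def escape_time(c, max_iter=100):
    """Iterate z -> z*z + c from z = 0; return how many steps it takes to
    escape the radius-2 disk (points that never escape are in the set)."""
    z = 0
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n          # the escape step doubles as a color index
    return max_iter           # treated as "inside the Mandelbrot set"

# Sample a small grid of the complex plane and print a crude ASCII view.
for im in range(10, -11, -2):
    row = ""
    for re in range(-20, 6):
        row += "#" if escape_time(complex(re / 10, im / 10)) == 100 else "."
    print(row)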
This really brings into question what we consider to be art. The Mandelbrot set certainly looks like art. It has the sort of themes that artists discuss. It has a wide range of colors that are well balanced across the image. It contrasts symmetry with asymmetry. It compares chaos and order. It has structure, but also seems messy — almost rebellious. One moment it looks lifeless, but the next endlessly organic. This piece can certainly be analyzed and, perhaps much more crucially, appreciated as a work of art.
But maybe that is not good enough. Maybe you need inspiration and intention and emotion in a work of art. Maybe you need the artist to feel deeply and to cause the viewer to feel just as deeply. However, perhaps that is exactly what is happening in the Mandelbrot image. How is the inspiration of the mathematician any different from that of the artist? How are the intention and determination of the artist different from the meticulous calculations of the mathematician? How are the deep feelings of a mathematician scribbling notes in a dark corner of a university office different from the deep feelings that you feel when you look at the image that is the culmination of the work? The same goes for the computer scientist using artificial intelligence to create art with DeepDream. The computer scientist and the mathematician are indeed artists.
Hans Holbein: "The Ambassadors"
It is also important to consider the fact that art and the use of technology to make art have gone hand in hand for millennia. The development of paints is what allowed the first artists to try their hand at painting bison and horses in some of the oldest extant paintings in the world, deep in the Lascaux caves of France. Five hundred years ago Hans Holbein used lenses and mirrors to project a distorted image of a skull onto a canvas so that he could paint the eerie outline. But in neither of these cases were science, technology and art considered mutually exclusive. At their core is the same unifying principle: innovation. This spirit of innovation is what unites art and science through the use of technology.
When taking all of this into consideration, it is important to remember that art is not simply something that an artist makes. Instead, we should think about it as an expression of beauty that can be appreciated regardless of who produced it or what media was used in its creation. The media is where science and art meet. It is the place where the hostile brothers embrace. The media is a form of technology that allows a person to creatively and artistically express themselves. As we move deeper into our still nascent technological age, remember that it is not the media that we use, but the passion with which we create our art, that truly matters.
A Reconciliation of Art and Science
188
a-reconciliation-of-art-and-science-12994cd00b2f
2018-05-29
2018-05-29 14:23:29
https://medium.com/s/story/a-reconciliation-of-art-and-science-12994cd00b2f
false
1,457
A non-linear space of fractal branches and connections, chaos, complexity, art, life, learning, and everything else.
null
null
null
FractaLife
fractalife75@gmail.com
fractalife
FRACTALS,SCIENCE,EDUCATION,LITERATURE,PHILOSOPHY
RADfract
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Rob Scholle
Cursorily exploring personal philosophy
b0d23e54915a
robscholle
11
3
20,181,104
null
null
null
null
null
null
0
null
0
4bbe3e86ca37
2017-10-13
2017-10-13 16:07:18
2017-10-16
2017-10-16 12:01:29
1
false
en
2017-10-16
2017-10-16 12:01:44
3
129998654608
1.090566
0
0
0
By Recode
5
TLDR: General Motors is vertically integrating as it moves deeper into self-driving cars
By Recode
This summary is provided by Annotote, a network that's the most frictionless transmission mechanism for your daily dose of knowledge. Have a minute? Get informed. All signal/no noise is only a click away: Try Annotote today!
General Motors is vertically integrating as it moves deeper into self-driving cars
GM is now one of the only carmakers or autonomous vehicle developers that owns the whole supply chain… By Recode >>> Annotote
"GM is now one of the only carmakers or autonomous vehicle developers that owns a good portion of the major components of the self-driving supply chain: The car itself, the self-driving 'brain' (via its 2016 acquisition of Cruise), a key part of the 'eyes,' as well as the service layer, a proprietary ride-hail network."
"Bringing lidar production in-house has become especially important as self-driving players scramble to meet impending deadlines to publicly deploy autonomous vehicles. Today … there are few that are mass producing lidars quickly enough and affordably. As a result, the industry largely relies on one firm, Velodyne."
Annotote, the network where one man's annotation is another man's summarization.
"[Kyle Vogt, GM-owned Cruise CEO:] 'by collapsing the entire sensor down to a single chip, we'll reduce the cost of each LIDAR on our self-driving cars by 99%.'"
These highlights provided for you by Annotote. Leave your mark now!
TLDR: General Motors is vertically integrating as it moves deeper into self-driving cars
0
tldr-general-motors-is-vertically-integrating-as-it-moves-deeper-into-self-driving-cars-129998654608
2018-05-30
2018-05-30 05:53:55
https://medium.com/s/story/tldr-general-motors-is-vertically-integrating-as-it-moves-deeper-into-self-driving-cars-129998654608
false
236
today's must reads, summarized for you
null
annotote
null
Annotote TLDR
anthony.bardaro@applomb.net
annotote
NEWS,SOCIAL MEDIA,APPS,TECH,ECONOMY
annotote
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Anthony Bardaro
"Perfection is achieved not when there is nothing more to add, but when there is nothing left to take away..." 👉 http://annotote.wordpress.com
c79a365c5ac1
AnthPB
1,038
102
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-20
2018-01-20 20:53:01
2018-01-20
2018-01-20 21:25:36
1
false
en
2018-01-20
2018-01-20 21:25:36
6
129a577452ad
2.226415
1
0
0
Why am I investing energy in Hadron versus the thousands of other startups and altcoins out there? I chose Hadron due to its vast…
5
Hadron. The in Browser Mining and Blockchain A.I. That Could Change the World
"Blockchain Artificial Intelligence, Mined in Your Web Browser" — The White Paper
Why am I investing energy in Hadron versus the thousands of other startups and altcoins out there? I chose Hadron due to its vast difference from every other altcoin on the market: Artificial Intelligence. Imagine a world that could arise from thousands of idle machines turned into compute for a neural network that helps cure cancer. This isn't just a currency, it's an A.I. neural network.
As stated in their White Paper: "95% of business leaders expect their enterprises to utilize AI, yet 91% of businesses also report significant barriers to AI implementation, with a lack of IT Infrastructure being the top problem (Teradata, 2017). Hadron bridges these shortfalls by pooling unused resources to sell on an automated free-market exchange."
AI is an incredibly fast-growing market, as thousands of companies rely on AI to scan, screen, tag, and organize. Hadron uses A.I. computations to help in the real world as you mine, so rather than sucking power out of your wall grinding through hours of the same hashing algorithms like Bitcoin and the hundreds of minable altcoins out there, you will be processing in an A.I. neural network to assist in real-world situations.
Hadron at the moment is in an Alpha phase; if you are reading this now, then you are still early. As of the date of writing, the project emerged from hiding about two weeks ago. Rather than sitting here and saying what Hadron is, or how you can get involved, I will link you here to their bitcointalk post, which explains the gist of what they do and how you can join far better than I ever could.
The community has grown by the thousands. The Telegram group is over 7,600 people, the reddit group (created by the community) over 1,000. All in less than two weeks.
A lot of information is not yet available to the public: things like tokenomics and a wallet (currently the coins are stored on your Hadron dashboard). This is because they are not yet ready to discuss these things, as the current focus is to have a smooth and solid mining platform (their main product).
The miner works on most browsers, on phones and computers alike: Google Chrome, Safari, Firefox, Chromium, and more. Not only this, but it barely uses any resources and is scalable! For example, 1 mining tab uses about 3% GPU usage on a GTX 1060 6gb, and you are able to scale this by adding more mining tabs. As mentioned before, this is in Alpha and certain devices may encounter issues, but mind you, a man has already mined with his smart mirror!
Here are some resources and social media channels you can check out to get more intel:
https://bitcointalk.org/index.php?topic=2142232.0 (The bitcointalk post)
https://hadron.cloud/ (Hadron Homepage)
Hadron White Paper
Edit description hadron.cloud
A platform for the discussion of the Hadron, a cryptocurrency driven by AI computations. *… Founded by successful entrepreneurs from Stanford and Berkeley, Hadron is the first cryptocurrency that utilizes mining…www.reddit.com
Hadron HADRON.cloud Web Mining for AI t.me (Optional referral; message @UncleCletus in the telegram chat to let them know how you found them)
Hadron Decentralized Sharing Economy hadron.cloud
Hadron. The in Browser Mining and Blockchain A.I. That Could Change the World
5
hadron-the-in-browser-mining-and-blockchain-a-i-that-could-change-the-world-129a577452ad
2018-05-16
2018-05-16 18:10:42
https://medium.com/s/story/hadron-the-in-browser-mining-and-blockchain-a-i-that-could-change-the-world-129a577452ad
false
537
null
null
null
null
null
null
null
null
null
Mining
mining
Mining
4,410
Todd
Hi all! Welcome, I am new to this writing thing so please forgive me. I hope to have a good time here!
150f94d716d4
AvocadoAtLaw
4
80
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-07
2018-03-07 21:08:40
2018-03-08
2018-03-08 11:55:50
2
false
en
2018-03-08
2018-03-08 11:55:50
9
129a6d1a6523
5.643711
4
0
0
The [opportunity]
5
Make Tech Inclusive
The [opportunity]
Image from: https://imgur.com/gallery/LbSnPc3 (2013)
"No one is born hating another person because of the colour of his skin or his background or religion" — Nelson Rolihlahla Mandela
In my last post, I introduced some important concepts in my research. I introduced ten factors contributing to algorithmic bias and highlighted where design begins to add value. Today I would like to introduce three case studies that highlight the current market climate around Ai ethics and the opportunity to connect consumer trust with the Ai community's growing ethical conscience. If you haven't seen my previous posts about my project on algorithmic bias, click here.
Hardware is "hard"
I recently pitched an unrelated product of mine at the Imperial College Enterprise Lab VC evening. While waiting for my 6 min and 40 s, I listened to one member of the audience ask an entrepreneur who had pitched a piece of hardware whether or not he could achieve the same product in a software state. I pondered the question, and it made me realise how globalised the business marketplace is — no doubt the investor would have been more interested if the product was software based. The economic return-on-investment incentive is much higher, of course, because a software product would be accessible to anyone with a computer and an internet connection in whatever country they live in.
In the field of design, context is everything. Without it, design is useless. Case in point: if we make and sell global, we must design global. Else we must segment our product and ensure its use is confined to the context for which it is intended.
Case Studies
Technology has a long history of being non-inclusive. One does not need to look very far to find examples. Take, for example, the colour bias in film that was most prevalent during the 60's and 70's. Back then, in that context, Hollywood was almost entirely dominated by white actors and actresses, and the technology, i.e. the cameras used in film, was designed to accentuate their skin tones. At that time, a lot of effort was put into designing camera chips to process the colour of light skin to the best possible quality. The same attention or priority, however, was not given to dark skin. If one looks back at those films, it is easy to notice the disparity in image-rendering ability; see fig. 1. It could be argued that it was simply the consensus back then, and the people who built the technology for the industry were simply designing for that context. Things only began to change when furniture brands that paid for product placements in television commercials started questioning the fact that cameras were not able to pick up the different types and shades of their woods.
fig. 1
With the new incentive, camera manufacturers started redesigning their chips to capitalise on the opportunity to be the best at rendering dark colours, and so began the race between manufacturers to correct the colour disparity. The ones that got it right were quickly endorsed and used by the likes of Oprah Winfrey and other prominent black stars. Unfortunately, much of the disparity still persists in imaging tech today. From personal experience, I recall Skyping a South African friend of mine in 2009 and having issues with his laptop camera's ability to capture his complexion in low light. He has quite a dark skin type and I could barely see him even though the screen light was reflecting off his face.
I would like to share two interesting cases of algorithmic bias, both with very different effects.
In 2016 the first international beauty contest (beauty.ai) was judged by a machine. The organisers were hoping not only to pioneer the application, but also to get the most objective choice of winners. The machine aimed to use objective factors such as wrinkles and facial symmetry to identify the most attractive contestants. There were roughly 6,000 entries. The competition, however, ended with 44 winners, most of whom were white, with the exception of a few Asians and one black African. Why did this happen? This is a clear example of algorithmic bias, and it happened mostly because the data that was used to train the deep learning algorithm did not include enough people of other ethnicities. After all, machine learning is simply an optimisation tool. If it was taught that Caucasians are the most attractive, of course it is going to select them as winners. The onus, however, is on the people in charge of the algorithm design. They failed to realise that the training data (the data used to teach a machine learning algorithm what the desired outcome should be) was not inclusive and did not represent the wide variety of people that one would expect to apply for such a beauty contest.
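One practical lesson from this case: audit the composition of the training data before training anything. A hypothetical sketch of such a check (the records and the label field are invented for illustration):

from collections import Counter

# Hypothetical training records for a face-scoring model; in practice these
# labels would come from your dataset's own metadata.
training_data = [
    {"id": 1, "ethnicity": "white"},
    {"id": 2, "ethnicity": "white"},
    {"id": 3, "ethnicity": "asian"},
    {"id": 4, "ethnicity": "black"},
    # ... thousands more rows
]

counts = Counter(row["ethnicity"] for row in training_data)
total = sum(counts.values())
for group, n in counts.most_common():
    print(f"{group}: {n} samples ({n / total:.0%})")
# If any group you expect among your users is badly under-represented,
# collect more data or re-weight before training: the model can only
# reproduce what it has seen.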
The other example is from 1999 and far less benign; it involves an automated system that was used in California to identify dads who had been skipping out on their child support payments. Unfortunately, the system misidentified hundreds of innocent men and automatically generated notices demanding payment. One man was sent a bill for $200,000. His wife, who was suffering from a mental illness at the time, opened the letter and became suicidal, thinking he had been leading a secret life.
How do we establish a consensus on ethics?
Artificial intelligence ethics policies have become somewhat of a discussion point over the past two years or so. There are numerous publications by government, academic institutions and research bodies that highlight best-practice ethical principles that should be complied with by the algorithmic design community. Of the principles and frameworks that I have examined, many address similar concepts such as responsibility, transparency, accuracy, auditability and fairness. Yet in practice, there is a clear disconnect and still much work to be done to incentivise compliance.
Ethical Principles & Frameworks reading list:
UK Cabinet office (Data Science and Ethical Framework)
Code of Standards for public sector Algorithmic Decision Making
Principles for Accountable Algorithms and a Social Impact Statement
Data for Good Exchange (Covenant Code of Conduct)
I would also like to point you to my friend Eirini Malliaraki, who has assembled a comprehensive critical reading list: Toward ethical, transparent and fair AI/ML.
Interestingly, a gathering labelled "The Data for Good Exchange", which was attended by Microsoft, Google and Pinterest employees, was launched recently by Dj Patil, former chief data scientist under the Obama administration. They discussed the notion of an oath that would be taken by Ai engineers who pledge to maintain an ethical standard, much like the oath that doctors pledge. This oath would "bind" them to go against their respective corporate and institutional economic endeavours to prevent practices such as deploying biased algorithms. This move demonstrates a willingness from within the Ai community, albeit a few, showing that people are becoming more aware of the impact that poorly designed Ai is having on society.
Above I have introduced four concepts:
Design out of context is useless
Economic incentive to drive change
Ai ethics principles that lack real-world practice
A growing willingness by the Ai community to drive ethical practice from within.
In Accenture's recent Techvision 2018 forecast, trust was highlighted as an increasing challenge in business, given the data-veracity problems that many businesses currently face. Over the next 10 years, ethics will no longer be a nice-to-have, but a fundamental pre-requisite. Among Ai's many best qualities is its unique ability to identify each and every customer as an individual, and so it is imperative that businesses ensure that they are deploying algorithms of a high ethical standard if they wish to remain competitive in the future marketplace. This is where my current focus lies, and where I aim to intervene.
In my next post, I will discuss my intervention, and how I plan to make use of the four concepts above. My intervention aims to ensure the alignment of ethics across the individual, the consensus and business. If you have made it this far, I would like to thank you for your attention. I am as always open for discussion, constructive criticism and suggestions. Please clap if you enjoyed the read, and comment here. Oh, and by the way, I am trying to reduce my reliance on Facebook.
Sincerely,
Mikhail Wertheim Aymes
My website: www.wertheimaymes.co
Make Tech Inclusive
58
make-tech-inclusive-129a6d1a6523
2018-03-10
2018-03-10 07:50:48
https://medium.com/s/story/make-tech-inclusive-129a6d1a6523
false
1,394
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Mikhail C. A. Wertheim Aymรจs
Innovator / Designer / Algorithmic Ethics Advocate
a2f3ae49a641
mikwaymes
56
65
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-10
2018-07-10 07:37:52
2018-07-10
2018-07-10 07:41:17
2
false
en
2018-07-10
2018-07-10 07:49:32
1
129b39b988e3
7.13805
1
0
0
Driven by technological and legal changes, how far can the "gig economy" go?
3
If companies had no employees. Run, TaskRabbit, run: July 2030
Driven by technological and legal changes, how far can the "gig economy" go?
THE EMAIL that landed in Eva Smith's mailbox at 7pm on Friday October 13th 2028 had the ominous subject line "Changes". Ms Smith, a director at a private-equity firm in New York, opened it with trepidation. "Dear team," it began, "You have probably heard rumours that we are shaking up the way we work at Innovation Investment Management. We will be transitioning to a new model." All jobs below C-suite level are to be reclassified. All those impacted will no longer be employees of IIM. Instead you will work for IIM on a contract basis. This change sounds scarier than it really is. It holds great benefits both for you and for IIM. The company will be able to respond more nimbly to a rapidly changing marketplace. We hope that you will continue to perform services for IIM on a contract basis, but you will also have the opportunity to work and earn elsewhere. If you have any questions, please ask Irma, our human-resources chatbot.
To begin with, Ms Smith did not notice much difference in her relationship with IIM. She was already working on a deal, and while doing so she moved seamlessly from her full-time, permanent position to a fixed-term contract. (Her hourly rate went up by 20%, but she became responsible for her own pension and health insurance.) Ms Smith had hoped to be involved in the next deal. But she then learned that someone with a PhD in engineering from Harvard had got the contract for that gig, not her. "It's not personal; they have the perfect skill-set for this deal," said IIM's boss.
Ms Smith's experience is increasingly typical. During the 2020s companies across the rich world began to rely more heavily than ever on outsourced, temporary workers assigned via digital platforms. TimeToCare, a platform known as the "Uber for social care", organises 90% of the in-home elderly-care visits in America. Workers from autonomous-taxi mechanics to retail assistants to flight attendants have jobs assigned on a daily or weekly basis through online exchanges that match firms with contractors. McDonald's, a fast-food company, has taken things the furthest, outsourcing 100% of its restaurant jobs. Servers, cooks and cleaners at McDonald's are no longer employees of the firm or its franchisees, but bid for positions at the till on an hourly basis through TaskRabbit, an online labour platform. "The First Fortune 500 Company With No Employees," trumpeted Fortune, a business-news service, in its profile of the firm published in 2029.
It is all a stark contrast to the way work was organised in the second half of the 20th century. Back then, businesses were fairly self-contained operations. Most functions were completed in-house by permanent, full-time employees. Many people worked for only one or two companies during their careers. That arrangement had a business logic. As Ronald Coase, an economist, argued in the 1930s, it was usually cheaper for firms to have someone there at all times, and to direct them by fiat, than to negotiate and enforce separate contracts in the open market for every task.
In the 1980s, however, the Coasean model began to be challenged by a new way of working.
As shareholders encouraged companies to focus on their core competencies, firms outsourced certain roles — cleaning, accountancy, branding — to specialist providers. During the 1990s outsourcing fever swept through the business world. Charles Handy, a management guru, spoke of the "shamrock organisation", which he defined as a "core of essential executives and workers supported by outside contractors and part-time help."
For years, however, the shamrocks struggled to flower. Outsourcing ran up against technological limitations. Firms could not know for sure that they would be able to find the right sort of labour in the open market as quickly as it was needed. Companies were thereby forced to hold on to many employees who were not really central to their business. That arrangement suited many workers, who preferred the stability of permanent employment to the alternative of flitting between short-term contracts, which they would also find difficult to organise. But that all changed around 2010, with the rise of gig-economy platforms such as TaskRabbit, PeoplePerHour and Expert360, capable of quickly and seamlessly matching workers with employers. Ratings given by previous clients provided a way to assess quality. This enabled further chunks of firms' activities to be outsourced. The gig economy started small, but within a decade it was growing rapidly; its poster-child was Uber, a ride-hailing service. In 2018 roughly 1% of workers were listed on at least one labour platform; by 2028 that figure had risen to 30%. More and more companies are starting to look like IIM.
Two factors explain the boom in gigging. The first is changes to the law. For years the gig economy struggled against repeated legal challenges. In many cases, courts found, gig-economy workers were being classified as self-employed when they were really employees. (This meant workers were being denied things like minimum-wage protection and sick pay.) In 2020 FindMeChef, a platform linking cooks with restaurants, lost a ruling before an employment tribunal in Seattle, brought by a worker who had worked on a "temporary" basis for a client for an entire year. FindMeChef had to pay millions of dollars in back-pay and other benefits to its chefs. And Uber lost case after case in employment tribunals around the world, which forced it to stop classifying its workers as independent contractors in some countries.
Amid such setbacks, gig-economy companies argued that governments ought to be on their side. They pointed out that gig work could be an important route into the labour market for the unemployed, and should therefore be encouraged, not regulated out of existence. In America the platforms lobbied furiously for the creation of a new category of employment, somewhere between self-employment and employment. Known as "dependent contractor" status, the third category would give workers the flexibility of self-employment but with entitlement to some workers' rights, such as sick pay. President Donald Trump heeded the call. In 2020 he introduced a package of labour-market reforms which provided for the introduction of "dependent contractor" employment status. The package was backed by Republicans as a way to free companies from red tape, and by some Democrats as a way to guarantee some basic rights to gig-economy workers. On the day the reform was announced, the share prices of the big online-labour platforms jumped. Other countries soon followed suit.
High-unemployment countries in Europe saw deregulation as a way to boost jobs. Others hoped it would attract foreign investment.
The second big driver behind the gig-economy boom has been technology. Progress in artificial intelligence (AI) has made finding the right worker for a discrete task quicker and easier than ever, because modern AI systems can look past crude ratings systems and use a range of signals to determine whether a candidate is a good fit. Since 2026 LinkedIn, a professional-networking service, has offered a guarantee that it can find a suitable worker for any task within six hours — and, thanks to a deal with Uber, can ensure that they are on-site within one working day.
All this has given outsourcing a new lease of life. The latest wave of functions being contracted out includes administrative work, marketing and training. And some firms, like IIM, are going even further, shedding employees who perform core operations and rehiring them as short-term contractors to do specific tasks.
True, outsourcing has not always gone well. In December 2028 an attempt by a group of American hospitals to use on-demand doctors led to a shortage of staff over Christmas, when many decided not to work even though "surge pricing" had bumped up their hourly rate. Some companies report that morale among contracted workers is low, because they do not feel part of a team. Others worry that some of the "tacit knowledge" that employees gain through working at a business full-time — the culture of a firm, say, or how to approach a particular boss — is lost.
But companies that embraced the shift away from having employees have reaped big gains. They no longer need to pay people to be in the office when demand is slack. They can find the worker with the perfect skills for a task, not just someone willing to have a go. Because individual workers' output is finely measured, and their proficiency at completing a task becomes part of their online profiles, no one can be lazy and get away with it. Productivity growth, which had stagnated in the rich world after the financial crisis of 2008-09, has accelerated since the mid-2020s.
Many workers have also benefited. For those with sought-after skills, it can be far more lucrative to flit from contract to contract than to work for a single firm. After a bumpy start, Ms Smith now earns more than she did as an employee. She checks Expert360 and LinkedIn three times a day, playing off rival bidders for her labour against each other. Alongside on-and-off work for IIM, she consults for other investment firms, writes articles and offers lifestyle coaching.
Workers without such valuable skills, however, are not doing nearly as well. The biggest problem stemmed from the 2020 labour reform. Dependent contractors working through online platforms, unlike employees, are not entitled to a minimum wage. It is difficult for trade unions to organise workers who are highly dispersed. Automation is also reducing the overall demand for low-skill labour. Having a pool of workers always available makes the gig economy operate efficiently, but limits workers' bargaining power. In real terms, wages at the bottom of the income distribution have now stagnated for two decades. Such workers cannot afford to contribute to pension pots; health-care coverage has also fallen. Concern over the potential long-term hit to the public finances has led to calls for more regulation of the gig economy.
In America, the Democrats want to undo the 2020 reform and extend minimum-wage legislation to more people. But the gig economy has a powerful logic. In 1937 Coase famously asked, "why do firms exist?" Nearly a century later, as technology makes it ever easier for them to disassemble their enterprises, more and more managers are asking the same question.
More of The Economist's articles for free HERE
If companies had no employees. Run, TaskRabbit, run: July 2030
1
if-companies-had-no-employees-run-taskrabbit-run-july-2030-129b39b988e3
2018-07-10
2018-07-10 07:49:32
https://medium.com/s/story/if-companies-had-no-employees-run-taskrabbit-run-july-2030-129b39b988e3
false
1,790
null
null
null
null
null
null
null
null
null
Sharing Economy
sharing-economy
Sharing Economy
7,071
The Economist Access
null
53e8e1901905
economistaccess
6
4
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-28
2018-09-28 12:38:29
2018-07-25
2018-07-25 00:00:00
1
false
en
2018-09-28
2018-09-28 15:44:54
7
129b73d02425
3.230189
0
0
0
The annual Gartner top strategic technology trends are an oft-impatiently awaited list of anticipated shifts that Gartner analysts expect…
4
How Gartner's top predictions for 2018 held up
The annual Gartner top strategic technology trends are an oft-impatiently awaited list of anticipated shifts that Gartner analysts expect to see in the market in the coming year. We have taken a look back at this year's trends and compared them to the top keywords from the last 100 articles (roughly one month's worth of articles) published on CMSWire. While this isn't an academically conclusive picture, here are some interesting findings:
1. AI less of a topic than anticipated
Gartner has consistently cited AI as a growing trend for a number of years, but "AI" and "Intelligence" are both words used less often on CMSWire than the Gartner trends suggest. In terms of the hype cycle, AI falls somewhere in the disillusionment phase right now. This objectively makes sense: AI is incredibly complex, yet all of its current applications are narrow, and the projected benefits are mostly theoretical at best. If one considers CMSWire to have its finger on the industry's pulse, then it can be concluded that readers are not looking for insights into artificial intelligence solutions or practices. This can of course change rapidly, but at least for the moment it seems the only ones betting big on AI are vendors.
2. Experience is making a mark
Whether CMSWire's Digital Workplace Experience Summit had a large influence, or "employee experience" is simply the phrase on everyone's lips, it's clear that Gartner understated the need for new technology not only to be better at something, but to do so whilst being part of a bigger picture. This is an interesting insight into user adoption; people are tired of using yet another tool or website that performs one specific task, without the option for holistic integration into their wider landscape of business applications. We reckon that organizations want fewer one-trick-pony tools, and more applications that "play nice" with other existing applications through standards like RESTful APIs.
3. Everything is digital
The importance of "digital" seems to be universal. It's not surprising — digital is the new word for everything people are longing for in their applications: fun, borderless, automated, integrated, intelligent; in short — digital. Not that many organizations are analogue in 2018; or do you still use a fax and dial-up modem? But let's not digress…
4. Conversational UIs
Conversational UIs, or AI-based conversations between human and data, are a futuristic, Jetson-like dream. The fact that Gartner predicts this to be a trend is pioneering and unequivocally where the future is headed (Now Assistant is betting big, too, with integrations into Alexa, Cortana and Google Assistant). However, beyond single-purpose chatbots, the industry seems to be struggling to catch up. And given how utterly simplistic current AIs are, it's not all that surprising that people aren't yet wondering how to turn their pizza-ordering bots into AI colleagues.
5. "Management", "marketing" and "customers" are prominent issues
Businesses want to make money, unsurprisingly. So how to manage top technology that costs top dollar is of concern to them. Given the perceived pace of digital transformation, which is sweeping across the globe at break-neck speeds, attaining ROI on their technology investment could be a major question mark for organizations over the next 12 months.
How can your organization invest in AI tech in such a way that it effectively leverages the first advances into the field, whilst keeping one eye on your bank account? The proof will be in the pudding in 2019 — but while Gartner put its focus on new technologies, it's clear that they didn't consider how much organizations will want to future-proof their investment.
6. Error 404: These trends are not found
Blockchain, AR (Augmented Reality), IoT (Internet of Things), Digital Twin: all these trends are no doubt exciting and still of strategic relevance for the future, but so far they have not translated into much of a discussion point for everyday organizations. This may yet change in 2019, but in 2018 these trends were non-starters.
There you have it; a quick insight into the most obvious differences between Gartner's predictions for the year and the issues the industry is really buzzing about. This doesn't mean things can't go another way, though; it's important to remember that small changes culminate in a bigger overall movement towards future technology. To finish, here's a quote from Bill Gates which hopefully can serve as some food for thought:
We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten. Don't let yourself be lulled into inaction.
Work smarter, not harder with Now Assistant, the AI-powered digital assistant. Are you ready to start your digital transformation journey?
Originally published at www.adenin.com on July 25, 2018.
How Gartner's top predictions for 2018 held up
0
6-gartner-predictions-for-the-year-and-how-they-held-up-129b73d02425
2018-09-28
2018-09-28 15:44:54
https://medium.com/s/story/6-gartner-predictions-for-the-year-and-how-they-held-up-129b73d02425
false
803
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Henry Amm
Self-professed Digital Workplace expert. VP @adenin.
8059006f5182
HenryAmm
72
151
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-20
2018-09-20 07:03:03
2018-09-20
2018-09-20 07:14:21
1
false
en
2018-09-20
2018-09-20 07:14:21
2
129e86bde4a2
3.086792
0
0
0
While many businesses are still on the journey to the cloud, a new path towards business intelligence (BI) is emerging for many.
5
The business intelligence journey
Like the journey to the cloud, the path to business intelligence requires some pre-work to lay the foundation upon which your business intelligence system will be built.
While many businesses are still on the journey to the cloud, a new path towards business intelligence (BI) is emerging for many. Before detailing what the BI journey looks like, it's important to understand what is meant by business intelligence. The proliferation of data visualisation tools has provided a new way to gain insights into the information that has been trapped inside Excel spreadsheets and legacy information systems. However, these tools have failed to give the full picture, because many organisations have only been able to use them to visualise siloed business functions like finance or marketing. When organisations want a complete picture across the business, the full range of data black spots begins to emerge.
BI is about spotting a dip in revenue and then having access to comprehensive sales data to dig into the underlying cause. It's about being able to track sales and service data to find the reason why your NPS score dipped in a particular month.
Like the journey to the cloud, the path to business intelligence requires some pre-work to lay the foundation upon which your business intelligence system will be built.
Digitisation of processes
So much data still resides in Excel-based systems that are siloed from organisations' larger information management systems. Combine this with legacy software that lacks an application programming interface (API), and you start to get an understanding of the work that needs to be undertaken before any type of business intelligence is possible. A great first step is to audit which workflow processes involve Excel or non-API-connected systems, then start the process of implementing cloud software solutions that can support the input and management of this data. Once the data is housed in a structured software system, it can then be accessed via an API to form part of your BI system.
System integration
Once you've built a place for all of your data, it's time to integrate the systems and workflows to ensure the data is kept updated with the least amount of human effort possible. While having all information systems in the cloud is great, data is rarely in one unified software package. CRM data, e-commerce data and financial data may all be in different systems. But building a complete picture of your customers means that these systems need to be integrated via an API to ensure data is consistent and relevant to the decision-making process. Integration also means that the data is automatically sent from one system to another, ensuring it is readily available, current and of a high quality.
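To make that integration step concrete, here is a hedged sketch of the pattern: pull records from one cloud system's REST API and land them in a staging table, ready for the warehouse. The endpoint, field names and table are all invented for illustration, and a real cloud warehouse client would take the place of sqlite3.

import sqlite3
import requests

API_URL = "https://example-crm.invalid/api/v1/customers"   # hypothetical endpoint
API_KEY = "..."  # supplied by your CRM vendor

def fetch_customers():
    """Pull customer records from the (hypothetical) CRM REST API."""
    resp = requests.get(API_URL, headers={"Authorization": f"Bearer {API_KEY}"})
    resp.raise_for_status()
    return resp.json()["customers"]

def load_to_staging(rows, db_path="warehouse.db"):
    """Upsert rows into a staging table; a cloud warehouse works the same way."""
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS stg_customers
                   (id INTEGER PRIMARY KEY, name TEXT, nps INTEGER)""")
    con.executemany(
        "INSERT OR REPLACE INTO stg_customers (id, name, nps) VALUES (?, ?, ?)",
        [(r["id"], r["name"], r.get("nps")) for r in rows],
    )
    con.commit()
    con.close()

if __name__ == "__main__":
    load_to_staging(fetch_customers())   # schedule this; integration = automation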
A data warehouse will be unique to each organisation, and its design will be dictated by the type of data, how often it needs to be updated and the structure of the information that needs to be stored in the data warehouse.
Data Visualisation
Once data is being automatically sent to a data warehouse, we can start to produce the visualisations that will help to build a data-driven organisation. The data model that was developed for the data warehouse will enable comparison and analysis of information from each of the information systems. The challenge is to leverage this unique data model and find the appropriate visualisations that allow users to quickly understand what the data is showing them, so they can take the best course of action. A great place to start is replacing board reports with a series of dashboards. Generally, the information needed to satisfy board reporting will require data from all areas of the business. Once this reporting process has been refined, dashboards can be built for the next level of management, before continuing to drive this data down through the organisation, giving everyone permission-controlled, tailored access to the organisation's data.
Go start your Business Intelligence Journey
These types of initiatives require a continuous improvement approach to realise their true benefit to the organisation. So develop a strategy, get the right people involved and take the first step. Michael Macolino, Senior Manager, BDO Business Advisory
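To make the board-report example concrete, here is an illustrative warehouse query in the spirit of the article, assuming hypothetical sales and nps_responses tables; the schema is invented for the sketch:

import sqlite3

conn = sqlite3.connect('warehouse.db')

# Monthly revenue alongside average NPS, so a dip in one can be read against the other
query = """
WITH monthly_sales AS (
    SELECT strftime('%Y-%m', order_date) AS month, SUM(amount) AS revenue
    FROM sales GROUP BY month
),
monthly_nps AS (
    SELECT strftime('%Y-%m', response_date) AS month, AVG(score) AS avg_nps
    FROM nps_responses GROUP BY month
)
SELECT s.month, s.revenue, n.avg_nps
FROM monthly_sales s JOIN monthly_nps n ON n.month = s.month
ORDER BY s.month;
"""
for month, revenue, avg_nps in conn.execute(query):
    print(month, revenue, avg_nps)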
The business intelligence journey
0
the-business-intelligence-journey-129e86bde4a2
2018-09-20
2018-09-20 07:14:21
https://medium.com/s/story/the-business-intelligence-journey-129e86bde4a2
false
765
null
null
null
null
null
null
null
null
null
Big Data
big-data
Big Data
24,602
BDO Australia
Established in 1975, BDO is one of Australiaโ€™s leading accountancy, tax, advisory and business consulting firms.
adc92abe3067
bdoaustralia
0
1
20,181,104
null
null
null
null
null
null
0
null
0
fc0309257833
2018-07-13
2018-07-13 15:24:37
2018-04-25
2018-04-25 18:26:26
0
false
en
2018-07-16
2018-07-16 17:20:27
106
129f0e58bbbc
19.018868
0
0
0
null
5
Artificial Intelligence
A primer on the technology poised to be the engine of the future economy.
Note: This is one of many primers on emerging opportunities built for weststringfellow.com alongside hundreds of resources to help you grow your career and company.
How This Primer Is Organized:
Essential Reading: These are some of the best resources on the topic. They range from covering the history of AI to looking at the applications that could change the world.
Glossaries: The main purpose of this resource is to get more people involved in conversations about AI. Jargon is a barrier that prevents the cross-discipline conversations that are the origin of great ideas.
News Sources: These are some of the sources that cover the day-to-day shifts within the technology. Everyone consumes news in a different way, so these resources represent diverse approaches. Scan through the list and subscribe to a few to keep up to date on cutting-edge advancements.
Podcasts and Immersive Media: To really understand a complex topic, it's important to understand the culture surrounding it and to hear experts discuss it. These resources provide an avenue to listen to conversations with experts and learn about the topic without diving through mountains of text.
Market Perspectives: These transformative technologies are poised to radically change business models. These changes go beyond the mainstream disruptive effects on media and training.
Startups: These companies provide a window into what's possible within AI.
Communities: Communities related to AI or ML to help you get started, network or push your knowledge further.
Essential Reading
These are some of the best resources involving AI, ML and computer science principles in general. They include both academic and theoretical approaches in addition to more practical ones for those looking to get into AI.
Approachable Foundations
Some may say it is difficult to go deeper into AI without academic rigor (though we are seeing more approachable texts, methodologies and general information). These are the two main recommended texts, in order:
Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig
Deep Learning by Ian Goodfellow and Yoshua Bengio
In addition, the following can also be extremely beneficial:
Paradigms of Artificial Intelligence Programming http://norvig.com/paip.html
https://mitpress.mit.edu/books/reinforcement-learning (one of the most active areas of research)
Any workbook on TensorFlow, PyTorch, Keras, scikit-learn etc. for practical lessons
Deep Learning
For deep learning: https://mitpress.mit.edu/books/deep-learning (especially given the trend and push in the industry toward deep learning)
Reinforcement Learning
For reinforcement learning there are quite a few options, and this is a good one to start with: Reinforcement Learning: An Introduction: https://mitpress.mit.edu/books/reinforcement-learning
Please note that this book is expensive: Reinforcement Learning: State-of-the-Art by Wiering and van Otterlo - http://www.springer.com/us/book/9783642276446
*Also, as a note, any workbook or tutorial on AI programming that involves Python is highly recommended. Python is a crucial tool to understand in the industry, and it is consistently being used more to build AI, ML and Data Science projects (look at recent tech such as PyTorch).
Glossaries
Knowing the key terms for artificial intelligence can be a challenge, since AI can now be applied so broadly.
The Ultimate Glossary of Artificial Intelligence Terms by Phrasee
Glossary of Terms in Artificial Intelligence by The Windows Club
The AI Glossary: A Data Scientist's No-Fluff Explanations for Key AI Concepts by Mighty AI
Definitions pulled from the Mighty AI link above include:
Artificial General Intelligence (AGI) or Strong AI: A term for a hypothetical computer system able to learn, reason, and solve novel problems as a human can, or possibly even better. Not limited to one specific task, an AGI would be a true machine intelligence, capable of original thought. Nothing like this exists in the world today, and although there is a broad sense that such a thing should be possible, nobody has any idea how to even begin creating such a thing despite decades of research.
Classification: A kind of supervised learning task where the goal is to assign one or more labels to each input from a fixed, pre-defined set. All the examples in the training set must be labeled by humans before the system can be trained. In image classification, for example, the inputs are digital images, and the labels are the names of various objects that appear in these images ("cat", "car", "person", etc.). To train a classifier, we need to not only label our data, but first define the set of labels we will use. The examples for different labels need to be distinguishable, and each label must have a reasonable number of example occurrences in our training set. Classifier training generally works best if the different labels are roughly "balanced," that is, all have roughly the same number of examples. Popular machine learning systems for classification include neural networks, support vector machines, and random forests.
Neural Network: A particular kind of algorithm or architecture used in machine learning. Loosely inspired by the structure of the brain, a neural network consists of some number of discrete elements called "artificial neurons" connected to one another in various ways, where the strengths of these connections can be varied to optimize the network's performance on the task in question. Although inspired by the brain, it is very, very important to keep in mind that artificial neural networks absolutely do not work the same way the human brain does! The similarities are often wildly overstated in the popular press. Neurons in a neural network are organized into layers, where the output of one layer becomes the input to the next layer, until the final output is produced at the final layer. Neural networks can be "shallow" or "deep," depending on how many layers they have. The basic "feed-forward" neural network architecture has no memory; it treats every input as an independent event, without consideration for sequence or timing.
Supervised Learning: A form of machine learning in which, for every input, there is one correct output that the system is being trained to predict. All the training examples it learns from have to be annotated with this correct output by human beings before the system is trained. The system "learns" how to correctly generate outputs from inputs by looking at the human-annotated training data it is fed. Based on the human-labeled training data, the algorithm finds a mathematical way to generalize the patterns in this data and predict what the output ought to be on novel examples that no human has labeled. Classifiers are classic examples of supervised learning.
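To make the classification and supervised learning definitions concrete, here is a minimal scikit-learn sketch (one of the workbook libraries recommended above); the iris dataset and random forest are chosen purely for illustration:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Labeled examples: inputs X and human-assigned labels y from a fixed, pre-defined set
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# A random forest, one of the popular classifiers named in the definition above
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)  # generalize patterns from the labeled training data

# Accuracy on novel examples the system has never seen
print(accuracy_score(y_test, clf.predict(X_test)))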
Unsupervised Learning: A form of machine learning in which there are no pre-existing labels or outputs defined on the input training data, and the system instead "learns" whatever patterns, clusters, or regularities it can extract from the training data. Clustering algorithms are classic examples of unsupervised learning. Another is the Google Brain project of 2012, which was fed millions of frames from YouTube videos without any labeling or annotation and, by looking for common patterns, learned to recognize cat faces.
Reinforcement Learning: A form of machine learning where the system interacts with a changing, dynamic environment and is presented with (positive and negative) feedback as it takes actions in response to this environment. There is no predefined notion of a "correct" response to a given stimulus, but there are notions of "better" or "worse" ones that can be specified mathematically in some way. Reinforcement learning is often used to train machine learning systems to play video games or drive cars. The DeepMind system that learned to play Atari video games used reinforcement learning.
Convolutional Neural Network (CNN): A special neural network architecture especially useful for processing image and speech data. The difference between a normal feed-forward network and a convolutional network is primarily in the mathematical processing that takes place. Convolutional networks use an operation known as convolution to help correlate features of their input across space or time, making them good at picking out complex, extended features. However, they still treat each input separately, without a memory.
Recurrent Neural Network (RNN): A neural network architecture that maintains some kind of state or memory from one input example to the next, making it especially well-suited for sequential data like text. That is, the output for a given input depends not just on that singular input, but also on the last several input examples. There are many different recurrent architectures, but the most important now is known as the Long Short-Term Memory (LSTM) network. These can be combined with convolutional networks, too.
Computer Vision (CV): The application of machine learning to tasks involving digital images or video, such as identifying or tracking objects through a video sequence, or segmenting images into distinct objects. Convolutional neural networks are a powerful new tool widely used in computer vision.
Natural Language Processing (NLP): The application of AI to tasks involving human language, both written and spoken. NLP tasks can include both computer parsing of input natural language and computer generation of naturalistic outputs in human language. Recurrent neural networks have recently become an important tool in this area. Chatbots and voice control are applications of NLP.
Overfitting: A problem that can occur in supervised learning tasks where the system learns patterns in the training data that are too specific or are there only by coincidence, so that it performs extremely well on examples it has been trained on but loses its ability to generalize and performs very poorly on anything new. Overfitting can be caused by an overly complicated model, a limited training set without enough diversity, or by weaknesses in the training process itself.
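And as a counterpart for the unsupervised learning definition, a minimal clustering sketch (again using scikit-learn; the two synthetic blobs of points stand in for unlabeled data):

import numpy as np
from sklearn.cluster import KMeans

# Unlabeled data: two synthetic blobs of 2-D points, with no labels defined anywhere
rng = np.random.RandomState(0)
data = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

# The algorithm extracts clusters purely from regularities in the inputs
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(kmeans.cluster_centers_)  # roughly (0, 0) and (5, 5)
print(kmeans.labels_[:5])       # cluster assignments for the first five points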
News Sources:
AI has grown exponentially over the last few years, with significant companies integrating it into their projects, so the field has taken on broad terminology that can also include machine learning and deep learning. That being said, the following news sources not only help you gain knowledge across the range of AI but also let you examine concrete, up-to-date information within the discipline from a diverse set of sources:
Science Daily — ScienceDaily.com — Research-based AI news that covers a range of subjects. It's a great source if you want to find a diverse set of recent articles.
Machine Learning Mastery — Machinelearningmastery.com — This is a great source for all levels of learners, especially beginners. It also provides relevant, current information on technologies used in the industry (for example scikit-learn, Keras, etc.). This is also a blog, so it can double as a blog recommendation.
MIT News Artificial Intelligence — http://news.mit.edu/topic/artificial-intelligence2 — Another highly recommended resource, since it covers a range of subjects and has a daily email that will keep you in the loop on modern AI breakthroughs.
Harvard Business Review — https://hbr.org/topic/technology — Although you need to purchase access for full content, you can get executive summaries and recent articles online for free. Moreover, similar to the MIT publications, HBR articles tend to be quality material.
AI In the News — https://aitopics.org/search — An official publication source of the AAAI. It provides alerts, an AI magazine, classics and a brief history of AI. Publications on the site are useful, quality material.
Machine Learning Weekly — http://mlweekly.com/ — A weekly newsletter and source for relevant ML information, including topics currently important to the industry. For example, you can find deep learning, NLP and quantum computing, in addition to advice and tutorials for beginners.
O'Reilly — https://www.oreilly.com/topics/ai — As a publisher, it's expected that this source contains relevant information, but the key takeaway is that they feature a range of AI-related information, publish new articles consistently, and share highlights from industry conferences.
OpenAI — Blog/Research — https://openai.com/ — OpenAI is a non-profit conducting fantastic research. Even better are their publications and products, along with OpenAI Gym, a toolkit for building fun projects with relative ease. It's a great way to introduce new students to concepts such as reinforcement learning.
GitHub — Check the relevant repositories for AI software (PyTorch, NLTK, TensorFlow, OpenCV, etc.) or their corresponding websites. This recommendation is straightforward. Although it may not suit a beginner who isn't ready to read code, it's highly recommended to look through the repositories, and even the source code, for a higher level of understanding.
Import AI — https://jack-clark.net/ — It's not just another newsletter but a weekly production from an employee of OpenAI, and it provides great reading material.
DeepMind — https://deepmind.com/ — Acquired by Google, it's always a great idea to pay attention to what's going on at DeepMind, along with their research into deep learning.
Machine Intelligence Research Institute — https://intelligence.org/ — More math-based research, but necessary if you are looking to take on fundamental concepts from the research side.
Nvidia — https://blogs.nvidia.com/ — Nvidia is producing excellent articles on a diverse range of AI subjects. Use this source for news, practical approaches and industry trends.
Kaggle — Kaggle.com — If you come from a development or software background you may have heard of Kaggle, but it's another incredible way to explore and build (with a data science focus) from great datasets, in addition to seeing how others approached the same task. Competitions are also held on specific subjects, and you can see overlap with specific AI techniques.
Stack Overflow — Stackoverflow.com — Stack Overflow has to be mentioned, since it is one of the go-to sources for questions, debugging and discussion on these topics.
Wired AI — https://www.wired.com/tag/artificial-intelligence/ — The list wouldn't be complete without Wired. This recommendation is fairly straightforward, as Wired AI will bring you interesting and relevant industry information.
Academic sources provide research-based resources, and many are also very approachable for beginners, including the following:
Stanford AI Lab — http://ai.stanford.edu/
Carnegie Mellon University AI — https://ai.cs.cmu.edu/ (You can also list MIT, Berkeley, etc.)
Coursera, EdX, Udacity (and more) can be recommended for students who want a more hands-on approach: building fun projects, learning AI technologies or taking classes to further their learning.
Andrew Ng's blog — http://www.andrewng.org/ — Not 100% strictly a blog, but it has up-to-date information regarding Andrew Ng, which can be useful for newcomers to AI.
Siraj Raval's YouTube channel — https://www.youtube.com/channel/UCWN3xxRkmTPmbKwht9FuE5A — A great resource for those looking to get into AI, with general AI information in a practical sense. Siraj covers a range of AI/ML applications that beginners can take on.
Nathan Benaich's blog — A blog to keep an eye on, or more precisely a collection of writing by Nathan Benaich via Medium: https://medium.com/@NathanBenaich. Nathan Benaich is the organizer of the London AI meetup and has some interesting reading for all levels.
What are you doing to grow your career and company? We spent 15k hours researching best practice. Visit our website, explore hundreds of resources, and learn how to get things done.
Implications of AI:
Resources regarding how AI will affect jobs — as a note, these tend to be biased, falling into either apocalyptic or overly optimistic camps. In my opinion (the condensed version), we have tremendous benefit to gain from incorporating AI and technology in general. The key factor is to institute sound policies (economic, social, educational) that will benefit the populace, because these changes are not just coming; they are here, and they will impact every industry.
A range of sources on the possible changes AI will bring:
As Robots Rise, How Artificial Intelligence Will Impact Jobs — Forbes
How artificial intelligence will affect your job — Marketwatch
Automation and Anxiety — The Economist
The Jobs That Artificial Intelligence Will Create — MIT Sloan Review
The Future of Jobs and Jobs Training — Pew Research
What to Expect From Artificial Intelligence — MIT Sloan Review
Why AI could destroy more jobs than it creates, and how to save them — Tech Republic
As for incorporating AI into a project, one of the key rules in the world of computational intelligence is to not re-invent the wheel. These sources can help you start forming an idea of when to use AI:
9 Ways Your Business Can Plan For Artificial Intelligence — Forbes Technology Council
What You Need To Know Before Incorporating AI Into Your Business Model — Forbes Technology Council
Podcasts
Podcasts are great tools for staying informed on AI. This list includes a variety of topics and approaches within AI, ML and even Data Science. The great thing about podcasts is how easily they fit into your day. Some podcasts may appeal to your interests more than others, so examining the topics of each one and taking a listen can help you decide where to dedicate your time.
Nvidia AI Podcast — https://blogs.nvidia.com/ai-podcast/ — Like Nvidia's blog and news source, this is a well-rounded AI podcast on a range of interesting topics.
Linear Digressions — http://lineardigressions.com/ — A data-science-focused podcast, but it features relevant work being done in the industry that individuals will find useful.
Concerning AI — https://concerning.ai/ — An excellent podcast that will get you thinking about AI topics. For example, their most recent episode was the third in a series about narrow AI and AI being used for personal assistants.
This Week in ML and AI — https://twimlai.com/ — Relatively new to the podcast scene, but it has a great selection of guest speakers.
SDS Podcast — https://www.superdatascience.com/podcast/ — Although based in data science, there is overlapping work and some interesting info for AI/ML.
Startups
The following companies are great examples of how AI is starting to be utilized and applied to projects. Overall, startups are helping lead the way for tech projects incorporating AI. It's now easier than ever to get your startup off the ground or to build a prototype with AI using the resources available online. While looking through this list, it's worthwhile to keep in the back of your mind how you could apply AI to an idea, hobby or interest of your own. *You could easily pick any from this list and make a relevant connection to the industry: https://angel.co/artificial-intelligence
Veritone, Inc., AI operating system — Veritone is taking off in the AI space, using cognitive AI approaches to process large quantities of data. For example, Veritone has developed the Veritone Platform, an open developer ecosystem that combines the power of third-party cognitive engines and unlocks data from linear files like radio and TV broadcasts, police bodycam footage and call-center conversations.
Although not specifically a startup, anything Andrew Ng related (Baidu, and Deeplearning.ai) is also worth following.
Not specifically a "startup": https://deepmind.com/ — Acquired by Google in 2014, this London-based company is currently carrying out important research in the field of deep learning. They do groundbreaking work in a variety of areas; for example, they have used AI to reduce electricity costs for server rooms, an approach that can be applied to numerous industries, alongside their natural language and neuroscience publications.
The tech giants (again, not specifically startups) have to be mentioned, since they are all integrating AI for specific purposes, new products and more: Amazon, Apple, Oracle, etc.
https://orbitalinsight.com/ — Using AI approaches to process geospatial analytics.
https://aicure.com/ — AI for patient monitoring — "AiCure builds and deploys clinically-validated artificial intelligence technologies to optimize patient behavior and medication adherence. The company was founded in 2010 to revolutionize patient monitoring with the ultimate goal of reducing hospitalizations and extending life expectancy." Imagine the changes in people's lives, and the positive benefit, if we can use AI to build programs that monitor an individual's health and can predict or optimize wellness to maintain a healthy lifestyle.
https://naralogics.com/ — Naralogics is using AI to process enterprise data for real-time, context-relevant recommendations, and to give the reasons behind them. Although AI in the realm of business analytics may seem less interesting than other areas, it has a huge impact, since it can be used to run businesses more efficiently, analyze areas of the business for improvement and give you the tools to make better decisions.
https://www.bostondynamics.com/ (robotics focus) — Another "has to be included": Boston Dynamics is one of the better-known robotics companies, and they are building impressive products.
https://www.preferred-networks.jp/en/ — A company to pay attention to for deep learning. Based in Japan, they use AI (deep learning) for handling data.
https://www.arago.co/ — Frankfurt — Using AI for business process automation; straightforward (in its definition), but another one on the list to keep an eye on.
Anki — https://www.anki.com/en-us/company — Bringing robotics to the consumer level.
https://www.icarbonx.com/en/ — Health data and analysis. We see quite a few startups launching in the health space, but it's nowhere near what is needed, and this startup is using AI for processing health data and analysis.
https://www.carmatsa.com/fr/ — This startup based in France is working on the creation of artificial organs. Another one to put a * on: since the number of organ donors is lower than the demand, artificial organ creation would solve a huge problem in the world.
http://en.cloudminds.com/ — As we move toward cloud-based tech, this startup is working on building cloud robotics.
Zero Zero Robotics — https://gethover.com/about-us — Another consumer robotics company on our list, but that doesn't make it any less relevant, as they build great robotic products and push the consumer level further.
http://customermatrix.com/ — Using an artificial intelligence engine for financial services.
https://scaledinference.com/ — Using machine learning and artificial intelligence to optimize performance metrics.
https://fuzzy.ai/ — Fuzzy.ai offers AI-based APIs to carry out real-time decision making.
As we continue to progress, APIs allow individuals and companies to easily access AI models and incorporate them into their tech stacks.
http://auro.ai/ — Auro is using AI and robotics to build self-driving shuttles for campuses. This area is only getting started, and hopefully more people take an interest. Automated transportation will only bring us more benefits as individuals: easier commutes, fewer traffic fatalities and less energy waste.
https://www.scaledinference.com/ — Enhance your operations by applying AI to increase key performance metrics.
Drone-Based
Similar to autonomous vehicles, drone technology is just starting to "take flight". The applications are endless, and drones can save human lives when inspecting dangerous equipment or delivering supplies to areas affected by natural disasters. In addition, we are seeing companies bring affordable drones to market, so that those of us interested in drones can get involved and build fun applications!
https://www.betterview.net/ — Providing a solution for aerial viewing, specifically for the insurance industry. This will help cut down on claims and reduce costs.
http://www.skyspecs.com/ — Automated wind turbine inspection.
https://www.sky-futures.com/ — Industrial drone-based inspections aimed at oil and gas platforms.
https://www.cyphyworks.com/ — Drones for defense and public safety; they offer a tethered drone that can stay airborne for 9 days.
http://www.flyability.com/ — Using drones to reduce liability; these are meant for inspecting places that are dangerous or inaccessible.
http://sharpershape.com/ — Drone technologies to inspect power lines and other infrastructure within the energy industry.
https://www.aerialtronics.com/ — Received significant funding and uses IBM's Watson to detect critical flaws in infrastructure and more.
Chatbots
Chatbots may seem straightforward, but with the integration of AI they have been able to unleash exponential potential. Before AI, developers coded extensive rule sets by hand, for example: if the user says "Hello", write "Hi". This quickly becomes unwieldy. We are now able to build and deploy chatbots that analyze each part of speech in an interaction and respond intelligently, reducing the need to code basic answers. We are seeing huge shifts in the industry as companies add chatbots to their applications. You may have seen them when landing on a website and a small pop-up box opens asking if you need help.
http://mode.ai/#/menu — Using AI to power chatbots (and we have recently seen a huge shift in chatbot technologies toward integrating AI).
https://secure.logmein.com/home/en — Purchased an AI and chatbot startup last month for $50 million; it will be worth watching how they integrate it into their products.
Cybersecurity
The following companies use AI for cybersecurity purposes. As we see almost weekly, a new hack leads to the breach of sensitive financial data; incorporating AI and ML concepts is not only a good idea but necessary so we can protect our data.
https://feedzai.com/ — Fraud prevention with machine learning for financial services.
https://www.appthority.com/ — Automatically analyzes and grades mobile applications to identify risky behavior.
https://www.cylance.com/en_us/home.html — Using AI to predict and prevent cybersecurity threats.
https://www.darktrace.com/ — Using advanced mathematics and machine learning to detect anomalous behavior in companies' networks.
https://www.illusivenetworks.com/ — Named one of the WSJ's top tech companies to watch, Illusive Networks proactively deceives and disrupts in-progress attacks. This is somewhat newer, since it takes a more proactive approach (cybersecurity is usually either strictly defensive or focused on figuring out what happened after the attack).
Other
Each of these companies has a unique twist on integrating AI into their projects.
https://asktetra.com/ — Tetra uses AI to take notes on your phone calls so that you can remember everything. Sometimes it's easy to get caught up in the conversation while trying to remember important points.
http://biobeats.com/ — Uses AI to help consumers increase wellness and productivity. A nice aspect of BioBeats is that, after so much focus on technology itself, this in a sense uses technology to help us perform better as humans.
http://www.atomwise.com/ — Atomwise is the creator of AtomNet, the first deep learning technology for small molecule discovery, used by research groups for drug discovery programs. Health startups are crucial, and this is one to watch as well: using AI and ML to accelerate drug discovery will be highly beneficial.
https://www.starship.xyz/ — Building delivery robots to carry packages to consumers; they are now being tested and can be seen in real environments. This is another company to watch, since we can actually see them out in the world today.
http://bloq.com/ — Purchased Skry in February to bring AI and machine learning to blockchain. (Blockchain technologies and cryptocurrency are seeing huge growth right now.) It will be very important to watch how AI/ML is used within the crypto industry.
https://botanic.io/ — Building an interactive, versatile personality for your code.
https://www.idavatars.com/ — Building avatars with AI that are "genuinely caring avatars who can build trusting, enduring relationships."
http://www.skytree.net/ — Using ML to build predictive models faster.
http://www.mintigo.com/ — Data mining that creates a customer "fingerprint" to help you reach your prospects better.
https://www.festo.com/group/en/cms/10156.html — Applying lessons learned from nature to inspire factory and process automation.
http://exini.com/ — Cancer detection with AI. When the data is available, we are starting to see machines make more accurate predictions than the acting physicians.
http://www.pilot.ai/ — A computer-vision-based platform using AI to solve real-world problems, such as following a person (with a drone) without GPS signal, using vision detection and real-time localization via webcam. Computer vision is being adopted in autonomous vehicles, phones, cameras and much more. This area is growing consistently, and it's a fun one to get involved in!
http://gazehawk.com// — Using AI to track where users are looking on your website. This is not the typical tracking of where users click or how the mouse moves; it uses AI to follow where users look through eye tracking, a newer approach than traditional click tracking.
https://www.terravion.com/ — Aerial imagery to enhance agriculture and farming projects.
http://oriense.com/ — Using AI and computer vision for blind or visually impaired individuals. It aims to solve three main problems: obstacle avoidance, geo-navigation and image recognition.
https://www.playosmo.com/en/ — Using AI to drive creative thinking and social intelligence for children.
http://www.bluerivertechnology.com/ — Acquired by John Deere, it uses AI and computer vision to detect and identify each individual plant in a field, aiming to reduce the chemicals used to rid fields of weeds and other plants harmful to agriculture.
Communities
To reiterate, the following list includes highly recommended events involving the AI community. To get started, it's always a great idea to check your local meetups for those that are AI or ML related, to help you network or push your knowledge further. If you are interested in one, keep an eye on it for scheduling information as well as general news and publications.
Online Communities and Forums
Reddit Artificial Intelligence — https://www.reddit.com/r/artificial/
Reddit Learn Machine Learning — https://www.reddit.com/r/learnmachinelearning/
Reddit Machine Learning — https://www.reddit.com/r/MachineLearning/
Reddit Data Science — https://www.reddit.com/r/datascience/ — Further relevant material, since data science has multiple parallels and useful info for AI students. You can find more sources on Reddit, but these are a good starting point.
Conferences and Meetups
AAAI Conference on Artificial Intelligence (AAAI-17) — https://www.aaai.org/Conferences/AAAI/aaai17.php
Machine Intelligence Summit — https://www.re-work.co/events/machine-intelligence-summit-san-francisco-2017
International Conference on Artificial Intelligence and Applications (AIAPP 2017) — Geneva, Switzerland — http://www.wikicfp.com/cfp/servlet/event.showcfp?eventid=60845&copyownerid=46167
The AI Summit — London, UK — https://theaisummit.com/london/
The O'Reilly Artificial Intelligence Conference — New York, NY — https://conferences.oreilly.com/artificial-intelligence/ai-ny
SGAI International Conference on Artificial Intelligence (AI-2017) — Cambridge, UK — http://www.bcs-sgai.org/ai2017/
Deep Learning Summit — https://www.re-work.co/events/deep-learning-summit-london-2017
Association for the Advancement of AI — http://www.aaai.org/
European Association for AI — https://www.eurai.org/
International Joint Conference on Artificial Intelligence (IJCAI) — Melbourne, Australia — http://www.ijcai.org/
The Pacific Rim International Conference on AI — http://www.pricai.org/
What are you doing to grow your career and company? We spent 15k hours researching best practice. Visit our website, explore hundreds of resources, and learn how to get things done.
Artificial Intelligence
0
artificial-intelligence-129f0e58bbbc
2018-07-16
2018-07-16 17:20:27
https://medium.com/s/story/artificial-intelligence-129f0e58bbbc
false
5,040
Your biggest questions in your career and your company are answered with precise, expert-focused content.
null
null
null
West Stringfellow
do@howdo.com
west-stringfellow
INNOVATION,GROWTH STRATEGY,STARTUP,EDUCATION,INNOVATION MANAGEMENT
westringfellow
Emerging Opportunities
emerging-opportunities
Emerging Opportunities
0
West Stringfellow
Everything I know @ https://howdo.com. Led innovation @ Amazon, Target, PayPal and VISA. CPO @ Rosetta Stone & BigCommerce. Me @ https://weststringfellow.com
2d5b17336159
westringfellow
682
466
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-19
2018-03-19 01:05:40
2018-03-19
2018-03-19 01:53:15
1
false
en
2018-03-19
2018-03-19 01:53:15
5
12a0e3aca9a5
2.479245
1
0
0
This highlight of this week was the first public beta release for Optic which came out on the 15th. You can go download it here.
3
Weekly Update (March 12–18)
Getting ready to launch something big…
The highlight of this week was the first public beta release of Optic, which came out on the 15th. You can go download it here. This new version of Optic includes the following changes (all built this week):
Released documentation
Added Socket Events to send Project Status to Agents (used to display the loading page and error states)
Bumped supported optic-markdown to version 0.1.2
Fixed an issue where Optic didn't work if the App was located in a path with a space in it, i.e. 'Developer/My Apps'
Packaged Optic Markdown within the .jar so it doesn't need to be installed separately
Optic checks whether a valid version of Node is installed before starting
Optic Editor window loads faster and has the correct title
Bug Fixes:
Data directories weren't being created on first run, causing most of Optic's internal functionality to fail. This was missed because in testing we didn't wipe all Optic files from our test devices.
An issue that prevented Optic servers from shutting down after the host Mac App closed has been resolved.
Other Development:
Several build scripts written to automate deployments of new Optic binaries
Built a new page for our website: http://opticdev.com/get-optic/
Incorporated basic analytics/crash reporting in Optic (relies on Mixpanel)
Hotfix released that fixed an issue several Mac users experienced when their local installation of Node was not in /usr/local/bin/node https://github.com/opticdev/optic/commit/ef0872dc108acc62e0b0878ae707754ea7a4cfcb HUGE THANKS to Scott Barstow, a beta user who helped virtually debug this issue over 35 emails.
Finalized designs for the more advanced transformation system that will appear in the next major version of Optic
What's Next:
The next technical priority for Optic is finishing our advanced transformation system. What does that entail? Right now the Optic API can take any Schema & Lenses and generate code into your project. So a Rest Route + Express JS Route will yield the code for a route definition. This is powerful, but it doesn't support stringing together Optic knowledge from multiple sources, which really handicaps what Optic can do. For instance, we should be able to transform a DB Model into a Create Route for that Model. The advanced transformation system will enable a fully functional Route to be written with both validation and a query contained within it. These transformations will be generic and allow users to customize the way the code is rendered based on their architecture choices. So the query component in our example above would be rendered using Mongoose, DynamoDB, Sequelize, etc., depending on what you're using for your app.
tl;dr: Advanced transformations will be able to define generic patterns in code, independent of any particular language or framework. When users call upon these transformations, complex code will be rendered based on the conventions they've included in their project.
Create Route:
  - validation
  - An Insert Query:
      success -> Response(200, data)
      failure -> Response(4xx, error)
Input Wanted:
One of the challenges we're currently facing is figuring out the right update schedule/mechanism. Since Optic is not a web app, we're a little bit stuck in the past. Our dmgs are ~120MB today, and it's not practical to distribute builds of that size every day. That being said, we also want to move quickly and get great updates out to users ASAP. What are some suggestions from the community? Any tools that might help?
How often would you be willing to update Optic? We can patch changes (1–5MB in most cases) in the background without asking you. Is that cool? Please respond by commenting on this post or reaching out to us on Twitter. Again, thanks so much to all our Beta Testers. Good things are on the horizon.
Weekly Update (March 12โ€“18)
1
weekly-update-march-12-18-12a0e3aca9a5
2018-07-13
2018-07-13 05:24:19
https://medium.com/s/story/weekly-update-march-12-18-12a0e3aca9a5
false
604
null
null
null
null
null
null
null
null
null
Software Development
software-development
Software Development
50,258
Aidan Cunniffe
Founder focused on Creative-AI, Humanist, Runner and Speaker. I build tools that help people create amazing products.
bbc16443a639
aidandcunniffe
212
201
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-28
2018-05-28 11:50:49
2018-05-28
2018-05-28 11:53:58
0
false
en
2018-05-28
2018-05-28 11:53:58
0
12a1b0ba7e94
1.577358
0
0
0
The AI learning adventure explores intelligence and its connection to engineering and technology. Classic AI imparts human intelligence…
1
Artificial Intelligence
The AI learning adventure explores intelligence and its connection to engineering and technology. Classic AI imparts human intelligence into machines or technology, while Future AI is designing technology that can itself create intelligence. BulkWhiz is driven by cutting-edge AI technology that is built in-house and constantly evolving. Our laser-sharp targeting capabilities are helping us anticipate and respond to client needs. We will start laying the groundwork for understanding AI by sharing key definitions:
Artificial Intelligence (AI): the concept of having machines "think like humans" — in other words, perform tasks like reasoning, planning, learning and understanding language. While no one is expecting parity with human intelligence today or in the near future, AI has big implications for how we live our lives. The brains behind artificial intelligence is a technology called machine learning, which is designed to make our jobs easier and more productive.
Machine Learning: the core driver of AI, which involves computers learning from data with minimal programming. Essentially, instead of programming rules for a machine, you specify the desired outcome and train the machine to achieve that outcome on its own by feeding it data. For example: personalized recommendations on Amazon and Netflix. Machine learning is a broad term that encompasses related AI techniques, including deep learning, which uses complex algorithms that mimic the brain's neural networks to learn a domain with little or no human supervision. Consumer apps like Google Photos use deep learning to power face recognition on photos.
Natural Language Processing (NLP): uses machine learning techniques to find patterns within large data sets in order to recognize natural language. One application of NLP is sentiment analysis, where algorithms might look for patterns in social media posts to understand how customers feel about a specific brand or product.
Big Data: the raw fuel of AI — large amounts of structured or unstructured information that provides the inputs for surfacing patterns and making predictions.
Internet of Things (IoT): a network of billions of digitally connected devices, from toasters to cars to houses and jet engines, that collect and exchange data and can communicate with one another to better serve users.
Predictive Analytics: a branch of advanced analytics used to make predictions about unknown future events, based on patterns in historical data. You might see this in marketing offers that become more relevant to you each time you take action (or don't) on an email offer.
What else would you like to learn about AI?
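To illustrate the "specify the outcome and feed it data" idea from the machine learning and predictive analytics definitions above, here is a minimal sketch; the marketing numbers are invented for the example:

import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical historical data: weekly marketing emails sent vs. orders received
emails_sent = np.array([[100], [200], [300], [400], [500]])
orders = np.array([12, 25, 33, 48, 55])

# No hand-written rules: the model learns the pattern from the data itself
model = LinearRegression().fit(emails_sent, orders)

# Predict an unknown future event from the historical pattern
print(model.predict([[600]]))  # expected orders if 600 emails are sent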
Artificial Intelligence
0
artificial-intelligence-12a1b0ba7e94
2018-05-28
2018-05-28 11:53:59
https://medium.com/s/story/artificial-intelligence-12a1b0ba7e94
false
418
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
BulkWhiz
null
eba3f0c56bdc
a.joehnny
0
8
20,181,104
null
null
null
null
null
null
0
null
0
3cff50f50c18
2017-12-20
2017-12-20 21:47:16
2017-12-20
2017-12-20 22:02:36
3
false
en
2017-12-21
2017-12-21 20:09:33
18
12a1cc62efbb
4.648113
5
0
0
2017 has been a banner year for the blockchain industry. Along with the meteoric rise of Bitcoin, Ethereum, and other cryptocurrenciesโ€ฆ
5
PAI in the Smart City — Can Personal AI Shine in the World's First Fully Integrated Smart City?
Dubai is aiming to become the world's first fully integrated smart city
2017 has been a banner year for the blockchain industry. Along with the meteoric rise of Bitcoin, Ethereum, and other cryptocurrencies, there's been a public awakening among companies and governments regarding the potential uses of blockchain technology. Some businesses like IBM, Microsoft, and Oracle have already rolled out blockchain services in addition to their regular offerings, but no other city or government has committed as many resources to pursuing blockchain integration as Dubai has. With the blessing of Vice President and Prime Minister of the United Arab Emirates and ruler of Dubai, His Highness Sheikh Mohammed Bin Rashid Al Maktoum, the government of Dubai aims to lead the world by becoming the first fully integrated smart city, and they're doing it with AI and blockchain. My co-founder Adam and I recently attended the World Blockchain Summit in Dubai to present ObEN's mission of using Personal AI (PAI) on the blockchain to allow people to create independent avatars that can accomplish tasks and interact with other PAIs and services online. We left the conference very impressed with what we saw and heard there. While quite a few governments around the world have talked about blockchain and AI, with some even convening exploratory committees to study the subject further, Dubai's leadership has already launched initiatives to push the adoption of these technologies across the city.
ObEN COO Adam Zheng and his PAI during a presentation at the World Blockchain Summit
ObEN is one of the earliest adopters of Project PAI, the world's first decentralized platform for intelligent avatars, and we are currently building applications using our Personal AI (PAI) technology on the PAI blockchain. Our proprietary technology lets anyone create their own intelligent avatar with a simple selfie and a few recorded lines. These intelligent 3D avatars look like you, speak in your voice, and can even think like you. When attached to the blockchain, every PAI interaction serves to improve the individual user's PAI, as well as the collective PAI community. With more use, the PAIs become smarter and more useful to their owners. Eventually these PAIs will be able to perform a myriad of tasks on your behalf, saving you time and energy. They will even be able to pick up skills that their user may not possess, like communicating in multiple languages. Dubai's smart city project is a perfect place for third-party PAI applications to be developed and implemented. The city has already established the Dubai Future Foundation to foster technological advancements that aim to improve the lives of the city's residents. Launched in October of 2016, Dubai's plan for blockchain integration has three pillars to guide the development and integration of the technology: government efficiency, industry creation, and international leadership — each of which can potentially benefit from PAI on the blockchain. To make their government more efficient, Dubai is switching to a paperless model whereby all government transactions are recorded on the blockchain, creating an immutable record.
This will make government services much easier to access for citizens and visitors, and also considerably reduce the labor required to provide these services, while simultaneously reducing bureaucratic opacity and the massive CO2 footprint of prevailing paper-based record systems. With all government services moving to an online digital format, a PAI avatar in this environment would be able to handle the majority of service requests without an individual having to spend any time waiting in line or filling out forms. For people who have had to navigate the often maddening maze of government bureaucracy, this will make obtaining these services incredibly easy by comparison. In order to meet their goal of creating blockchain businesses in Dubai, the government has launched a two-pronged program to both develop and nurture local human capital and entrepreneurs, and attract talented professionals and innovators from abroad. This includes the formation of the Global Blockchain Council, dedicated to sharing and exploring the potential applications of blockchain technology with other governments and leading innovators from around the globe. The theoretical solutions get turned into practical ones through the Dubai Future Accelerators program, which has created partnerships with leading government agencies including the city planners, police, road and transport authority, health authority, university system, and the main Dubai holding company. These organizations propose challenges and invite companies to submit proposals and work with the relevant authorities to develop prototype solutions that are then tested in one of the most dynamic urban environments in the world.
My Personal AI (PAI) — PAIs can provide a new mode for personalized, digital interactions on the blockchain
Integrating blockchain throughout an entire city creates an unprecedented opportunity for a network of PAIs and PAI users to digitally interact with everything from healthcare providers (personalized healthcare advice 24/7 through your doctor's PAI) to transportation systems (PAIs can plan travel routes and navigate through traffic before you even step out the door). A plethora of personal, business, and government interactions can all occur autonomously and securely with verified PAIs. Dubai's leadership anticipates technology leaders taking advantage of this unique opportunity, and has already laid out a strategy for sharing and utilizing data submitted by citizens, visitors, and government agencies — data that will be vital to refining and improving PAIs across the network. As part of their plan for global leadership, Dubai is using blockchain verification and security to pre-approve visas, driver's licenses, transportation, accommodation, and other services for international travelers. An international visitor will arrive at the airport and immediately be able to interface with the services available in this smart city. With PAI technology, before a visitor travels to Dubai their PAI can coordinate their itinerary, fill out all relevant forms for visas and other documentation, secure business services, and ensure access to healthcare — all without the actual person lifting a finger. While in Dubai, their PAI can help find local restaurants and hotspots, provide information on the city's rich culture and history, and even interact with local PAIs to gain more familiarity with the city.
I admire the vision that the leaders of Dubai have shown in embracing technology and knowledge as the means to ensure a brighter future for their citizens. Their plan to become a smart city via mass integration of blockchain solutions can potentially provide unparalleled convenience, comfort, and security to the city's residents and its visitors. I look forward to exploring everything Personal AI can do for citizens and visitors in the world's first smart city.
Join our community
Our newsletter subscribers get exclusive access to beta applications and news updates. Subscribe here. Follow our journey on Twitter.
PAI in the Smart City โ€” Can Personal AI Shine in the Worldโ€™s First Fully Integrated Smart City?
41
pai-in-the-smart-city-can-personal-ai-shine-in-the-worlds-first-fully-integrated-smart-city-12a1cc62efbb
2018-04-02
2018-04-02 04:18:09
https://medium.com/s/story/pai-in-the-smart-city-can-personal-ai-shine-in-the-worlds-first-fully-integrated-smart-city-12a1cc62efbb
false
1,086
Enabling every person in the world to create, own and manage their Personal AI. Tencent, Softbank Ventures Korea & HTC Vive X portfolio co.
null
obenai
null
ObEN
contact@oben.com
oben
null
obenme
Smart Cities
smart-cities
Smart Cities
5,072
Nikhil Jain
Nikhil Jain is the co-founder and CEO of ObEN, which creates Personal Artificial Intelligence (PAI) that can look, speak, sing, and even act like the user.
37650c1e54e9
NikhilRJain
14
1
20,181,104
null
null
null
null
null
null
0
from bs4 import BeautifulSoup
import requests
import re
from time import sleep

# Get the raw search result page with BeautifulSoup
URL = 'https://www.monster.fi/tyopaikat/haku/?q=data-analytics&where=Helsinki'
page = requests.get(URL)
soup = BeautifulSoup(page.text, 'html.parser')
print(soup.prettify())

# Total number of jobs: sum every number found in the <h2 class="figure"> text
num = sum(int(i) for i in re.findall(r'\d+', soup.find('h2', attrs={'class': 'figure'}).text))
print('There are ' + str(num) + ' jobs found')

# Company names are nested under <div class="company">
companies = []
company = soup.findAll(name='div', attrs={'class': 'company'})
if len(company) > 0:
    for b in company:
        companies.append(b.text.strip())
print(companies)

# Locations are nested under <div class="location">
locations = []
location = soup.findAll(name='div', attrs={'class': 'location'})
if len(location) > 0:
    for b in location:
        locations.append(b.text.strip())
print(locations)

# Job titles are nested under <h2 class="title">
titles = []
title = soup.findAll(name='h2', attrs={'class': 'title'})
if len(title) > 0:
    for b in title:
        titles.append(b.text.strip())
print(titles)

# Each title holds an <a> tag whose href links to the full job post
links = []
for a in title:
    url = a.find('a').attrs['href']
    links.append(url)
print(links)

# Follow one job link and scrape the detailed job description
job_page = requests.get('https://avoimettyopaikat.monster.fi/osa-aikainen-data-scientist-data-analyst-houston-analytics-helsinki-uusi-finland-academic-work/11/199503456')
job_soup = BeautifulSoup(job_page.text, 'lxml')
job_desc = job_soup.find('div', attrs={'id': 'JobDescription'}).text
print(job_desc)

# clean_text is the cleaning helper described in the article; a minimal version:
def clean_text(text):
    # Strip blank lines and surrounding whitespace from the scraped text
    lines = (line.strip() for line in text.splitlines())
    return '\n'.join(line for line in lines if line)

job_clean = clean_text(job_desc)
# Remove the inline CSS that BeautifulSoup's .text still leaves behind
job_clean = job_clean.replace("#jobBodyContent ul { margin-bottom: 13px; line-height: 16px; }", "")
job_clean = job_clean.replace("#ejb_SubHdTxt, .ejb_SubHdTxt { background-color: #f1f1f1;} .addthis_button {font-size:8pt;} div#content{ font-size: 12px; /*font-family: Arial, Helvetica, sans-serif; */ line-height: 20px; /*width: 758px; */ overflow: hidden; } img#barona_top_banner{ width: 100%; height: auto; margin-bottom: 30px; } div#content_text{ margin-left: 40px; margin-right: 40px; } div#barona-text-area-padding{ padding-bottom: 40px; } div#content_text a{font-size: 14px; font-weight: bold; color: #0d51b4; text-decoration: none;} div#content_text h1{font-size: 26px; font-weight: bold; margin-bottom: 30px;} div#content_text h2{font-size: 16px; font-weight: bold; } div#content_text span.company-title{font-weight: bold; } div#employer-company-card{ border-top: 1px solid #dedede; padding-top: 40px; padding-bottom: 40px; } img#barona-employer-logo{ max-width: 185px; height: auto; margin-bottom: 20px; } div#barona-company-card{ border-top: 1px solid #dedede; padding-top: 40px; padding-bottom: 40px; } img#barona-logo{ max-width: 185px; height: auto; margin-bottom: 20px; } ", "")  # To remove inline CSS
print(job_clean)
13
null
2018-09-17
2018-09-17 20:49:04
2018-09-18
2018-09-18 00:15:34
3
false
en
2018-09-18
2018-09-18 00:22:47
15
12a4462b67a2
7.987736
1
0
0
I am taking the first step into the job market now. And to prepare for that, I try to search for a job as many others do by going through…
5
Scrape Monster.fi and… (Part 1)
I am taking my first step into the job market now. To prepare for that, I search for jobs as many others do, by going through job listing sites such as Indeed.fi, monster.fi or duunitori.fi… But, because I am a data lover, I would like to do a little bit more: scrape all the related job ads, do some analysis, and add some automation, such as evaluating each job to keep only those that match my skill set. In this post, I will share how I scrape the content from monster.fi and save it as a Pandas DataFrame for further analysis. This can be useful for those who are in the job market and want to work smarter, rather than harder, at finding new and interesting jobs.
"A disclaimer before beginning: many websites can restrict or outright bar scraping of data from their pages. Users may be subject to legal ramifications depending on where and how they attempt to scrape information. Many sites have a devoted page noting restrictions on data scraping at www.[site].com/robots.txt. Be extremely careful if looking at sites that house user data — places like Facebook, LinkedIn, even Craigslist, do not take kindly to data being scraped from their pages. Scrape carefully, friends."
Scraping rules
Here are some important things you should keep in mind before scraping any website:
You should check a website's Terms and Conditions before you scrape it. Be careful to read the statements about legal use of data. Usually, the data you scrape should not be used for commercial purposes.
Do not request data from the website too aggressively with your program (also known as spamming), as this may break the website (if you have a fast internet connection, you may request a few thousand pages per second). Make sure your program behaves in a reasonable manner (i.e. acts like a human). One request for one webpage per second is good practice.
The layout of a website may change from time to time, so make sure to revisit the site and rewrite your code as needed.
Program Setup
I will build this scraper using Python 2.7 in a Jupyter Notebook. We are going to use Python as our scraping language, together with a simple and powerful library, BeautifulSoup. In case you are new to Python and data analytics with Python, I recommend you install Anaconda; it will bring everything you need to scrape any website.
The basic workflow of the program is:
Scrape the search result page; find all information about job title, company name, location and job link, and put each into a separate DataFrame
Follow the job link to the detailed job description, scrape the job description and put it into a DataFrame
Append all DataFrames into one DataFrame containing all information about Job Title, Company Name, Location, Job Link and Job Description, each in its own column
Export and save the DataFrame into a CSV file named in the format 'Monster_<date>_<job_query>_<location_query>.csv'
I will create 2 functions:
The first function scrapes the content from Monster.fi
The second function cleans the scraped content by removing Unicode escapes, stray characters, blank line breaks and remaining CSS and HTML tags (even though BeautifulSoup does a nice job of grabbing only the text, some inline CSS still remains)
Let's start… We will do the following: 1. Find and scrape information about the Job Title, Company, Location and Job link. All this information is available on the search result page and is easy to find and scrape. Monster.fi — Search result page 2.
Follow the Job link to job post to scrape the detailed job description. Even you can read it at the job result page by clicking into the job title, itโ€™s rendered using a client side javascript and cannot scrape easily without following the job link Monster.fi โ€” Job post detail page 3. Clean up the content and put them all in nice DataFrame for further use Building the Scraper Components Import library First, we need to import our library. Here, I will use The requests library to send the HTML request to the site. Keep-alive and HTTP connection pooling are 100% automatic with this library The BeautifulSoup library for pulling data out of HTML and XML files. It works with your favourite parser to provide idiomatic ways of navigating, searching, and modifying the parse tree. The time library to use the time.sleep() function that Suspends the calling thread for secs seconds The re library that provides regular expression matching operations to clean up and find information using patterns and strings Find job information Secondly, we will try to inspect the Monster.fi search result page to find the necessary information Note the URL , it follow this format: https://www.monster.fi/tyopaikat/haku/?q= <job_query>&where=<location_query>. By changing job_query and location_query, we can easily get the result of different job positions in different location To scrape the website, you need to know about the HTML tag (head, body, div, โ€ฆ.) and attribute (class, ID, hrefโ€ฆ.) and how to find the necessary information using them and familiar with the Inspect function of Google Chrome. I will not go through that part in this post. Now, we know that our variable โ€œsoupโ€ has all of the information housed in our page of interest. It is now a matter of writing code to iterate through the various tags (and nested tags therein) to capture the information we want. While this is not the appropriate place to go over all of the ways in which information can be found or withdrawn from a pageโ€™s HTML, the BeautifulSoup documentation has a lot of helpful information that can guide oneโ€™s searching. Find the number of job: The total number of found job is nested under <h2> tag with the attribute โ€œclassโ€=โ€figureโ€ and inside the phrase : Lรถytyi 27 ensisijaista tyรถpaikkaa ja 10 kiinnostavaa tyรถpaikkaa . We can extract that info using regular expression by getting only number out of phrase and sum all together Find the company name: All company name is nested under <div> tag with the attribute โ€œclassโ€=โ€companyโ€. We will use the BeautifulSoup function findAll to find and extract all company name into a list named โ€˜Companyโ€™ (this list item will contain all text between <div class=โ€™Companyโ€™ ></div>, including the child HTML tag , inline CSSโ€ฆ.). After that we will loop all the list item, get the text only (which is actually the company name that we need) and append them to the โ€˜companiesโ€™ list Find job location: All location is nested under <div> tag with the attribute โ€œclassโ€=โ€locationโ€. We will use the BeautifulSoup function findAll to find and extract all company name into a list named โ€˜Locationโ€™ (this list item will contain all text between <div class=โ€™location ></div>, including the child HTML tag , inline CSSโ€ฆ.). After that we will loop all the list item, get the text only (which is actually the company name that we need) and append them to the โ€˜locationsโ€™ list Find job title: All location is nested under <div> tag with the attribute โ€œclassโ€=โ€titleโ€. 
We will use the BeautifulSoup function findAll to find and extract all company name into a list named โ€˜Titleโ€™ (this list item will contain all text between <div class=โ€™title ></div>, including the child HTML tag , inline CSSโ€ฆ.). After that we will loop all the list item, get the text only (which is actually the company name that we need) and append them to the โ€˜titlesโ€™ list Find job link: Not like other information above, the job link is nested in the child <a> tag of the job title <div class=โ€title>. Therefore, to grab the link, we need to loop through the โ€˜titleโ€™ list above, find the tag <a> with attribute href Find the detailed job description: Like I mention above, we cannot use the BeautifulSoup to grab the content that generate using the client-side script , such as Javascript. Therefore, we need to follow the job link to the job post page and get the job description from there. I will use this job post for testing the script : https://avoimettyopaikat.monster.fi/osa-aikainen-data-scientist-data-analyst-houston-analytics-helsinki-uusi-finland-academic-work/11/199503456 and try to get the detailed information from that. By using the Inspect function of Google Chrome, we can see that the job description is nested in the <div> tag with class=โ€™JobDescriptionโ€™ attribute. Therefore, we can parse the desired info using: After that we can use this function below to clean the text: as The whole notebook that contain all the script above is below: toanchitran/monster_fi_scrapper Contribute to toanchitran/monster_fi_scrapper development by creating an account on GitHub.github.com Put them all โ€” Final Monster scraper Weโ€™ve got all of the various pieces of our scraper. Now, we need to assemble them into the final scraper that will withdraw the appropriate information for each job post, keep it separate from all other job posts, and assemble all of my job posts into a single DataFrame one at a time. In the final scraper, I would like to add up some nice stuff: We can search any job title that we want in any city in Finland (or whole Finland as we want) We can loop through all the result pages. Every result page have 20 jobs , therefore we need to go to more than 1 page if we find more than 20 jobs We can save the DataFrame as csv file for further use with file name as format โ€˜Monster_<date>_<job_query>_<location_query>.csvโ€™ The final and function-able scraper is below: Conclusion I hope that this scrapper will be useful to you. You can find all the notebook in this post and Monster.com scrapper version and some of job data file that I scrape using the Monster.fi scrapper script at my Github toanchitran/monster_fi_scrapper Contribute to toanchitran/monster_fi_scrapper development by creating an account on GitHub.github.com Next post, I will show how I use the data for looking my job.
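Below is a minimal, runnable sketch of the scraper described above, assuming the 2018 markup (div classes "company", "location", "title", "JobDescription") still matches; the class names, the clean-up regexes, and the helper names are illustrative, not the exact notebook code.

import re
import time
import datetime

import requests
import pandas as pd
from bs4 import BeautifulSoup

SEARCH_URL = 'https://www.monster.fi/tyopaikat/haku/?q={job}&where={where}'

def clean_text(raw):
    # Remove leftover HTML tags, line breaks and repeated whitespace.
    text = re.sub(r'<[^>]+>', ' ', raw)
    text = re.sub(r'[\r\n\t]+', ' ', text)
    return re.sub(r'\s{2,}', ' ', text).strip()

def scrape_monster(job_query, location_query):
    page = requests.get(SEARCH_URL.format(job=job_query, where=location_query)).text
    soup = BeautifulSoup(page, 'html.parser')
    titles = soup.findAll('div', attrs={'class': 'title'})
    companies = [clean_text(d.get_text()) for d in soup.findAll('div', attrs={'class': 'company'})]
    locations = [clean_text(d.get_text()) for d in soup.findAll('div', attrs={'class': 'location'})]
    links = [d.find('a')['href'] for d in titles if d.find('a')]
    descriptions = []
    for link in links:
        time.sleep(1)  # one request per second, per the scraping rules above
        job_soup = BeautifulSoup(requests.get(link).text, 'html.parser')
        desc = job_soup.find('div', attrs={'class': 'JobDescription'})
        descriptions.append(clean_text(desc.get_text()) if desc else '')
    # Assumes the lists line up one-to-one; real pages may need pagination
    # handling and more defensive parsing.
    df = pd.DataFrame({'Job Title': [clean_text(d.get_text()) for d in titles],
                       'Company Name': companies,
                       'Location': locations,
                       'Job Link': links,
                       'Job Description': descriptions})
    df.to_csv('Monster_{}_{}_{}.csv'.format(datetime.date.today(), job_query, location_query), index=False)
    return df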
Scrape Monster.fi andโ€ฆ (Part 1)
1
scarpe-monster-fi-and-part-1-12a4462b67a2
2018-09-18
2018-09-18 00:22:47
https://medium.com/s/story/scarpe-monster-fi-and-part-1-12a4462b67a2
false
1,971
null
null
null
null
null
null
null
null
null
Monsters
monsters
Monsters
767
Toan Tran
Data Lover
8354527619d1
toan.tran
46
58
20,181,104
null
null
null
null
null
null
0
import pandas as pd
import pandas_datareader as web
import matplotlib.pyplot as plt
import numpy as np

# Fetch historical data for MRF (BSE) from Yahoo Finance
stock = web.DataReader('MRF.BO', 'yahoo', start="01-01-2012", end="31-12-2017")
stock = stock.dropna(how='any')
stock.head()

# Plot the adjusted closing price
stock['Adj Close'].plot(grid=True)

# Daily returns from the adjusted closing price
stock['ret'] = stock['Adj Close'].pct_change()
stock['ret'].plot(grid=True)

# 20-day moving average
stock['20d'] = stock['Adj Close'].rolling(window=20, center=False).mean()
stock['20d'].plot(grid=True)
6
null
2018-03-26
2018-03-26 12:05:04
2018-03-26
2018-03-26 12:06:27
5
false
en
2018-03-26
2018-03-26 12:06:27
6
12a4736b88a7
2.497484
2
0
0
In this blog, we will see basic time series operations on historical stock data. Our main focus will be on generating a static forecastingโ€ฆ
5
Time Series Analysis: An Introduction In Python In this blog, we will see basic time series operations on historical stock data. Our main focus will be on generating a static forecasting model. We will also check the validity of the forecasting model by computing the mean error. However, before moving on to building the model, we will briefly touch upon some other basic parameters of time series like moving average, trends, seasonality, etc. Fetching Data For the purpose of this blog, we will take the past six years of the ‘adjusted price’ of MRF. We will fetch this data from Yahoo Finance using pandas_datareader. Let us start by importing the relevant libraries. Now, let us fetch the data using DataReader. We will fetch historical data of the stock from 1st Jan 2012 to 31st Dec 2017. You can also fetch only the adjusted closing price, as this is the most relevant price, adjusted for all corporate actions and used in all financial analysis. We can check the data using the head() function. We can plot the adjusted price against time using the matplotlib library which we have already imported. Calculating and plotting daily returns Using the time series, we can compute daily returns and plot them against time. We will compute the daily returns from the adjusted closing price of the stock and store them in the same DataFrame ‘stock’ under the column name ‘ret’. We will also plot the daily returns against time. Moving Average Similar to returns, we can calculate and plot the moving average of the adjusted close price. The moving average is a very important metric used widely in technical analysis. For illustration purposes, we will compute the 20-day moving average. Before moving on to forecasting, let us quickly have a look at trend and seasonality in time series. Trend and Seasonality In simple words, trend signifies the general direction in which the time series is developing. Trends and trend analysis are extensively used in technical analysis. If some patterns are visible in the time series at regular intervals, the data is said to have seasonality. Seasonality in a time series can impact the results of the forecasting model and hence should be handled with care. You can refer to our blog “Starting Out with Time Series” for more details on trend and seasonality and how a time series can be decomposed into its components. Read more here
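As a hedged sketch of the static (one-step-ahead) forecast and mean-error check described above — an assumption about the approach, not the post's exact code — one could forecast each day's adjusted close with the previous day's actual price:

import pandas_datareader as web

# Re-fetch the series used above (MRF on the BSE, via Yahoo Finance)
stock = web.DataReader('MRF.BO', 'yahoo', start="01-01-2012", end="31-12-2017").dropna(how='any')

# Static forecast: each day's prediction uses the previous day's actual price
forecast = stock['Adj Close'].shift(1)
errors = stock['Adj Close'] - forecast

print('Mean error:', errors.mean())
print('Mean absolute error:', errors.abs().mean())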
Time Series Analysis: An Introduction In Python
11
time-series-analysis-an-introduction-in-python-12a4736b88a7
2018-05-11
2018-05-11 08:26:43
https://medium.com/s/story/time-series-analysis-an-introduction-in-python-12a4736b88a7
false
441
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
QuantInstiยฎ
QuantInsti is an Algorithmic Trading Training institute focused on preparing professionals and students for HFT & Algorithmic Trading.
42079579cd65
QuantInsti
379
138
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-19
2018-09-19 02:30:22
2018-09-19
2018-09-19 02:40:11
5
false
en
2018-09-19
2018-09-19 02:40:44
5
12a4cb20a740
4.116352
2
0
0
Iโ€™m obsessed with podcasts. From a quick hit refresher presented in a 15 min format to hour long deep dives into complex topics, this is myโ€ฆ
5
5 Data Science, AI and Machine Learning Podcasts to Listen to Now (updated) I’m obsessed with podcasts. From a quick-hit refresher presented in a 15-minute format to hour-long deep dives into complex topics, this is my preferred medium for consuming data science content. You see, I’m a runner. I spend hours on the weekend in Chicago training and working to build endurance for long distance races. The podcast format allows me to take my favorite data science experts on the road to maximize my time — allowing me to learn and train at the same time. In this updated blog, I share with you some of my favorite AI and machine learning podcasts, so that you, too, can stay up-to-date on the latest trends in the field — while enjoying the things you love. Whether you are an executive looking to acquire a breadth of knowledge in a wide variety of topics, or a practitioner honing your expertise in machine learning — my list of podcasts will help you stay up to date. If you’re a podcast fan, give one of these excellent data science shows a try. Linear Digressions | by Ben Jaffe and Katie Malone This podcast is short and sweet. It’s also my favorite podcast of the 5 I’m recommending. In Linear Digressions, you can expect casual conversation, fun anecdotes and awesome content. The two hosts do a great job playing off each other’s strengths, and the show is very entertaining. Katie is the data scientist of the group. She brings expertise and practical knowledge of many of the models the duo review. I’m always learning from her explanations because she always explains why as well as how it’s accomplished. Ben is the engineer of the group, and he does a great job of thinking through a particular implementation and giving a developer’s view of the solution. Whether they’re discussing the different statistical approaches to understand if a running shoe is worth buying, or how you might use Shapley Values to understand how features are working in a deep learning algorithm — their discussion is always entertaining and enlightening. This podcast is designed for data scientists and machine learning practitioners. The show releases on a weekly basis and has a brief run time of 15–30 mins — perfect for a quick jog or a 5k. O’Reilly Data Show | by O’Reilly Media O’Reilly is a trusted source of educational content around computer science, data science, and data infrastructure and engineering. The podcast is a valuable resource, revealing applications and approaches taken by practitioners to solve timely big data and data science problems. Ben Lorica does an excellent job recruiting guests who have deep expertise in technical applications of data science, engineering architecture and the languages that allow us to navigate and use these tools. As a result, the quality of the interviews is quite high. O’Reilly tends to release their podcasts bi-weekly, and of course they promote their conferences, publications and lectures. I can tolerate the corporate spin thanks to the high production value and the breadth and depth of the podcast. This Week in Machine Learning & AI | by Sam Charrington TWiML&AI is one of two new additions to this list. With discussion led by Sam Charrington, a highly regarded machine learning consultant and speaker, you can expect some high-quality content.
TWiML Talk gives data scientists, developers, business innovators and other machine learning and AI enthusiasts a platform to share their ideas around machine learning research, technology, business, culture, and more. The podcast is perfect for executives and managers who want to get introduced to complicated topics with clear, concise and thoughtful explanation. Data Skeptic | by Kyle Polich Kyle’s Data Skeptic podcast will keep you up-to-date on the news, topics and discussions of all things data science, machine learning and AI. His podcast discusses a relevant machine learning or data science problem and then critiques the application or topic. I really enjoy the level of discussion in this podcast, and while the discussions can be quite technical, I don’t feel they are unapproachable for analysts, data scientists and computer scientists who have at least some working knowledge or deeper exposure to the industry. Recently his podcast has turned to “fake news” analysis, and I’m digging it. The quality of conversation and the community supporting this podcast keep me coming back for more. Linh, as the cohost, adds some light moments, though her segments can get slightly patronizing. I don’t think that takes away from a strong podcast. Data Skeptic releases weekly and has a run time of 30–60 mins. Talking Machines | by Tote Bag Productions The 2nd new show to creep its way into my top 5, Talking Machines is a great podcast to check out. Well into their fourth season, hosts Katherine Gorman and Neil Lawrence lead insightful discussions of hot topics in our industry. Plus, you know they have staying power with four years of content to check out! I’m super excited to add this great listen to my playlist. Is there a podcast that I’ve left off of this list? Leave me a comment, and I’ll add it to my playlist!
5 Data Science, AI and Machine Learning Podcasts to Listen to Now (updated)
2
5-data-science-ai-and-machine-learning-podcasts-to-listen-to-now-updated-12a4cb20a740
2018-09-19
2018-09-19 02:40:44
https://medium.com/s/story/5-data-science-ai-and-machine-learning-podcasts-to-listen-to-now-updated-12a4cb20a740
false
870
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Courtney Perigo
#Analytics, #Data, #MachineLearning and marketing #Research Pro | #datadriven SVP of Data Strategy @cramerkrasselt www.courtneyperigo.com
2f1d75c2c6e6
Courtney_Perigo
85
100
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-27
2018-07-27 21:40:30
2018-07-27
2018-07-27 23:02:54
2
false
en
2018-07-27
2018-07-27 23:02:54
0
12a574d125d5
1.934277
0
0
0
In AI terms?
4
What is Data Science Sask? In AI terms? It is a neural network of people that will shift its weights based upon a social cost function that will change over time. I guess it would be an unsupervised learning model. Let’s go with a Boltzmann machine… yeah, sure. Why? Well, we have no dependent variable to check if it passed/failed. Instead, we will just keep feeding it people, ideas, stories, and we will let the network create its own outputs for us. What have we fed it so far? Some rough videos of me explaining what machine learning is, and our first meet-up last night. What has it spit out? One two-hour-full-speed-back-and-forth conversation, some new friendships, and a hangover… So, what did we learn from our first epoch? Data Science is For Everyone There were people from China, India, Kazakhstan, Harbour Landing, and even the NOD. There were engineers, business owners, construction workers, even a freakin’ soldier (who helped the hosts and me shut down Habanos for the first time since I was way cooler). There were people from the government, start-ups, refinery, university, and of course, THE ARMY. We filled the room, and at least 20 people asked questions. It was informal, raw, and fun. Our speakers were amazing. They handled tough questions about ethics, tech and money. Some rookie data science enthusiast jackass even tried to give them recommendations of what they should be doing, and how they should do things. It was a live stream of watching people who have no idea what data science is learn a few things, come up with a brand new idea, pitch it, receive feedback that it would probably work, and then be told exactly how to do it. What To Feed It Next? If you are reading this then I guess blogging? I’m sitting in a podcast studio I built, so you can expect to hear some audio. The #1 thing realized last night is that data is king. Data Science is a volume game. The neural network might eventually work if you feed it enough data. DSASK might work if we feed it enough ideas, people, code, money, sweat, videos, podcasts, blogs, meet-ups, pictures, green screens and beer. What will it spit out? I don’t know. All I know is that in construction you only have to be wrong once, and in tech you only have to be right once. shut down crew!
What is Data Science Sask?
0
what-is-data-science-sask-12a574d125d5
2018-07-27
2018-07-27 23:18:41
https://medium.com/s/story/what-is-data-science-sask-12a574d125d5
false
411
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
John Hashem
null
ffe28ff76c29
johnhashem_60118
2
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-11
2018-06-11 06:25:51
2018-06-11
2018-06-11 06:46:22
0
false
en
2018-06-11
2018-06-11 06:46:22
0
12a5c01adc6a
0.996226
0
0
0
In the current world scenario, education is in a precarious condition. Since it is a commodity that cannot be quantified easily, corruption…
5
Blockchain: Enriching Education In the current world scenario, education is in a precarious condition. Since it is a commodity that cannot be quantified easily, corruption and quality compromise are rampant. Education today has become more diversified than ever, with a large number of private institutions and organizations thronging the scene. Such trends have put the authenticity of skill assessments under question, and trust in certification and proof of learning has taken a hit. When it comes to ascertaining a person’s credentials in the field of his or her assessment, blockchain is one revolutionary technology that provides a breakthrough solution to the problems of authenticity and fairness. It enables a system that is transparent, online and secure. In such a system, the candidate is assessed and a digitally secure certificate is provided, which not only ensures that nobody can tamper with the assessment, but also that the certificate cannot be counterfeited. Digital certificates are registered on the Ethereum blockchain, cryptographically signed, and tamper-proof. Adoption of blockchain technology would not only make skill assessments more transparent and secure, but also shift away from the currently used wasteful techniques that involve pen and paper. In other words, you get an authentic assessment of skills, with a reduced carbon footprint. The field of blockchain technology is filled with ample opportunities which, if made good use of, will lead to a more transparent way of dealing with a wide array of services, wiping clean the horrors of corruption and forgery.
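As a minimal illustration of the tamper-proofing idea above — a generic sketch, not Equate's actual implementation — a certificate file can be fingerprinted with a cryptographic hash, and that digest is what gets registered on the Ethereum blockchain for later verification (the file name here is a placeholder):

import hashlib

def certificate_digest(path):
    # Fingerprint the certificate file; any later edit changes this digest.
    with open(path, 'rb') as f:
        return hashlib.sha256(f.read()).hexdigest()

# Registering this digest on-chain (for example in a smart contract's storage)
# makes verification a matter of re-hashing the presented certificate and
# comparing against the stored value.
print(certificate_digest('certificate.pdf'))  # placeholder path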
Blockchain: Enriching Education
0
blockchain-enriching-education-12a5c01adc6a
2018-06-11
2018-06-11 06:46:23
https://medium.com/s/story/blockchain-enriching-education-12a5c01adc6a
false
264
null
null
null
null
null
null
null
null
null
Blockchain Technology
blockchain-technology
Blockchain Technology
13,452
Equate Platform
null
c3625203870c
equateplatform
2
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-22
2018-08-22 09:12:33
2018-08-22
2018-08-22 09:59:40
1
false
en
2018-08-22
2018-08-22 09:59:40
1
12a8a57a5e28
1.358491
0
0
0
Like many of us, itโ€™s hard for me to talk about myself. It is difficult to look at yourself with a detached view and choose the facts andโ€ฆ
5
How to write about yourself in an interesting and lively way. Like many of us, it’s hard for me to talk about myself. It is difficult to look at yourself with a detached view and choose the facts and beliefs that will be interesting and significant at this particular moment, in this particular situation. But sometimes it is very important that the “about me” information sounds easy and natural and does not give the impression of dry, “robotic” text. Here are a few tips I used when recently writing an update for the «About me» section of my Behance profile: The first thing you should do is answer the question: how can I be useful to the reader or conversation partner? «As a UX/UI designer, I create logical and clear interfaces that are convenient and pleasant to use.» Show that you are trustworthy. Use facts and feedback from your customers. «Some of my clients leave open feedback about our work. The latest case studies and feedback can be seen in my Behance portfolio.» Talk about your goal. After all, we do our business for some purpose, right? «My goal is to help people interact with each other through technology, making the process easier, faster and more understandable.» Show yourself as a person: talk about your hobbies and what inspires you. Take the first step to establish a personal connection with your interlocutor. «I have always been intrigued and attracted by the digital world and the technology development process. I grew up on sci-fi books and artificial intelligence movies.» Takeaway: Do you have your own voice while telling your story? Your own attitude to things? Will the reader see themselves in your story? Keep it logical. Keep it simple, like you’re talking to a friend. P.S. Behance updates are still in progress; case studies are the next item to update :)
How to write about yourself in an interesting and lively way.
0
how-to-write-about-yourself-in-interesting-and-lively-way-12a8a57a5e28
2018-08-22
2018-08-22 09:59:40
https://medium.com/s/story/how-to-write-about-yourself-in-interesting-and-lively-way-12a8a57a5e28
false
307
null
null
null
null
null
null
null
null
null
Technology
technology
Technology
166,125
Helena Aleksandrova
UX/UI designer | @elevennadesign on Instagram | Connect w/me: https://www.behance.net/elevenna
8f55969a51c
elevenna
13
39
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-25
2018-09-25 20:09:26
2018-09-29
2018-09-29 11:29:59
2
false
en
2018-09-30
2018-09-30 22:17:31
15
12a95cd80224
2.107862
7
0
0
Banyan Network and its DVN Precision marketing vs. other marketing types. Marketing is extremely important in the business world withโ€ฆ
5
How Banyan Network will revolutionize marketing Banyan Network and its DVN Precision marketing vs. other marketing types. Marketing is extremely important in the business world, with companies paying large amounts of money to market their products to consumers. Channels include TV & radio commercials, advertising brochures, newspaper ads, Google Ads and ads on social media such as Facebook, Twitter and YouTube. TV & radio, ad brochures and newspapers: companies pay for time and space, casting as large a net as they can afford in the hope of reaching potential clients who might require their product or service. Google Ads and other social media ads: the reach here is also large, with the ability to target audience types. Banyan Network provides a solution that allows companies to make better use of their marketing resources by identifying potential customers who fit their target demographic. Banyan Network — №1 Global Data Fusion Network. Banyan Network is a big data fusion network based on blockchain technology, called the Data Fusion Value Chain Network (DVN). It is also the world’s first distributed ecosystem of data economies, built by a team of experts in the field of data analytics. The solution: Data Fusion Value Chain Network (DVN) - Data gateway and data evaluation system - Data cleaning, data tagging & crowdsourcing governance platform - Matching IDs to fuse data from different data sources using ID-Mapping technology - Data query service & precision marketing service Banyan Network provides precision marketing through its DVN, giving companies deeper insights into who their target market is. Companies can identify which customers are more likely to buy their products and then market to those customers. This is called precision marketing. Banyan Network has billions of GOLDEN DATA sources for its DVN. DVN update 1 DVN update 2 DVN update 3 DVN update 4 DVN update 5 DVN update 6 Watch the informative video about Banyan Network and its DVN precision marketing. So, why is the BBN token interesting for us investors? Companies need to pay with BBN tokens for Banyan Network and its services. If a customer wants to pay with other currencies, those currencies will be converted to BBN. These BBN will be locked for a year and then either burned or rewarded to holders after the locking period (a win either way): either the supply shrinks, or holders are rewarded for holding. BBN tokens can be purchased on the following exchanges: Bibox, Coinex, Bitfinex, Ethfinex, Coinsuper and Idex. Follow Banyan Network on Twitter for the latest updates. Do not hesitate to ask any questions on Telegram in the BBN Global Fans group.
How Banyan Network will revolutionize marketing
227
how-banyan-network-will-revolutionize-marketing-12a95cd80224
2018-09-30
2018-09-30 22:17:31
https://medium.com/s/story/how-banyan-network-will-revolutionize-marketing-12a95cd80224
false
457
null
null
null
null
null
null
null
null
null
Big Data
big-data
Big Data
24,602
Crypto ED
Founder of BBN Family ๐Ÿ’Ž
977303e93ed7
CryptoED
3
4
20,181,104
null
null
null
null
null
null
0
null
0
fc0309257833
2018-07-13
2018-07-13 15:23:34
2018-04-26
2018-04-26 17:57:54
2
false
en
2018-07-16
2018-07-16 17:16:59
18
12aa590a6d37
4.621069
0
0
0
null
5
AI and the Legal Department Overview: This is the first article on growth within the legal department, built for weststringfellow.com alongside hundreds of resources to help you grow your career and company. This article and the following, “Understanding Blockchain & Emerging Blockchain-Powered Solutions,” introduce two major technologies that are disrupting the legal arena and explore the implications of these technologies on current legal department operations. Here, we consider artificial intelligence and AI-powered tools. “There are two deep and abiding truths in the legal industry: no one knows what AI even means…and you need solutions that incorporate AI to resolve discrete problems you face. But that sounds less exciting.” (Joe Patrice, Above the Law, 2017). AI is a vast concept and one that is often misunderstood. The graphic, below, from Thomson Reuters introduces the terms and AI functions relevant to the legal arena. Source: Thomson Reuters Technologies that incorporate AI are allowing legal departments to handle an increasing amount of work without commensurate additions to in-house attorney staffing. Digital solutions are automating basic and routine legal tasks, creating new efficiencies. And the use of algorithm-based tools can reduce costs, shape business strategy, and minimize contract risks. For example, AI technologies are standardizing best practices by assisting with document review and electronic discovery management, online dispute resolution, contract analysis, litigation predictions, the identification of fraud and misconduct, and the execution of due diligence reviews. Automated contract review, analysis, and comparison. LawGeex employs the latest in AI, machine learning, text analysis, and natural language processing to review, understand, and improve legal documents. Similarly, Axiom has added machine learning to contract review in its AxiomAI program. Legal research. The ROSS Intelligence tool is an AI legal research platform. It is built on ROSS Intelligence’s proprietary legal AI framework, Legal Cortex, which incorporates IBM Watson’s cognitive computing technology. According to Blue Hill Research, this solution uses natural language processing and machine learning capabilities to identify legal authorities relevant to certain cases. Users conduct searches by entering questions, and the ROSS tool can understand the intent of the question and provide in-context responses. Legal metrics analysis. The Premonition litigation database claims to be the largest in the world, and the company’s AI solutions are designed to analyze attorney litigation metrics, including success rates and the duration of cases. Judge, court, and expert witness metrics are also available. The database provides risk and loss information for in-house counsel and risk managers. Risk prevention. Intraspexion applies deep learning algorithms to tools that can identify potential legal violations or risks before they occur by scanning communications such as emails and messaging applications. Real-time monitoring and automated risk management systems for legal entities are expected to proliferate because their speed and convenience make them indispensable tools in the legal sphere. AI’s Impact on Metrics for Assessing Legal Department Performance AI-powered tools can play a considerable role in improving legal department performance.
Respondents contributing to the Thomson Reuters “2016 Legal Department In-sourcing and Efficiency Report” indicated that the efficiencies derived from digital analytics gave in-house counsel more time to devote to strategic work and the legal aspects of the job. Moreover, digital analytics and enterprise legal management solutions are so versatile that legal departments can select the KPIs or metrics that are best suited for their situation and better document how department performance aligns with the company’s overall goals. The integration of new AI technologies — for example, predictive analytics and scenario modeling — into in-house legal departments is also creating new metrics against which departments can be measured. Certainly, department budget, actual spend, types and volume of legal matters, and other traditional KPIs remain highly relevant when it comes to assessing department performance, but the use and integration of technology tools are supplementing these traditional KPIs. The LexisNexis CounselLink benchmarks law department performance in three areas: enterprise-level technology systems that capture information, drive workflows, and automate tasks; analytics that facilitate discovery and communication of meaningful patterns of data; and general processes in place that streamline outcomes. The corporate legal maturity model offers an analytical tool for measuring law department performance. Source: LexisNexis CounselLink, 2015 What are you doing to grow your career and company? We spent 15k hours researching best practice. Visit our website, explore hundreds of resources, and learn how to get things done. Securing Buy-in for In-house AI Adoption Despite strong endorsements for the value of AI solutions across industries and within certain segments of the legal industry, the adoption curve for new technologies within corporate legal departments is not a steep one, especially among smaller departments. A Thomson Reuters survey published in 2017 shows that the rate of AI adoption in small in-house legal departments is low. Only 21 percent of respondents believe AI will be mainstream in legal departments within five years; the majority believe it will take another ten or more years to ignite. Three factors — cost, trust/reliability, and fear of change — are, according to the survey, at the root of in-house counsel’s hesitancy to adopt digital solutions. Cost — Although AI tools are anticipated to create cost efficiencies for corporate legal departments, there are concerns over the costs of implementing and supporting these technologies. Trust — Some in-house counsel are uncomfortable with removing the human element from tasks because it also removes ownership and accountability. Fear of Change — Lawyers and the broader legal industry are known to be risk averse and resistant to change. But also contributing to such fear in this instance is AI’s potential to bring about job displacement. Steve Lohr of the New York Times found that AI adoption will be a “slow, task-by-task process” that will not be detrimental to jobs for a long time. SA Mathieson from ComputerWeekly.com corroborated Lohr’s perspective, explaining that legal professionals will not be replaced by AI and big data because, to be most effective, these tools must be combined with human expertise for processes such as document review. According to Mathieson, new solutions are likely to support lawyers, not displace them.
General Counsel leadership and legal department involvement with other business units in decisions on technology integration can hasten the path to adoption. Securing input from a wide cross-section of company stakeholders is advised when embarking on any infrastructure changes — for example, security concerns, integration with existing infrastructure, and impact on end-users necessitate feedback from many business units. The next article in this series, “Understanding Blockchain & Emerging Blockchain-Powered Solutions,” looks at the technologies in this space that are disrupting the legal arena, smart contracts for example, and explores the implications of these technologies on current legal department operations.
AI and the Legal Department
0
ai-and-the-legal-department-12aa590a6d37
2018-07-16
2018-07-16 17:16:59
https://medium.com/s/story/ai-and-the-legal-department-12aa590a6d37
false
1,123
Your biggest questions in your career and your company are answered precise, expert-focused content.
null
null
null
West Stringfellow
do@howdo.com
west-stringfellow
INNOVATION,GROWTH STRATEGY,STARTUP,EDUCATION,INNOVATION MANAGEMENT
westringfellow
Legal
legal
Legal
5,676
West Stringfellow
Everything I know @ https://howdo.com. Led innovation @ Amazon, Target, PayPal and VISA. CPO @ Rosetta Stone & BigCommerce. Me @ https://weststringfellow.com
2d5b17336159
westringfellow
682
466
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-29
2018-03-29 15:47:22
2018-03-29
2018-03-29 15:57:44
1
false
en
2018-03-29
2018-03-29 15:57:44
1
12abd20e3c31
1.509434
1
0
0
UX design is one of those areas that need your constant attention. The field of web designing is ever-changing, and it is very importantโ€ฆ
5
Trends That Are Going to Impact your UX Design in 2018 UX design is one of those areas that need your constant attention. The field of web design is ever-changing, and it is very important to stay in touch with the latest trends in order to provide your customers with the best experience. It is very difficult for one person to frame a complete one-year UX design strategy, and there are various instances when even experts who provide web design and development services find their UX skills outdated. So, to make it easy for you, here are 3 trends that can definitely help you improve your website’s UX design. Switching from passwords to verification codes Looking at the present scenario, 2018 can be a year of making things easier for users. One major step in that direction is replacing login passwords with one-time verification codes. This will relieve users of the constant headache of remembering different login passwords. Personalized Experience Though the concept of personalized experience is not new, it will definitely be one of the key elements of UX design in 2018. Chatbots and live chats can be customized to improve the overall user experience. Ideas like age-responsive design can become part of chatbots to help increase user engagement. Artificial Intelligence 2017 saw the rise of chatbots in the business scene. In 2018, voice interfaces may well become popular. Amazon is ruling the voice interface race with Alexa. The introduction of Alexa and Siri is a big change in both artificial intelligence and UX web design. The above shifts are predicted from past trends. Experts say the main focus in 2018 will be on removing complexity from UX design tasks. The best way to achieve a perfect UX design is to consult a good web design firm. Virtual Overtake is one of the leaders when it comes to website design and development services.
Trends That Are Going to Impact your UX Design in 2018
4
trends-that-are-going-to-impact-your-ux-design-in-2018-12abd20e3c31
2018-03-29
2018-03-29 19:06:46
https://medium.com/s/story/trends-that-are-going-to-impact-your-ux-design-in-2018-12abd20e3c31
false
347
null
null
null
null
null
null
null
null
null
Web Development
web-development
Web Development
87,466
Virtual Overtake LLP
Any online work, any part of the world, any type of business, any size of the company, Virtual Overtake LLP does it for you. https://www.virtual-overtake.com
97a464b99004
overtakevirtual
1
3
20,181,104
null
null
null
null
null
null
0
null
0
c6a52560810d
2018-01-15
2018-01-15 14:39:12
2018-01-15
2018-01-15 14:43:33
4
false
en
2018-02-05
2018-02-05 15:33:57
15
12ac84e84b4d
2.469811
0
0
0
by Gwendoline Soulier | Nov 02, 2017 | Communication | AI
5
CRAFT NEWS Oct2017 📣| #Microsoft #Personalization #NOKIA #Samsung #WasteManagement by Gwendoline Soulier | Nov 02, 2017 | Communication | AI Microsoft has selected craft ai to join its AI Factory at Station F! We’re happy to announce that Microsoft selected us to join their AI Factory program at Station F, the biggest startup campus in the world, as one of the founding members. The team is really excited to start working with Microsoft & INRIA to boost craft ai deployment and to contribute to the development of a leading AI ecosystem. Original post Back from Microsoft Experiences’17 & AI Hackademy craft ai was invited by Microsoft as an AI Factory member to share insights about AI implementation in the enterprise. First, we demonstrated within the AI Hackademy how our artificial intelligence can be applied to boost marketing campaign performance. craft ai enables marketers to automatically identify every future buyer and push every promotion to the right audience. Fatigue is reduced and legacy campaigns are largely outperformed. Microsoft also gave us the opportunity to showcase 2 recent customer success stories: we co-presented with Dalkia how the AI-based assistant developed with craft ai helps their Energy Managers improve the energy efficiency of their customers’ sites; and we shared the inner workings of the waste management project we have implemented with the city of Paris! craft ai becomes an official partner of NOKIA We’ve continued to work closely with NOKIA and we are proud to announce that craft ai has become one of its partners. craft ai is now available on the Nokia innovation platform as an AI API to accelerate innovation in IoT solutions. Check out our partner page HERE craft ai joins the Samsung Artik Cloud marketplace Following our collaboration around the smart home, Samsung has selected craft ai as a cloud product partner on its newly launched marketplace. Want to know more about how to build IoT applications with craft ai and Samsung ARTIK? Check out these blog articles: AI With The Best, the return! This year again, craft ai was one of the best AI makers! We presented how craft ai can be applied to build a system that learns and predicts the collection time of trash bins in the city of Paris. This talk gave us the opportunity to discuss some theoretical machine learning issues, like handling one-class classification in a white-box context. We want you as an intern! If, like Sylvain, you want to become a machine learning ninja after a super internship at craft ai and do some amazing stuff like: a machine learning paper published at an international conference, the initial version of our craft ai Python client, a predictive model of Paris bin collection timings, a personalized news recommendation system …and more, JOIN US NOW! Originally published at www.craft.ai.
CRAFT NEWS Oct2017 ๐Ÿ“ฃ| #Microsoft #Personalization #NOKIA #Samsung #WasteManagement
0
craft-ai-september-october-news-microsoft-personalization-nokia-samsung-wastemanagement-12ac84e84b4d
2018-05-10
2018-05-10 00:25:12
https://medium.com/s/story/craft-ai-september-october-news-microsoft-personalization-nokia-samsung-wastemanagement-12ac84e84b4d
false
469
Hosted API that enables your services to learn every day: provide a personalized experience to each user and automate complex tasks. This is the craft ai team blog (also available at http://www.craft.ai/blog/)!
null
craftai
null
craft ai
contact@craft.ai
craft-ai
ARTIFICIAL INTELLIGENCE,API,MACHINE LEARNING,AUTOMATION,IOT
craft_ai
Machine Learning
machine-learning
Machine Learning
51,320
craft ai team
The team behind the AI-as-a-service platform craft ai
3170b91ccee4
craft_ai
200
190
20,181,104
null
null
null
null
null
null
0
!pip install gensim
!pip3 install gensim

import requests
from gensim.summarization import summarize, keywords

# Fetch the full text of the Matrix synopsis used as the demo document
text = requests.get('http://rare-technologies.com/the_matrix_synopsis.txt').text

# Optionally choose whether you want to see the original text
# print('Text:')
# print(text)

print('Summary:')
print(summarize(text, ratio=0.01))

print('\nKeywords:')
print(keywords(text, ratio=0.01))

Summary: Anderson, a software engineer for a Metacortex, the other life as Neo, a computer hacker "guilty of virtually every computer crime we have a law for." Agent Smith asks him to help them capture Morpheus, a dangerous terrorist, in exchange for amnesty. Morpheus explains that he's been searching for Neo his entire life and asks if Neo feels like "Alice in Wonderland, falling down the rabbit hole." He explains to Neo that they exist in the Matrix, a false reality that has been constructed for humans to hide the truth. Neo is introduced to Morpheus's crew including Trinity; Apoc (Julian Arahanga), a man with long, flowing black hair; Switch; Cypher (bald with a goatee); two brawny brothers, Tank (Marcus Chong) and Dozer (Anthony Ray Parker); and a young, thin man named Mouse (Matt Doran). Trinity brings the helicopter down to the floor that Morpheus is on and Neo opens fire on the three Agents.

Keywords: neo morpheus trinity cypher agents agent smith tank says saying
8
null
2018-08-20
2018-08-20 00:43:55
2018-08-20
2018-08-20 02:07:47
2
false
en
2018-08-20
2018-08-20 02:09:01
6
12aeb6485326
2.564465
1
0
0
ไปŠๆ—ฅไธป้กŒ๏ผšไฝฟ็”จGensimๅšๆ–‡ๆœฌๆ‘˜่ฆ
2
Day 91 โ€” Gensim for Text Summarization ไปŠๆ—ฅไธป้กŒ๏ผšไฝฟ็”จGensimๅšๆ–‡ๆœฌๆ‘˜่ฆ ๅƒ่€ƒ่ณ‡ๆ–™ RARE TECHNOLOGIES โ€” Text Summarization with Gensim Mihalcea, Rada, and Paul Tarau. โ€œTextrank: Bringing order into text.โ€ Proceedings of the 2004 conference on empirical methods in natural language processing. 2004. Barrios, Federico, et al. โ€œVariations of the similarity function of textrank for automated summarization.โ€ arXiv preprint arXiv:1602.03606 (2016). ไธ€ไบ›ๅ…ถไป–ๆœ‰็”จ็š„่ณ‡ๆบ๏ผš awesome-text-summarization gensim โ€” Text Summarization ็ญ†่จ˜ ไปŠๅคฉ็š„ไธป้กŒๆƒณ่ฆไพ†ๅฏซ้ปžNLP้ ˜ๅŸŸไธญ่ ปๅธธ่ฆ‹็š„ๅ‡ฝๅผๅบซgensimใ€‚ๆœฌไพ†ๅ…ถๅฏฆ่จญๅฎš็š„ไธป้กŒๅชๆ˜ฏๆƒณๅญธ text summarization ๏ผŒไฝ†็™ผ็พ gensimๅ‡ฝๅผๅบซ่ฃก้ขๅฐฑๅทฒ็ถ“ๆœ‰ๅฏซๅฅฝไบ†็š„TextRankๆผ”็ฎ—ๆณ•ใ€‚ TextRankๆผ”็ฎ—ๆณ•ไธ€้–‹ๅง‹ๆ˜ฏ็”ฑ[2]ๆๅ‡บ็š„๏ผŒ็ถ“้Ž[3]็š„ๆ”น้€ฒไน‹ๅพŒ๏ผŒ็พๅœจ็œŸๆญฃๅฏฆ็พๅœจ gensim ๅ‡ฝๅผๅบซ็š„ๆ˜ฏๆ”น้€ฒไน‹ๅพŒ็š„็‰ˆๆœฌใ€‚ไฝฟ็”จไธŠ็›ธ็•ถๆ–นไพฟ๏ผŒๅฐฑๅช่ฆ import ้€ฒไพ†ไน‹ๅพŒ็›ดๆŽฅๅ‘ผๅซๅณๅฏใ€‚ ไปŠๅคฉ็š„ๆ–‡็ซ ๆ„Ÿ่ฆบไธŠๅฆ‚ๆžœๅชๆ˜ฏๆŠŠ gensim็š„summarize functionๆ‹ฟๅ‡บไพ†็”จๅฐฑ็ตๆŸไบ†็š„่ฉฑ๏ผŒๅฅฝๅƒๆœ‰้ปž็ฝชๆƒกๆ„Ÿ๏ผŒ้‚„ๆ˜ฏไพ†็ญ†่จ˜ไธ€ไธ‹TextRankๆผ”็ฎ—ๆณ•็š„ไธ€ไบ›ๆ‘˜่ฆๅฅฝไบ†ใ€‚ TextRank ๆ นๆ“š[2]็š„ๆ่ฟฐ๏ผŒTextRank็š„ๆ ธๅฟƒๆฆ‚ๅฟตๆ˜ฏๅ€Ÿ็”จGoogleๆ—ฉๅนดๅฐ‡็ถฒ้ ้‡่ฆๆ€ง้‡ๅŒ–็š„ๆ–นๆณ•๏ผŒไนŸๅฐฑๆ˜ฏๅฐ‡ๆ–‡ๅญ—๏ผˆ็ถฒๅŸŸๅ/URL๏ผ‰่ฝ‰ๅŒ–ๆˆไธ€ๅ€‹ๅœ–ไพ†่กจ็คบใ€‚ ๅŽŸๆ–‡[2]ๆœ‰ๆๅˆฐ๏ผŒไฝฟ็”จ้€™้กžๆ–นๆณ•็š„ๆผ”็ฎ—ๆณ•้ƒฝๆœ‰ๅนพๅ€‹ๅ…ฑ้€š็š„ๆญฅ้ฉŸๅฆ‚ไธ‹๏ผš 1. Identify text units that best define the task at hand, and add them as vertices in the graph. 2. Identify relations that connect such text units, and use these relations to draw edges between vertices in the graph. Edges can be directed or undirected, weighted or unweighted. 3. Iterate the graph-based ranking algorithm until convergence. 4. Sort vertices based on their final score. Use the values attached to each vertex for ranking/selection decisions. TextRank้œ€่ฆๅ…ˆๅฐ‡่ผธๅ…ฅๆ–‡ๅญ—ๅšTokenization๏ผŒ็„ถๅพŒๅฐ‡ๆฏไธ€ๅ€‹่ฉž็•ถไฝœไธ€ๅ€‹้—œ้ตๅญ—ไธฆ้–‹ๅง‹ๆ”พ้€ฒๅœ–่ฃกใ€‚่‡ณๆ–ผ้€™ๅ€‹้—œ้ตๅญ—่ฉฒๆ€Ž้บผๆๅ–๏ผŒ้€™ๅˆๆ˜ฏๅฆไธ€ๅ€‹ๅŽŸๆ–‡็š„้‡้ปžไน‹ไธ€๏ผškeyword extractionใ€‚ Keyword Extraction็š„ๆญฅ้ฉŸๅฆ‚ไธ‹๏ผš ็ฌฌไธ€ๆญฅ๏ผšTokenization The text is tokenized, and annotated with part of speech tags โ€” a preprocessing step required to enable the application of syntactic filters. ็ฌฌไบŒๆญฅ๏ผšAdd all lexical units to the graph All lexical units that pass the syntactic filter are added to the graph, and an edge is added between those lexical units that co-occur within a window of words ็ฌฌไธ‰ๆญฅ๏ผšๅปบ็ซ‹ไธ€ๅ€‹็„กๅ‘ๆœ‰ๆฌŠ้‡็š„ๅœ–๏ผŒๆฌŠ้‡้ƒฝๅˆๅง‹ๅŒ–็‚บ 1 ๏ผŒๅ†ไฝฟ็”จRanking Algorithm ไพ†ๆ›ดๆ–ฐๆฏๅ€‹้‚Š็š„ๆฌŠ้‡ After the graph is constructed (undirected unweighted graph), the score associated with each vertex is set to an initial value of 1, and the ranking algorithm described in section 2 is run on the graph for several iterations until it converges โ€” usually for 20โ€“30 iterations, at a threshold of 0.0001 ๆฌŠ้‡็”ฑไปฅไธ‹ๅ…ฌๅผๆฑบๅฎš๏ผš ็ฌฌๅ››ๆญฅ๏ผšๅฐๅปบ็ซ‹ๅฅฝ็š„็„กๅ‘ๅœ–ๅš Topological sort๏ผŒไฟ็•™ๆœ€ๅ‰้ข็š„ๅนพๅ€‹ๆฌŠ้‡ใ€‚ Once a final score is obtained for each vertex in the graph, vertices are sorted in reversed order of their score, and the top vertices in the ranking are retained for post-processin ็ฌฌไบ”ๆญฅ๏ผšPost Processing During post-processing, all lexical units selected as potential keywords by the TextRank algorithm are marked in the text, and sequences of adjacent keywords are collapsed into a multi-word keyword. 
ๅŽŸๆ–‡็ตฆ็š„ไธ€ๅ€‹็ฐกๅ–ฎ็š„็ฏ„ไพ‹๏ผš ๅŽŸๆ–‡ๆๅ‡บ็š„ๆผ”็ฎ—ๆณ•ไธญๅฆไธ€ๅ€‹ไนŸ่ ป้‡่ฆ็š„้ƒจๅˆ†ๆ˜ฏSentence Extraction ใ€‚ๆ ธๅฟƒ่ง€ๅฟต่ˆ‡Keyword Extraction ๅทฎไธๅคš๏ผŒ็„กๅŠ›ๆ•ด็†ไบ†โ€ฆ ็›ดๆŽฅไพ†็œ‹ๅฆ‚ไฝ•ไฝฟ็”จgensimๅ‡ฝๅผๅบซไพ†่ชฟ็”จ้€™ๅ€‹ๆผ”็ฎ—ๆณ•ๅงใ€‚ Gensim ไฝฟ็”จGoogle Colabๅš้€™ๅ€‹ๅทฅไฝœๅ…ถๅฏฆ่ ปๅฟซ็š„๏ผŒๅฐฑๅฎ‰่ฃๅฅฝgensimไน‹ๅพŒ็›ดๆŽฅๅผ•็”จๅณๅฏใ€‚ ๅฎ‰่ฃ๏ผš ่ผธๅ…ฅ็›ฎๆจ™ๆ–‡็ซ ๏ผŒ็›ดๆŽฅๅผ•็”จ summarize ๅ‡ฝๅผๅณๅฏ๏ผš ๅ…ถไธญ ratio ็š„ๆ„ๆ€ๆ˜ฏๅฎน่จฑ summarize() ๆจกๅž‹็…งๆŠ„ๅŽŸๆ–‡็š„ๆฏ”ไพ‹ใ€‚ ๅŽŸๆ–‡้žๅธธ่ฝ่ฝ้•ท๏ผŒๅฐฑไธๆ”พไธŠไพ†ไบ†๏ผŒๆƒณ็œ‹็š„่ฉฑ็›ดๆŽฅ print(text) ๅณๅฏใ€‚ ้€™่ฃกๅชๆ”พ่ƒๅ–ๅ‡บไพ†็š„ Text Summary๏ผš ไปฅๅŠๆŠ“ๅˆฐ็š„Keywords๏ผš
Day 91 โ€” Gensim for Text Summarization
1
day-91-gensim-for-text-summarization-12aeb6485326
2018-08-20
2018-08-20 02:09:01
https://medium.com/s/story/day-91-gensim-for-text-summarization-12aeb6485326
false
578
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Falconives
null
250d8013fad2
falconives
11
19
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-04
2017-12-04 14:41:50
2017-12-04
2017-12-04 14:45:50
0
false
en
2017-12-04
2017-12-04 14:46:55
0
12af01d50330
1.14717
0
0
0
Hi friends, I’m new to Medium, and I want to tell you a little about myself. My name is Richard Hokkins, I come from England. All my life I had…
5
My first post on cloud computing Hi friends, I’m new to Medium, and I want to tell you a little about myself. My name is Richard Hokkins, and I come from England. All my life I have had a craving for computer technologies and programs, which has not subsided even now. I cannot say that I am the best programmer, but everything connected with computers fascinates me. Today I would like to tell you about my new hobby: cloud computers. Yes, this direction is now gaining momentum because of its mobility. Perhaps for some it will be a novelty, but you can use powerful computing even from your mobile phone. Right now the main part of the cloud computing market is occupied by Amazon and Google. I want to note that the price for such services is currently not the lowest, which puts this computing power out of reach for many users. I really like to follow new technical projects when it comes to computer technologies. I have followed many projects, including Golem and SONM, and in some of them I even invested through an ICO. A couple of months ago I noticed another quite unique and ambitious project. Boosteroid is another cloud computer, but with centralized capacities. As far as I have studied the project, they want to make a cloud-based service even for hardcore gamers. I can say that I really liked the project, and I can only wish them good luck. Those who are interested in following the project can find it through search; I do not want to advertise here. Coming back to my point: the world does not stand still, and cloud computing is now an excellent option for replacing expensive laptops while staying lightweight and autonomous.
My first post on cloud computing
0
my-first-post-on-cloud-computing-12af01d50330
2018-04-03
2018-04-03 15:11:06
https://medium.com/s/story/my-first-post-on-cloud-computing-12af01d50330
false
304
null
null
null
null
null
null
null
null
null
Cloud Computing
cloud-computing
Cloud Computing
22,811
Richard Hokkins
Ico enthusiast
febaded0de78
infared727
0
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-02
2018-07-02 23:32:31
2018-07-02
2018-07-02 23:34:43
1
false
zh-Hant
2018-07-06
2018-07-06 13:27:29
1
12af6deb6fa4
0.332075
0
0
0
โ€˜ ๆฉŸๆขฐๅ–ไปฃ็š„ๆ˜ฏ้ซ”ๅŠ›๏ผŒAI ๅ–ไปฃ็š„ๆ˜ฏๅˆคๆ–ทใ€‚ๅฆ‚ๆžœไบบๆœ‰้ˆใ€้ญ‚ใ€้ซ”็š„่ฉฑ๏ผŒๆฉŸๆขฐๅ–ไปฃ็š„ๆ˜ฏใ€Œ้ซ”ใ€๏ผŒAI ๅ–ไปฃ็š„ๆ˜ฏใ€Œ้ญ‚ใ€๏ผŒ็„กๆณ•่ขซๅ–ไปฃ็š„ๆ˜ฏใ€Œ้ˆใ€ใ€‚้€™ๅฐฑๆ˜ฏๅ‰ตๆ–ฐๅทฅๅ ดๆŽ้–‹ๅพฉ่ชช๏ผŒ็•ถAI่งฃๆฑบไบ†ๅพˆๅคšใ€Œไบบใ€็š„ๅทฅไฝœไน‹ๅพŒ๏ผŒไบบ้–‹ๅง‹ๅšไธ€ไปถไบ‹ๅฐฑๆ˜ฏใ€Œๆœ‰ๆ„›็š„ๆœๅ‹™ๆฅญใ€ใ€‚ โ€” ็›งๅธŒ้ตฌ ใ€ŠC2B้€†ๅ•†ๆฅญๆ™‚ไปฃใ€‹โ€™
5
ๆ˜Žๅคฉ็š„ๆ–ฐๅธ‚ๅ ด๏ผšC2B ๆ–ฐไบ”ๆ–ฐๆ™‚ไปฃ โ€˜ ๆฉŸๆขฐๅ–ไปฃ็š„ๆ˜ฏ้ซ”ๅŠ›๏ผŒAI ๅ–ไปฃ็š„ๆ˜ฏๅˆคๆ–ทใ€‚ๅฆ‚ๆžœไบบๆœ‰้ˆใ€้ญ‚ใ€้ซ”็š„่ฉฑ๏ผŒๆฉŸๆขฐๅ–ไปฃ็š„ๆ˜ฏใ€Œ้ซ”ใ€๏ผŒAI ๅ–ไปฃ็š„ๆ˜ฏใ€Œ้ญ‚ใ€๏ผŒ็„กๆณ•่ขซๅ–ไปฃ็š„ๆ˜ฏใ€Œ้ˆใ€ใ€‚้€™ๅฐฑๆ˜ฏๅ‰ตๆ–ฐๅทฅๅ ดๆŽ้–‹ๅพฉ่ชช๏ผŒ็•ถAI่งฃๆฑบไบ†ๅพˆๅคšใ€Œไบบใ€็š„ๅทฅไฝœไน‹ๅพŒ๏ผŒไบบ้–‹ๅง‹ๅšไธ€ไปถไบ‹ๅฐฑๆ˜ฏใ€Œๆœ‰ๆ„›็š„ๆœๅ‹™ๆฅญใ€ใ€‚ โ€” ็›งๅธŒ้ตฌ ใ€ŠC2B้€†ๅ•†ๆฅญๆ™‚ไปฃใ€‹โ€™ ๅœ–/Unsplash ้ฆฌ้›ฒๆๅ‡บ็š„ใ€Œๆ–ฐไบ”ๆ–ฐใ€ๆ™‚ไปฃ๏ผŒๅŒ…ๆ‹ฌ๏ผšๆ–ฐ้›ถๅ”ฎใ€ๆ–ฐ่ฃฝ้€ ใ€ๆ–ฐ้‡‘่žใ€ๆ–ฐๆŠ€่ก“ใ€ๆ–ฐ่ƒฝๆบใ€‚ๅผท่ชฟไปฅใ€Œไบบใ€็‚บไธญๅฟƒ๏ผŒๅ…ถๆ ธๅฟƒๆ€็ถญๆ˜ฏ๏ผšใ€Œไฝ ไบบๅœจๅ“ช่ฃก๏ผŒ้‚ฃ่ฃกๅฐฑๆ˜ฏๆœๅ‹™ๆˆ–่ฃฝไฝœ็š„ไธญๅฟƒใ€‚ใ€ใ€‚ๅฆ‚ไน‹ๅ‰ๆ–‡็ซ ไธญๆๅˆฐ๏ผšไบบๅ€‘ๆถˆ่ฒป็ฟ’ๆ…ฃๅŠๆ–นๅผๅพžๅ‚ณ็ตฑ็š„ใ€Œๅ ดใ€่ฒจใ€ไบบใ€๏ผŒ้€ๆผธๆผ”่ฎŠๆˆใ€Œไบบใ€่ฒจใ€ๅ ดใ€๏ผŒๅธ‚ๅ ดๅคงๆ–นๅ‘้€ๆผธๅพž่ณฃๆ–นๅธ‚ๅ ด่ฝ‰่ฎŠๆˆ่ฒทๆ–นๅธ‚ๅ ดใ€‚ๅ•†ๅฎถๅฟ…้ ˆ้ฆ–ๅ…ˆๆŽŒๆกๆ•ธๆ“šๅŠๆถˆ่ฒป็ฟ’ๆ…ฃ๏ผŒๅœจๅ› ๅœฐๅˆถๅฎœๆŽจๅ‡บๅˆ้ฉ็š„็”ขๅ“ๆˆ–ๆœๅ‹™ๆ–นๆกˆใ€‚้€™ๆจฃๅพžๅ‚ณ็ตฑ็š„ใ€ŒB2Cใ€้€ๆผธๆผ”่ฎŠ็‚บใ€ŒC2Bใ€็š„้Ž็จ‹๏ผŒๅฏไปฅ่ชชๆ˜ฏ็”ฑใ€Œๆ–ฐไบ”ๆ–ฐใ€็š„ๅทจ่ผช็ทŠๅฏ†ไบคไบ’่กŒ้€ฒ็š„็ตๆžœใ€‚ ๅทฅๆฅญๅŒ–ๆ™‚ไปฃ๏ผŒ่’ธๆฑฝๆฉŸ็š„็™ผๆ˜Ž๏ผŒไบบ้กž็ฌฌไธ€ๆฌกๅšๅˆฐไฝฟ็”จๆฉŸๅ™จๅ–ไปฃไบบๅŠ›็š„ๆป‹ๅ‘ณ๏ผŒไธๅƒ…ๅฏไปฅๅฟซ้€Ÿไธ”ๅคง้‡็š„็”Ÿ็”ข๏ผŒ่ฎ“ๆถˆ่ฒปๅ“่ฎŠๅพ—ๆ›ดๆ˜“ๆ–ผๅ–ๅพ—๏ผŒ็”š่‡ณ้€ฒไธ€ๆญฅๆŽจๅ‹•ไบ†็คพๆœƒๆ”น้ฉ่ˆ‡ไบบๆฌŠๆ„่ญ˜ไธŠๆผฒใ€‚ๅฆ‚ไปŠๆ–ฐๆ™‚ไปฃ็š„ใ€Œๆ–ฐ่ƒฝๆบใ€๏ผŒไนŸๅฐฑๆ˜ฏใ€Œๆ•ธๆ“šใ€( Data )๏ผŒ่—‰็”ฑ็œพๅคš็š„ๆ„Ÿ็Ÿฅๅ™จ ( Sensor )๏ผŒๅฐ‡ไบบ้กž่กŒ็‚บ่ปŒ่ทก่—‰็”ฑ็ตฑ่จˆๅˆ†ๆž๏ผŒๆœ‰ๆ™‚็”š่‡ณๅฏไปฅๆฏ”ไบบ้กžๆœฌ่บซๆ›ดไบ†่งฃ่‡ชๅทฑใ€‚๏ผˆ่ชฐๆœƒๆ›ดไบ†่งฃไฝ ไธ€ๅคฉ่ตท่บซไบ†ๅนพๆฌก๏ผŸๅ‘ผๅธ้ ป็Ž‡ๅคšๅฐ‘๏ผŸ่ตฐ้Žๅบ—ๅฎถๆ™‚ๆŠฌ้ ญ็œ‹ไบ†ๆŸๅ•†ๅ“ๆจ™็‰Œๅนพ็ง’๏ผŸไธ€ๅคฉ้–‹ๅ•ŸๆŸ็คพ็พคๆ‡‰็”จ็จ‹ๅผๅนพๆฌก๏ผŸ็€่ฆฝๅ“ชๅ€‹ๅ•†ๅ“้ ้ขๆ™‚้–“ๆœ€้•ท๏ผŸๆ„Ÿ็Ÿฅๅ™จๅฏไปฅใ€‚๏ผ‰ C2B ๆœ€้‡่ฆ็š„ๆฆ‚ๅฟต๏ผŒไนŸๅฐฑๆ˜ฏใ€Œๅ€‹ไบบๅŒ–ใ€๏ผŒ่ƒฝ่—‰็”ฑ้พๅคงไบคไบ’็š„ๅผฑ้€ฃ็ต๏ผŒๅฝขๆˆๅผทๅŠ›็š„ๅทจๆต๏ผŒ่—‰ๆญค่ฎ“ๆถˆ่ฒป่€…ๆˆ็‚บไพ›ๆ‡‰้ˆไธŠๆธธ๏ผŒๅƒ่ˆ‡ๅฎšๅƒนใ€‚๏ผˆไพ‹ๅฆ‚๏ผšใ€ŒPricelineใ€ ่ฎ“ๆถˆ่ฒป่€…่‡ช่จ‚ๆ‰€้œ€็š„ๆฉŸ็ฅจใ€ๆ—…้คจใ€ๆ—ฅ็”จๅ“็ญ‰ๅƒนไฝ๏ผŒๅœจๅฐ‹ๆฑ‚็›ธ้—œไผๆฅญไพ†ๆปฟ่ถณๆถˆ่ฒป่€…้œ€ๆฑ‚ใ€‚ๆŠ‘ๆˆ–ๆ˜ฏ่ฟ‘ๆœŸ็ˆ†็ด…๏ผŒๆดป่บ็”จๆˆถๅทฒ้” 3 ๅ„„ไบบ็š„ใ€Œๆ‹ผๅคšๅคšใ€๏ผŒ่ฎ“็”จๆˆถ้€้Ž็™ผ่ตทๅ’Œๆœ‹ๅ‹ใ€ๅฎถไบบใ€้„ฐๅฑ…็ญ‰็š„ๆ‹ผๅœ˜๏ผŒๅฏไปฅไปฅๆ›ดไฝŽ็š„ๅƒนๆ ผ๏ผŒๆ‹ผๅœ˜่ณผ่ฒทๅ„ช่ณชๅ•†ๅ“ใ€‚๏ผ‰ ไธๅƒ…้™ๆ–ผ็”ขๅ“๏ผŒ็”š่‡ณๆ˜ฏ็”ขๅ“่จญ่จˆ๏ผˆไพ‹ๅฆ‚๏ผšๆตท็ˆพ้–‹ๆ”พๅ‰ตๆ–ฐๅนณๅฐ : Haier Open Partner Ecosystem ๏ผŒ้–‹ๆ”พ่ฎ“ๆถˆ่ฒป่€…่‡ชๅทฑๆŒ‡ๅฎšไป–่ฆ็š„ๅฎถ้›ป่ฆๆ ผ๏ผŒ่€Œ้€™ๅ€‹่ฆๆ ผๆœƒๆ”พๅœจๅนณๅฐไธŠ้–‹ๆ”พ็ตฆ่ฃฝ้€ ๅ•†๏ผŒๅฆ‚ๆžœ่ฃฝ้€ ๅ•†่ฆบๅพ—ๅฏไปฅ่ฃฝ้€ ไธ”ๆœ‰ๅˆฉๅฏๅœ–๏ผŒๅฐฑๆœƒ็”Ÿ็”ขๅ‡บไธ€็ต„ๆ–ฐ็š„ๅฎถ้›ปๆˆๅ“ๅ‡บไพ†ใ€‚๏ผ‰ไนŸๅŒ…ๅซๅœจๅ…ถไธญใ€‚C2B็š„ๅ•†ๆฅญๆจกๅผ๏ผŒไนŸ่ฎ“ใ€Œ็พค็œพๅ‹Ÿ่ณ‡ใ€ใ€ๅŠใ€Œ็œพ็ฑŒใ€ไปฅๅŠ P2P ไบคๆ˜“็ญ‰๏ผŒๅฆ‚้›จๅพŒๆ˜ฅ็ญ่ˆฌๅ› ๆ‡‰่€Œ็”Ÿใ€‚ ้ฆฌ้›ฒๅœจ่ฌ›่ฟฐใ€Œไบ”ๆ–ฐใ€็š„ๆ™‚ๅ€™่ชช๏ผŒใ€Œ็œŸๆญฃ่กๆ“Šๅ„่กŒๅ„ๆฅญ็š„ใ€่กๆ“Šๅฐฑๆฅญใ€่กๆ“Šๅ‚ณ็ตฑ่กŒๆฅญ็š„ๆ˜ฏๆˆ‘ๅ€‘ๆ˜จๅคฉ็š„ๆ€ๆƒณ๏ผŒ็œŸๆญฃ่ฆๆ“”ๅฟƒ็š„ๆ˜ฏๆˆ‘ๅ€‘ๅฐๆ˜จๅคฉ็š„ไพ่ณดใ€‚ใ€ ๅœจๆœชไพ†ไบ”ๆ–ฐ้”ๆˆ็š„C2Bๆ™‚ไปฃ๏ผŒๆ˜ฏๆ นๅŸบๆ–ผไบ’่ฏ็ถฒ่€Œๅฝขๆˆ็š„้พๅคง็คพ็พคใ€‚็›งๅธŒ้ตฌ ใ€ŠC2B้€†ๅ•†ๆฅญๆ™‚ไปฃใ€‹ๆๅˆฐ๏ผšใ€Œๆœชไพ†ไบบ้กž็š„ๅŠ›้‡ไธๅœจๆ–ผ้ซ”ๅŠ›ใ€ไนŸไธๅœจๆ–ผๅˆคๆ–ท๏ผŒ่€Œๅœจๆ–ผ่ชฐ่ƒฝๅค ็‡Ÿ้‹้€™ๅ€‹้พๅคง็š„็คพ็พคใ€‚่ฆ็‡Ÿ้‹้€™ๅ€‹็คพ็พคๅฐฑ่ฆๆœ‰ๆ„›ใ€่ฆๆœ‰ไบ’ๅ‹•ใ€่ฆๆœ‰ๅŒ็†ๅฟƒ๏ผŒ่ฆๆœ‰ๆถˆ่ฒป่€…้ซ”้ฉ—๏ผŒ้€™ไบ›ๅ…ถๅฏฆ้ƒฝๆ˜ฏๆ„›็š„ๆœๅ‹™ๆฅญ็š„็ฏ„็–‡๏ผŒ่ฎ“ไบบ่ˆ‡ไบบไน‹้–“ไบ’ไฟกใ€่ฎ“ไบบ่ˆ‡ไบบไน‹้–“ๆ›ดๅฎนๆ˜“ๅˆไฝœใ€‚ไบบ้š›ไบ’ๅ‹•็š„็คพๆœƒ่ณ‡ๆœฌ่ถŠไพ†่ถŠ้‡่ฆ๏ผŒ็ฎก็†็คพๆœƒ่ณ‡ๆœฌ็š„๏ผŒไธๆ˜ฏ้ ้ซ”ใ€ไนŸไธๆ˜ฏ้ ้ญ‚๏ผŒ่€Œๆ˜ฏ้ ้ˆใ€‚ใ€ๆ‰€่ฌ‚็š„่ณ‡ๆœฌ๏ผŒๅฐฑๆœ‰ๆŠ•ๅ…ฅ/็”ขๅ‡บ็š„ๆฆ‚ๅฟต๏ผŒๆœชไพ†็š„็”ขๅ‡บ้ƒฝๆ˜ฏๆ นๅŸบๆ–ผไฝ ๅœจไบ’่ฏ็ถฒไธŠ็š„่ช 
ไฟก่กŒ็‚บๅ’Œๅ€‹ไบบ้ญ…ๅŠ›๏ผŒ้€™ไนŸๆœƒๅฝขๆˆๅพŒ AI ๆ™‚ไปฃๅพˆ้‡่ฆ็š„็ถ“ๆฟŸๆˆ้•ทๆ•ธๆ“šใ€‚
ๆ˜Žๅคฉ็š„ๆ–ฐๅธ‚ๅ ด๏ผšC2B ๆ–ฐไบ”ๆ–ฐๆ™‚ไปฃ
0
ๆ˜Žๅคฉ็š„ๆ–ฐๅธ‚ๅ ด-c2b-ๆ–ฐ้›ถๅ”ฎๆ™‚ไปฃ-12af6deb6fa4
2018-07-06
2018-07-06 13:27:29
https://medium.com/s/story/ๆ˜Žๅคฉ็š„ๆ–ฐๅธ‚ๅ ด-c2b-ๆ–ฐ้›ถๅ”ฎๆ™‚ไปฃ-12af6deb6fa4
false
35
null
null
null
null
null
null
null
null
null
Data
data
Data
20,245
้พๅคฉ้ธ
ๆœƒ่จˆใ€ๆณ•ๅพ‹็š„ๅฐˆๆฅญ่จ“็ทด๏ผŒ็•ขๆฅญๅป่ฝ‰่บซๆŠ•ๅ…ฅ็ง‘ๆŠ€็ด…ๆตทๆˆฐๅ ดๆ“”ไปปๅœ‹้š›ๆฅญๅ‹™่กŒ้Šทใ€‚็›ฎๅ‰ๅœจ็ง‘ๆŠ€ๅปบๆๆ–ฐๅ‰ต้ ˜ๅŸŸ่€•่€˜ใ€‚ๆ“…้•ทๅ…จ็ƒไพ›ๆ‡‰้ˆไฝˆๅฑ€ใ€็ญ–็•ฅ้–‹็™ผใ€็ณป็ตฑๆ•ดๅˆ่กŒ้Šทใ€ๅ‰ตๆ–ฐๅ•†ๆฅญๆจกๅผใ€ๅฎคๅ…ง่ฃไฟฎ่จญ่จˆใ€‚ๅฐ‡ไปฅๆทบ้กฏๆ˜“ๆ‡‚็š„่ง€้ปžๅˆ†ไบซๅ…จ็ƒๆ–ฐๅ‰ตๅธ‚ๅ ดใ€‚
a71bc6a88a1e
winifred.chung777
1
4
20,181,104
null
null
null
null
null
null
0
null
0
244ef586c71e
2018-02-11
2018-02-11 23:26:53
2018-02-11
2018-02-11 23:39:45
3
false
en
2018-02-13
2018-02-13 00:11:13
40
12afb059caf
6.361321
9
0
0
Co-written by Alan Hucks and Dijana Sneath
5
Kiaora! From The Place of the Possible Co-written by Alan Hucks and Dijana Sneath Known affectionately as the 'world's coolest little capital', Wellington is New Zealand's centre of government and the world's southernmost capital city, boasting the best quality of life in the world. With its vibrant arts scene, renowned coffee culture, craft beer selection, beautiful coastline, and active outdoor lifestyle, Wellington offers its residents a perfect balance between metropolitan living and escape to the great outdoors. Wellington has the highest concentration of web-based and digital technology companies in New Zealand and is considered one of the top 25 most innovative cities in the Asia-Pacific region. The region is home to a number of world-leading companies and initiatives in AI: Xero is spearheading the way in FinTech with AI accounting software for massive data sets; Datacom is NZ's largest IT tech exporter; and Weta Digital has created some of the most innovative VFX the world has seen, from Lord of the Rings to Avatar. Other notable companies include the holographic VR company 8i, and Magic Leap, whose game team is inventing the future of AR. Wellington's talent pool of well-educated, worldly and skilled people is its greatest asset, and allows it to punch well above its weight in the tech industry. We have such a desire for tech talent that extensive schemes are set up to attract the world's best: such as the government's Edmund Hillary Fellowship to bring over entrepreneurs, or local development agency WREDA's recent LookSee campaign, which went viral in seeking out talent to live and work here: 48,000 applications resulted in 100 offers to lucky job seekers, and many of the roles were for AI practitioners. Small, Compact and Feisty, Wellington is punching above its weight in AI and public good innovation The abundance of creative tech in Wellington has been built up over the last 20 years, and the mix of movie, startup and ICT has led to a very active and innovative army of techies doing all sorts of wonderful things in this city. The Wellington AI team is incredibly excited to be the first in Australasia to join the global City.AI community! Launch of Wellington.AI In mid-November 2017, Wellington AI hosted its first ever event, which was a huge success! We were massively overbooked, with a full house of around 100 people showing up on the night. Following our theme of "AI for Public Good", our guest speakers inspired and entertained us all with their projects designed to help humanity. Michael Lovegrove (CEO of Bot the Builder) explained how AI has the potential to save lives, speaking vividly about his new chatbot for 'Youthline', a New Zealand youth support and development organisation, while Nick Gerritsen (investor and entrepreneur) officially launched SAM: the world's first AI politician! Our peer-to-peer clinic session over drinks and nibbles also sparked creative discussion in three key areas of interest: using AI to aid in pattern matching and volume recreation for VR, identifying biosecurity risk items in shipping containers or personal luggage via x-ray analysis, and getting Siri to understand Māori place names. If you're interested in hearing more, you can watch the full-length guest speaker presentations, and make sure you come along to our next event in March!
Lessons from the Inaugural Wellington AI Event The clinic session at our November event achieved exactly what it set out to do: build up the regional community and help connect practitioners with others who are interested in similar areas. It was fantastic to see the collaboration on the night, and even more exciting to hear updates about the challenge leaders' progress since. The first challenger, Steve Swallow from Datacom, brought up the issue of getting the likes of Siri and Google Assistant to understand Māori place names. Steve set this as a very specific challenge for Māori language understanding and translation, but noted that there are many other needs for this in Aotearoa. As the official indigenous language of New Zealand, Te Reo (the Māori language) is recognised as a taonga (treasure) and is of great cultural importance to all New Zealanders, but most platforms are not set up to accommodate Māori pronunciation. While this is very specific to Aotearoa New Zealand, there are undoubtedly similar issues with indigenous languages across the world. Thanks to the clinic session discussions, Steve was introduced to practitioners working on a similar issue for a radio station in the Far North of NZ, Te Hiku o Te Ika: a perfect example of City.AI's ability to build connections between practitioners and foster collaboration! Barry Polley from the Ministry for Primary Industries discussed using AI algorithms to identify biosecurity risk items in shipping containers and luggage. This is also quite a uniquely Kiwi problem, given our famously strict biosecurity concerns as an island nation, but it has many transferable applications across other areas of security. Acting on a suggestion offered during the Wellington AI clinic session, Barry's team has recently released training sets of imagery with a Creative Commons licence. Barry mentioned that from there, it's a pretty short leap to hosting or seeding a public competition, which MPI intends to do early in the new year. He says that based on follow-up activity, the local community is small but active and growth-focused, so competitors help one another to grow the overall market. Insights from Wellington SAM has captured worldwide attention as the world's first AI politician, with interest from Radio Sputnik to Vanity Fair, and was even identified by NASDAQ as one of the two most exciting AI projects in their global AI sector research report! With a vision to have a positive impact on political discussion and democracy, the project aims to use AI and natural language processing technology both to provide a better way for the electorate to engage with politics, and to inform better political decision-making on issues like water quality, housing, or climate change. SAM is being developed by Wellington-born company TouchTech. An unsexy solution to dirty data Wellington's own Intela AI recently launched a private beta for their Farrago intelligent data cleansing tool. Powered by deep learning, Farrago can save hundreds of (mundane) man-hours cleaning data or identifying common records across multiple databases. There are use cases across every type of organisation and industry: from duplicates in CRM databases, to local government issuing multiple building consents. Read more and join the beta list. Adolescent depression, anxiety and suicide are at the highest levels they have ever been, with New Zealand sadly having the highest rate of teen suicide in the developed world.
Bot the Builder is working with Youthline (a helpline for adolescents) to develop an artificially intelligent chatbot that can provide advice for given situations, so that troubled teens have support 24/7, not just when a call centre is open. The aim here is not to replace counsellors. Rather, it is to empower them so that they can serve more people who need the help. By having a system that possesses some form of intelligence, we have the ability to triage every client to understand exactly what they need, maximising human-to-human connection for those who desperately need it. NZ AI Heroes Xero is a leading FinTech pioneer born out of Wellington, with a propitious future in AI. Xero is very active in three key areas: ensuring trust and transparency around data use, leveraging ML to power a 'low touch' accounting product, and extending their platform to become the heart of the small business ecosystem. Accounting data is an incredible asset for data science and AI. Combined with other data, Xero removes or reduces repetitive tasks and makes suggestions to help customers grow healthy businesses. Xero runs a wide range of ML and AI projects, ranging from computer vision, to predicting accounting actions, to network analysis of the small business economy. The Auckland-based Soul Machines is also making a splash in the AI industry, with their lifelike, emotionally responsive Digital Humans. With a vision to humanize computing to better humanity, Soul Machines is revolutionizing the interface between AI and humans. Even scammers are getting some AI heat with Re:scam, an AI platform built to keep scammers busy answering questions instead of targeting new victims. This tool disrupts scammers to reduce their effectiveness and damage their profits. Local Events New Zealand has a political agenda for AI, and the AI Forum is keeping pressure on policy makers to keep driving forward alongside the community. Come along to AI Day in March 2018 to hear more! As the centre of government, everything GovTech is of great importance to Wellington. Reaching out internationally, New Zealand is part of the D5 global leaders in digital government, and Wellington is getting ready to host the world's most innovative public sector leaders for the D5 summit in February 2018. This also coincides with the Digital Nations 2030 summit in Auckland. After the astounding success of our last event, we at Wellington AI are extremely excited to announce that we will be hosting our next gathering on the 1st of March. We have another two excellent guest speakers lined up for you: Kameron Christopher, Co-founder & CTO at Intela AI, and Simon Carryer, Data Scientist at Xero. Register now to secure your tickets! We look forward to seeing you all there!
Kiaora! From The Place of the Possible
96
kiaora-from-the-place-of-the-possible-12afb059caf
2018-04-23
2018-04-23 08:50:24
https://medium.com/s/story/kiaora-from-the-place-of-the-possible-12afb059caf
false
1,540
Making knowledge on #appliedAI accessible
null
cityai
null
Applied Artificial Intelligence
hello@city.ai
cityai
ARTIFICIAL INTELLIGENCE,MACHINE LEARNING,DEEP LEARNING,COMPUTER SCIENCE,NATURALLANGUAGEPROCESSING
thecityai
New Zealand
new-zealand
New Zealand
4,833
Alan Hucks
null
21e80c4624eb
alanhucks
31
33
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-18
2018-05-18 14:29:41
2018-06-13
2018-06-13 15:39:21
0
false
en
2018-06-13
2018-06-13 16:04:08
0
12b00b14a4f9
3.120755
10
0
0
Measures of central tendency are also known as the first moment of business decision.
5
Data Science Statistics: Measures of Central Tendency [Explanation & Code in R and Python] Measures of central tendency are also known as the first moment of business decision. To begin with, I would like to elaborate on why this is important. When we first get a dataset to work with, we usually check which features of the dataset are important to us and whether they are numerical or categorical. In analysing a numerical dataset (or the numerical features of a dataset), the most common first step is to check where the centre of the data is. Note: features are the columns of our dataset. There are 3 different ways to check the centre of our data: Mean Median Mode The method chosen can greatly influence the insights people take from the data. Mean The mean (aka average) is the average of all the numbers in the respective feature of our dataset. Formula: x_bar = ( Σ xi ) / n Below are the steps you need to follow to calculate the mean: Sum up all the numbers in the dataset. Divide by the total count of numbers in the dataset, n. As you add more points, the mean shifts around, and it always depends on every point. So, when it comes to measures of the centre, it is not advisable to depend solely on the mean, as it doesn't tell the whole story and can be misleading in some cases. Outliers affect the value of the mean a lot. Outliers: values which are very different from the rest of the data; they don't represent the dataset and can often be excluded, as they don't help in drawing insights from the data. For example, consider the dataset below: 1,4,4,5,6,3,2,1,3,5 Mean of the above dataset: (1+4+4+5+6+3+2+1+3+5) / 10 = 3.4 Now, let's add an outlier: suppose we add 20 to this dataset 1,4,4,5,6,3,2,1,3,5,20 Mean of the above dataset: (1+4+4+5+6+3+2+1+3+5+20) / 11 = 4.9 As we can see, just adding one outlier caused a drastic shift in the mean. So, we can now imagine how outliers can affect a dataset. Since the mean is affected by outliers, it is generally not recommended to rely on the mean alone for gaining insights about the data or concluding anything about it. Let's code now. Python :

def mean(x):
    return sum(x) / len(x)

x = [1, 2, 3, 4, 5]
mean(x)

Output : 3.0 R :

a = c(57,61,65,63,63,64,64,65,63,67,66,71,75,67,67,70,74,81,85)
mean(a)

Output : 67.78947 Median The median of a dataset is the value that divides the dataset into two halves, provided the data is ordered in ascending order. It is usually denoted by M. Below are the steps you need to follow to calculate the median: Order the numbers in ascending order from smallest to largest. If the total count of numbers in the dataset is odd, then the number exactly in the middle is the median. If the total count of numbers in the dataset is even, then take the two numbers that are exactly in the middle and average them to find the median. Many analysts prefer the median, as it depicts the centre of the available data more accurately. For example, consider the dataset below: 5,3,2,1 Step 1: Order them: 1,2,3,5 Now, as there is an even count of numbers in the dataset, go to Step 3. Step 3: (2+3)/2 = 2.5 Let's take an example of an odd count of values in a dataset: 3,1,4,2,5 Step 1: Order them: 1,2,3,4,5 Step 2: Middle value: 3, which is the median So, what should you use as the measure of centre? Mean or median? It varies from case to case, but reporting both is always handy.
Let's code now: Python :

def median(v):
    """finds the 'middle-most' value of v"""
    n = len(v)
    sorted_v = sorted(v)
    midpoint = n // 2
    if n % 2 == 1:
        # if odd, return the middle value
        return sorted_v[midpoint]
    else:
        # if even, return the average of the two middle values
        lo = midpoint - 1
        hi = midpoint
        return (sorted_v[lo] + sorted_v[hi]) / 2

x = [1, 2, 3, 4, 5]
median(x)

Output : 3.0 R :

a = c(57,61,65,63,63,64,64,65,63,67,66,71,75,67,67,70,74,81,85)
median(a)

Output : 66 Mode The mode of a dataset is the number which occurs most often in the dataset. Below are the steps you need to follow to calculate the mode: Step 1: Calculate the frequency of occurrence of each value in the dataset. Step 2: The mode is the value with the highest frequency. The mode is used less commonly. Let's code now: Python

from collections import Counter

def mode(x):
    """returns a list, as there might be more than one mode"""
    counts = Counter(x)
    max_count = max(counts.values())
    return [x_i for x_i, count in counts.items() if count == max_count]

x = [57,61,65,63,63,64,64,65,63,67,66,71,75,67,67,70,74,81,85]
mode(x)

Output : [63, 67] (both 63 and 67 occur three times, so there are two modes) R

Mode <- function(x) {
  ux <- unique(x)
  ux[which.max(tabulate(match(x, ux)))]
}

a = c(57,61,65,63,63,64,64,65,63,67,66,71,75,67,67,70,74,81,85)
Mode(a)

Output : 63 (the R function returns only the first mode it encounters)
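For everyday use, Python's built-in statistics module already implements all three measures. A minimal sketch (multimode requires Python 3.8 or newer and handles datasets with more than one mode):

import statistics

a = [57, 61, 65, 63, 63, 64, 64, 65, 63, 67, 66, 71, 75, 67, 67, 70, 74, 81, 85]

print(statistics.mean(a))       # ~67.789, matching the R output above
print(statistics.median(a))     # 66
print(statistics.multimode(a))  # [63, 67] -- both values occur three times

This matches the hand-rolled functions above and is usually the safer choice in production code.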
Data Science Statistics: Measures of Central Tendency [Explanation & Code in R and Python]
22
data-science-statistics-measures-of-central-tendency-explanation-code-in-r-and-python-12b00b14a4f9
2018-06-21
2018-06-21 07:03:05
https://medium.com/s/story/data-science-statistics-measures-of-central-tendency-explanation-code-in-r-and-python-12b00b14a4f9
false
827
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Pratik Solanki
null
8076ae62950
pratiks.solanki3
3
8
20,181,104
null
null
null
null
null
null
0
A B : 1+A*B
0 0   1+0*0 = 1
0 1   1+0*1 = 1
1 0   1+1*0 = 1
1 1   1+1*1 = 2 mod(2) = 0

// encrypt encrypts each model weight with the Paillier public key,
// storing the resulting ciphertexts on the model.
func (m *model) encrypt() {
	for _, weight := range m.weights {
		p := new(big.Int).SetInt64(weight)
		c, err := paillier.Encrypt(&m.enc.PublicKey, p.Bytes())
		if err != nil {
			panic(err) // encryption can fail, so the error must be handled
		}
		m.encryptedWeights = append(m.encryptedWeights, c)
	}
}
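A quick sanity check of the truth table above can be done in a few lines of Python (an illustrative addition, not part of the original snippet): the function 1 + A*B over binary arithmetic behaves exactly like a NAND gate.

# Verify that 1 + A*B (mod 2) reproduces the NAND truth table.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, (1 + a * b) % 2)
# Prints 1, 1, 1, 0 for the four input pairs: a NAND gate.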
2
252c3a26f4a5
2017-10-09
2017-10-09 14:53:06
2018-01-08
2018-01-08 09:28:12
11
false
en
2018-01-08
2018-01-08 09:28:12
20
12b113c879d6
7.998113
35
1
1
How Practical is Homomorphic Encryption for Machine Learning?
5
Encrypt your Machine Learning How Practical is Homomorphic Encryption for Machine Learning? We have a pretty good understanding of the application of machine learning and of cryptography as a security concept, but when it comes to combining the two, things become a bit nebulous and we enter fairly untraveled wilderness. While Fully Homomorphic Encryption is nothing new, we have not seen any practical and efficient applications so far. Recently, we spent time looking into homomorphic encryption to evaluate whether it is suitable for tackling some of our privacy- and security-related concerns. In this article we will introduce Homomorphic and Fully Homomorphic Encryption and discuss their impact on model encryption and training. What is Homomorphic Encryption? A homomorphism is a map between two algebraic structures of the same type that preserves the operations of the structures.¹ For our case, this means an operation (here addition or multiplication) on the encrypted data (ciphertext) preserves the result on the plaintext. Let me explain this in a bit more detail with another quote: Homomorphic encryption is a form of encryption that allows computations to be carried out on ciphertext, thus generating an encrypted result which, when decrypted, matches the result of operations performed on the plaintext.² Let's define our notation: a message m, a ciphertext c, encryption c = E(m), and decryption m = D(c). Assuming homomorphism, we then get: E(m1) * E(m2) = E(m1 * m2). It might have been obvious to some of you, but it's important here to mention another feature: operations between clear texts and cipher texts are homomorphic as well. Combining a plaintext m1 with a ciphertext E(m2) using the scheme's operation yields a ciphertext c1 that decrypts to the same result as c2 = E(m1 * m2). The operation of the structures was preserved, despite the value being encrypted. It is important to point out that c1 is as cryptographically secure as c2. This means even though we can do operations on the encrypted data, the resulting encrypted values are as secure as they were before. A more realistic example: if the RSA public key is modulus r and exponent e, then the encryption of a message x is given by modular exponentiation: E(x) = x^e mod r. The homomorphic property is then: E(x1) * E(x2) = x1^e * x2^e mod r = (x1 * x2)^e mod r = E(x1 * x2). (As a quick numeric check, with r = 3233 and e = 17: E(7) * E(3) mod r equals E(21).) Then there was noise... When implementing a cryptographic algorithm, you have to consider various ways to attack the cipher. If you take a deterministic encryption algorithm like RSA, you don't have semantic security. Without semantic security, the algorithm is vulnerable to chosen-plaintext attacks: A chosen-plaintext attack (CPA) is an attack model for cryptanalysis which presumes that the attacker can obtain the ciphertexts for arbitrary plaintexts.³ One way to protect against CPA is the introduction of a random component, so that encrypting a message twice results in two different ciphers. RSA solves this by using complex padding mechanisms that additionally entail a random component. The problem is, this random component is also affected by the homomorphic nature of our algorithm. So if we add two ciphers together, we also add up the introduced random components. While we build algorithms to remove the random component in the decryption step, there is an upper limit of randomness after which we are unable to recover the message. We call the introduced randomness "noise" in homomorphic encryption. If you consider the number of operations involved in training a model or inferring data with an encrypted model, we quickly end up unable to decrypt the noise-polluted ciphertext. Bootstrapping to the Rescue The concept of bootstrapping describes a recrypt step in the algorithm during which we decrypt a previous encryption inside a new encryption.
While you would think one needs to remove the outer encryption before decrypting the inside, bootstrapping allows for the more elegant decryption-inside-encryption. The benefit of doing a decryption is quite obvious: we are removing all existing noise in the cipher! Recrypt encrypts the ciphertext with the new private key (pk2) and then removes the first encryption (pk1) in the evaluation step using the encrypted secret key (sk). Fully Homomorphic Encryption An encryption scheme is fully homomorphic when it is possible to perform implicit addition and multiplication of plaintexts while manipulating only ciphertexts.⁴ The algorithms available in the 80s and 90s were all limited in their homomorphic features. If they were homomorphic, they either did not support both addition and multiplication, or the cryptographic restrictions made them unusable for fully homomorphic encryption. The holy grail of fully homomorphic encryption was finding a bootstrappable algorithm capable of simultaneous addition and multiplication. Image from Unsplash by Kristina Flour Why is this enough, you might ask? Imagine you want to encrypt binary plain text. Given two ciphers A, B, and addition and multiplication, we could compute the simple function 1+A*B. Keeping in mind that all arithmetic is binary (i.e., modulo 2), such a function produces the following truth table: If you haven't guessed already, this is a big deal. What you see above is a NAND gate, which is all we need to implement any Boolean function. With Boolean functions, we can do arbitrary computations. Such a system was proposed in 2009 by Craig Gentry using lattice-based cryptography, and described the first plausible construction for a fully homomorphic encryption scheme. Why do you want to encrypt your model? Assume your customers are unable to give you their data for privacy or security reasons. This means, if you want to apply your models to their data, you have to bring the model to them. But if sharing your valuable model is impossible or you are limited by privacy concerns, encrypting your model might be an option. You can train your model, encrypt it, and send it to your customers. In order for the customer to actually use the prediction, you have to provide them with a decryption service. Homomorphic Encryption of a model Encrypting a model is straightforward. In the following example I use the Paillier crypto system. The system is probabilistic, so it introduces some noise which we need to be aware of. The [Paillier] scheme is an additive homomorphic cryptosystem; this means that, given only the public key and the encryptions of x and y, one can compute the encryption of x+y.⁵ The scheme lacks the ability to compute the product of two ciphers without the private key and is thus not fully homomorphic, but it is enough for us if we want to apply the model to data. While the noise does not cause any issues in the encryption step, when we use the encrypted model to get a prediction, summing up the weights means summing up noise. Depending on the parameters we use in the key generation, we could end up with unusable values as a prediction. But for demonstration purposes, it is well suited; a minimal end-to-end sketch follows below. Training with homomorphic encrypted data With the amount of operations required to train a model, you need an algorithm that is bootstrappable. The introduced noise would overwhelm you in no time and render your model completely useless.
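As referenced above, here is a minimal sketch in Python of encrypting a model's weights with Paillier and letting the data owner compute an encrypted prediction, using the open-source python-paillier (phe) package. The linear model, its weights, and its features are invented for illustration, and no noise management is attempted:

# pip install phe  (python-paillier, an additively homomorphic Paillier implementation)
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# A made-up linear model: prediction = sum(w_i * x_i)
weights = [0.5, -1.2, 3.0]
encrypted_weights = [public_key.encrypt(w) for w in weights]

# The customer applies their plaintext features to the encrypted weights.
# Paillier supports ciphertext * plaintext and ciphertext + ciphertext,
# which is all a linear model needs.
features = [2.0, 1.0, 0.5]
encrypted_prediction = sum(w_enc * x for w_enc, x in zip(encrypted_weights, features))

# Only the holder of the private key can read the result.
print(private_key.decrypt(encrypted_prediction))  # 0.5*2 - 1.2*1 + 3.0*0.5 = ~1.3

Note that the model owner would keep the private key and expose decryption as a service, matching the workflow described in the article.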
Being bootstrappable means being able to remove noise using a recrypt step before it gets out of control. While we don't have to do the whole recrypt process on every operation, a fully homomorphic system comes with huge computational overhead. Although the literature tends to be a bit vague or hard to compare, here are a couple of quotes in rough chronological order: The first fully homomorphic algorithm was incredibly slow, taking 100 trillion times as long to perform calculations on encrypted data as plaintext analysis.⁶ 100 trillion? Okay, this isn't even feasible for a calculator... IBM has sped things up considerably, making calculations on a 16-core server over two million times faster than past systems.⁷ While IBM's numbers are a significant improvement over the initial implementation, their solution is still at least 50 million times slower than working with plain text. For the smallest parameter set, the time required for a homomorphic multiplication of ciphertexts was measured to be 3.461 milliseconds. For slightly larger parameters (~2x), homomorphic multiplication takes 8.509 milliseconds.⁸ Now we are getting somewhere. Or are we? 3.5 ms sounds pretty fast. Actually, it's not. The average human reaction time is 215 ms. Addition on plaintext takes a fraction of a nanosecond, and a nanosecond is a millionth of a millisecond. And you might have noticed that the computational time does not scale linearly with the parameter selection for the private key generation. Microsoft claims that its CryptoNets-based optical recognition system is capable of making 51,000 predictions per hour.⁹ This is an impressive number for an encrypted model, but we are only talking about predictions, not training an encrypted model. And while an encrypted model is a possible way to protect it, you still need to train it on raw data. Computational requirements are not the only concern: we also have to consider the size of the encrypted data or model. In conclusion, the encrypted data is one to three orders of magnitude larger than the unencrypted data. The exact factor depends on what is considered a natural representation of the data in its raw form.⁹ While the increased size of an encrypted model might not have a considerable impact, an industry whose value is based on large amounts of data might struggle if its training data needs to be homomorphically encrypted. Conclusion If the last section left you with a bitter taste, don't get me wrong, this is still an exciting topic. The advantages of encrypted models and training are obvious enough to expect improvements. While practical application is still a bit of a question mark, there are some very good starting points. The basic concept works, we have decent implementations, and even some community efforts. So did it solve our problems? We are not there yet, but homomorphic encryption is something we definitely keep on a sticky note. Epilogue If you don't have a mathematical background, we recommend Green's "Very casual introduction" to Fully Homomorphic Encryption. If you are familiar with most concepts in this article, Gentry's original paper "Computing Arbitrary Functions of Encrypted Data" will serve you plenty of details. If you happen to be interested in using HELib in Golang, you will find some help to get started with helib-go. [1] wikipedia.org/wiki/Homomorphism [2] wikipedia.org/wiki/Homomorphic_encryption [3] wikipedia.org/wiki/Chosen-plaintext_attack [4] Gentry, Craig. A fully homomorphic encryption scheme.
Stanford University, 2009. [5] wikipedia.org/wiki/Paillier_cryptosystem [6] theregister.co.uk/researchers_break_homomorphic_encryption [7] theregister.co.uk/ibm_open_source_homomorphic_crypto [8] Varia, Mayank, Sophia Yakoubov, and Yang Yang. "HEtest: a homomorphic encryption testing framework." International Conference on Financial Cryptography and Data Security. Springer Berlin Heidelberg, 2015. [9] Gilad-Bachrach, Ran, et al. "CryptoNets: Applying neural networks to encrypted data with high throughput and accuracy." International Conference on Machine Learning. 2016.
Encrypt your Machine Learning
432
encrypt-your-machine-learning-12b113c879d6
2018-06-15
2018-06-15 17:16:41
https://medium.com/s/story/encrypt-your-machine-learning-12b113c879d6
false
1,775
Corti is a machine learning company, providing accurate diagnostic advice to emergency services, allowing patients to get the right treatment faster.
null
null
null
Corti
hello@cortilabs.com
corti-ai
MACHINE LEARNING,DEEP LEARNING,SOFTWARE ENGINEERING,STARTUP,HEALTHCARE
corti_ai
Security
security
Security
41,279
Lukas Rist
null
ea1a71b9b6
glaslos
27
33
20,181,104
null
null
null
null
null
null
0
null
0
a2c8b344c70b
2018-09-17
2018-09-17 10:17:05
2018-09-17
2018-09-17 10:22:22
1
false
en
2018-09-17
2018-09-17 10:31:02
10
12b2bbe2633b
2.166038
4
0
0
By Pon Swee Man
5
Get To Know What Õpet's Strategic Partners Offer Its Users By Pon Swee Man Õpet's vision to revolutionise the incumbent global education system isn't one made alone; other institutions and partners all over the world share in its philosophy too. Be it its philanthropic purpose, technological forwardness or potential to right the current system's inefficiencies, Õpet has definitely been successful in garnering support and help from partners around the world, extending its revolutionary influence beyond local borders. Õpet's prized partners include Cambridge's Judge Business School, the American University of Central Asia (AUCA) and the World Blockchain Organisation's United Nations Blockchain Foundation (UNBF). Õpet users can gain access to Cambridge JBS' highly acclaimed personality profiling tool; experience the Õpet system integrated into an American school system in Central Asia; and hold onto Opet Tokens, a UNBF reserve currency. Converse with a Chatbot Empowered with Cambridge JBS's Personality Profiling Tool One of the Õpet chatbot's lauded functions is its ability to learn the likes, dislikes and interests of its users. This ability is powered by none other than one of the world's best personality profiling engines: Cambridge JBS's. It enables Õpet's AI engine to recommend the most suitable institutions that cater to each student user's Big 5 personality profile, helping both students and educational institutions to find their best fits. Experience the Õpet System in the American University of Central Asia Õpet will be carrying out its first phase of implementation in AUCA, introducing its AI engine to self-learn the American high school curriculum in Kyrgyzstan. AUCA students will be the first to experience the Õpet Chatbot's prowess for themselves, after which it will be circulated to other schools in the region. Based on the AI-blockchain technology of the future, the Õpet chatbot will support AUCA's existing curriculum with personality-based insights into each student's learning methods. The useful profile information can then be used to sort them into the learning system best catered to their learning styles, a breakthrough advancement in the education system. Hold onto a Reserve Cryptocurrency of the United Nations Blockchain Foundation The UNBF is in the process of tokenizing special drawing rights in the form of a cryptocurrency: the Crypto Special Drawing Rights (C-SDR). Similar to the Special Drawing Rights (SDR) established by the International Monetary Fund, the C-SDR will comprise a basket of anchor cryptocurrencies that will pave the way for the global circulation and acceptance of cryptocurrencies in the near future. The Opet Token will be one of the anchor currencies of the basket, securing its value as a global cryptocurrency.
To find out more about Õpet's partners, check out the respective press releases: Õpet Partners With Researchers From Cambridge's JBS Psychometric Center to Apply Personality Profiling to Education Õpet Foundation Partners with AUCA to Launch AI-Powered Education in Central Asia World Blockchain Organization - United Nations Blockchain Foundation Endorses Opet Foundation Project Along with an Investment Worth ~1,000 ETH To find out more about Õpet Foundation, visit the following links: Official Website: https://opetfoundation.com/ Twitter: https://twitter.com/opetfoundation Telegram: https://t.me/Opetfoundationgroup Medium: https://medium.com/@opetbot Bitcointalk: https://bitcointalk.org/index.php?topic=3735418 YouTube: https://www.youtube.com/c/OpetFoundation LinkedIn: https://www.linkedin.com/company/opet-foundation/
Get To Know What Õpet's Strategic Partners Offer Its Users
151
get-to-know-what-õpets-strategic-partners-offer-its-users-12b2bbe2633b
2018-09-17
2018-09-17 10:31:02
https://medium.com/s/story/get-to-know-what-õpets-strategic-partners-offer-its-users-12b2bbe2633b
false
521
A blockchain project to enable seamless tertiary & college application and admission
null
opetfoundation
null
Opetfoundation
null
õpetfoundation
EDUCATION,EDUTECH,ARTIFICIAL INTELLIGENCE,BLOCKCHAIN,TECHNOLOGY
opetfoundation
Blockchain
blockchain
Blockchain
265,164
Õpet
Bringing AI and Blockchain Technologies into education, Õpet is revolutionizing students' lives, helping them to reach their full potential.
8a81efd34a11
opetbot
137
5
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-02
2018-05-02 06:51:34
2018-05-02
2018-05-02 07:27:24
0
false
en
2018-05-02
2018-05-02 07:28:21
3
12b43cca968f
1.520755
3
0
0
Our team at orbis.ai is dedicated to creating affective computing engines that are capable of understanding and expressing human emotions...
4
Paper Review for Affective Computing (1) Our team at orbis.ai is dedicated to creating affective computing engines that are capable of understanding and expressing human emotions. Every two weeks, we internally dedicate a part of our day to reviewing important papers for affective computing. We decided to share some of what we review with the world, since we believe more people should know the importance of emotion in HCI and in creating artificial emotional intelligence (hilariously portrayed by HBO's Silicon Valley using Sophia). Paper 1: HIDDEN MARKOV MODEL-BASED SPEECH EMOTION RECOGNITION https://www.researchgate.net/publication/224929735_Hidden_Markov_Model-based_Speech_Emotion_Recognition Why this paper is important: Many recent efforts in emotion recognition utilize deep learning, usually with convolutional architectures. Despite some good successes, such approaches are limited in their ability to explain the recognition results. This paper brings us back to 2003, where the authors use good old HMMs and hand-picked features for classification, and show some great results. Key takeaways: This paper introduces two HMM-based methods of recognizing emotion from speech: a single-state HMM over global statistics for each emotion, and a continuous HMM for classification using instantaneous low-level features. The work uses a total of 20 hand-selected features for global statistics classification, including pitch- and energy-related features (a rough sketch of such features appears after this review). An accuracy of 86.8% for 6-class emotion classification on an in-house collected dataset is reported. For the continuous HMM, they use instantaneous pitch and energy, plus the first and second derivatives of each, totaling 6 features. Surprisingly, they report a lower accuracy of 79.8%. Paper 2: Deep Learning for Emotion Recognition on Small Datasets Using Transfer Learning https://dl.acm.org/citation.cfm?id=2830593 Why this paper is important: Datasets available for training emotion recognition models are small in size, and most often imbalanced. This work aims to compensate for these problems by transfer learning from pre-trained ImageNet architectures. It ends with an anticlimactic but important conclusion. Key takeaways: Without transfer learning, AlexNet and VGG architectures perform better than the baseline (32%) on the EmotiW dataset. Any further transfer learning led to marginal improvements in performance. The authors conclude that since EmotiW and Fer32 were much smaller in size than the ImageNet dataset, the improvement was marginal. The data imbalance problem also led to lower classification accuracies on labels with smaller sample counts. All in all, transfer learning or not, a dedicated emotion recognition dataset that is large enough and balanced is important (but then, when is it not?). Well, that's it for today; stay tuned for the next paper review. Check us out at https://www.orbisai.co
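As a rough illustration of the kind of hand-picked global statistics the HMM paper relies on, here is a minimal sketch in Python using the librosa library. The file path, the pitch range, and the particular statistics chosen are assumptions for illustration, not the paper's exact 20-feature set:

# pip install librosa  (a common audio analysis library; not used by the paper itself)
import numpy as np
import librosa

def global_stats(path="utterance.wav"):  # hypothetical input file
    """Compute simple global pitch/energy statistics for one utterance."""
    y, sr = librosa.load(path, sr=None)

    # Frame-level pitch (F0) via the YIN estimator, limited to a speech-like range.
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)

    # Frame-level energy as root-mean-square amplitude.
    rms = librosa.feature.rms(y=y)[0]

    feats = {}
    for name, series in {"pitch": f0, "energy": rms}.items():
        feats[f"{name}_mean"] = float(np.mean(series))
        feats[f"{name}_std"] = float(np.std(series))
        feats[f"{name}_min"] = float(np.min(series))
        feats[f"{name}_max"] = float(np.max(series))
        # A first derivative captures the dynamics the paper also exploits.
        feats[f"{name}_delta_mean"] = float(np.mean(np.diff(series)))
    return feats

A feature vector like this, computed once per utterance, is what a single-state HMM (or any simple classifier) would consume for the global-statistics approach.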
Paper Review for Affective Computing (1)
52
paper-review-for-affective-computing-12b43cca968f
2018-06-05
2018-06-05 04:34:18
https://medium.com/s/story/paper-review-for-affective-computing-12b43cca968f
false
403
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
orbis.ai
We make AI that touches upon human emotions.
3a43aade04b4
orbisai
12
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-31
2018-01-31 23:09:23
2018-01-31
2018-01-31 23:24:01
2
false
en
2018-01-31
2018-01-31 23:26:07
0
12b48ac2f3b7
1.477673
2
0
0
Introducing a new era in machine learning development: Decentralized Machine Learning (DML).
3
Future of technology Introducing a new era in machine learning development: Decentralized Machine Learning (DML). So what is it that makes DML so exceptional? The DML protocol will apply on-device machine learning, blockchain and federated learning technologies (a toy sketch of the federated idea follows at the end of this post). It unleashes untapped data usage without extraction, and idle processing power, for machine learning. Algorithms will be crowdsourced from a developer community through the marketplace, resulting in innovation from the periphery. This will in turn help with current machine learning limitations, such as: the inaccessibility of private data (traditional machine learning requires datasets to be uploaded to a dedicated server); the centralization of processing power (machine learning is mainly conducted on centralized computers, whose processing power is usually limited or confined to the processors of a single machine); and, of course, last but not least, the limitation of model and algorithm development, because only large corporations can afford the huge initial capital and resources to build in-house machine learning models and algorithms, or to acquire tailor-made ones from consultancy firms to apply machine learning in their own business. How can DML help with the aforementioned issues, you ask? Well, the DML infrastructure is designed to solve those issues and return the autonomy of AI, machine learning and self-owned data back to the people, because the tech giants' current dominance over the market has led to anti-competitiveness. Decentralized Machine Learning (DML) is introduced with the vision of avoiding central control of machine learning development and central ownership of data. With its belief in collective intelligence, DML also hopes to promote innovation from the periphery instead of it being dictated by the management and elites of the tech giants, utilizing blockchain technology. To learn more about this project, visit decentralizedml.com
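The federated learning idea behind this, training on each device and sharing only model updates rather than raw data, can be sketched in a few lines of Python. This is a generic federated-averaging toy with made-up data, not DML's actual protocol:

import numpy as np

# Toy federated averaging: each "device" fits a local estimator on its
# private data and shares only the model parameter, never the raw data.
rng = np.random.default_rng(0)
devices = [rng.normal(loc=5.0, scale=1.0, size=100) for _ in range(10)]

def local_update(data):
    """Train locally; here the 'model' is just the sample mean."""
    return data.mean()

# The coordinator aggregates parameters, weighted by local dataset size.
params = [local_update(d) for d in devices]
sizes = [len(d) for d in devices]
global_param = np.average(params, weights=sizes)

print(f"Global model parameter: {global_param:.3f}")  # close to the true mean of 5.0

The key property is that only the per-device parameters cross the network; the raw samples never leave the devices.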
Future of technology
20
future-of-technology-12b48ac2f3b7
2018-05-11
2018-05-11 05:19:34
https://medium.com/s/story/future-of-technology-12b48ac2f3b7
false
290
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Slav Viach
Aeronautical engineering student
6ff7c8b8f5ec
spdtstannor
0
3
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-19
2017-12-19 15:49:30
2018-01-11
2018-01-11 13:06:09
1
false
en
2018-01-11
2018-01-11 13:06:09
9
12b4f1ebc6b5
4.54717
1
0
0
I am no expert in Artificial Intelligence. But I am an aficionado of cartoons. And Adventure Time just might have a few things for us to...
2
What can Adventure Time teach us about AI? I am no expert in Artificial Intelligence. But I am an aficionado of cartoons. And Adventure Time just might have a few things to teach us about AI in 'Goliad'. First things first: what is AI and how is it designed? AI, or Artificial Intelligence, is an area of computer science with the goal of building machines as close to human intelligence and behaviour as possible. In case you've been living under a rock, it has grown a lot in the past few years, and so have the narratives around it. Machine learning, meanwhile, is the method: the algorithms created in order for the AI to do what it is supposed to do without having to be explicitly programmed. So instead of programming every single line and every single scenario the AI may encounter, computer engineers give the machine a bunch of examples and it learns from them by tracing a 'pattern' across these examples. This learning method tries to mimic humans' predictive learning (like how we learn from observing our parents), with some imperfection but with greater intelligence and data capacity than humans. Machine learning in AI has worked great in many cases, such as Google's 'Did you mean...' tool, but it has also worked terribly when unsupervised (as we will see ahead). Ultimately, we want AIs to replace human labor that is repetitive, wearing or 'expensive'. So no, most likely AI and robots won't replace lawyers, doctors, firefighters and teachers. And that is also because AIs still have a lot to learn about human language and behaviour. AIs have already shown us that they are not capable (yet) of understanding human nuances, like the difference between deserts and pornography or that of a Pulitzer-winning photo from child pornography. And most importantly, AIs have shown us that they need guidance in order to be neutral and not learn humans' prejudices: AI programs exhibit racial and gender biases, research reveals An artificial intelligence tool that has revolutionised the ability of computers to interpret everyday language has... www.theguardian.com Google's Sentiment Analyzer Thinks Being Gay Is Bad Image: Google/Shutterstock / Composition: Louise Matsakis Update 10/25/17 3:53 PM: A Google spokesperson responded to... motherboard.vice.com Microsoft Had to Suspend Its AI Chatbot After It Veered Into White Supremacy Less than a day after Microsoft launched its new artificial intelligence bot Tay, she has already learned the most... motherboard.vice.com This means that, above all, AIs can't replace some kinds of labor yet because they lack a moral code. This has already been explored in many AI movies, such as HAL in 2001: A Space Odyssey, Ava from Ex Machina, and The Puppet Master from Ghost in the Shell. And now, it has been tackled in Adventure Time's Season 4, Episode 10, 'Goliad'. You can watch the entire episode here: In this episode, the protagonists Finn and Jake are called to Princess Bubblegum's castle to meet Goliad, a creature the princess has created to replace her when she dies. Goliad is a huge sphinx-like pink creature, created from Bubblegum's DNA and candy material. At first she seems very sweet and innocent, and speaks and acts like a young child, showing her current state of mind. Much like an AI, Goliad is extremely intelligent, and much like a toddler, she has a huge potential and capacity to absorb knowledge by observing others.
She's creepy, but also adorable As Bubblegum is really tired from working overnight on the creation of Goliad, Finn and Jake volunteer to teach the creature the fundamentals of leading people and ruling the kingdom of Ooo. They take her to a kindergarten, where she gets scared by the children's chaotic behaviour. Jake tries to control the situation by screaming and yelling at the children, in a military way, for them to stand still and follow orders. Goliad, much like a child and much like an AI, takes that response and behaviour as the correct one without questioning, and yells at the children in the same strict way to make them complete their tasks. Soon after, when Finn tells her that leading by fear is wrong and that she should 'use her brain', a third eye emerges from her forehead and we learn that she is an omnipotent and omniscient creature, with the power of controlling others' movements with her mind. She has access to, and power over, a lot more information than an average human would, just like AIs have access to big data and the power of analysing tons of data in a matter of seconds. Much like many other AIs in science fiction or in recent real events, Goliad scares us when she reveals her full potential and how much she knows. But, much like AIs, she doesn't understand the moral consequences of this omniscience. Princess Bubblegum then tries to teach her about leadership as something of mutual benefit, but Goliad shows that she understands her power and believes that only the stronger should prevail. "Bee does not care for flower. Bee is stronger than flower. Goliad is stronger than bee. Goliad is stronger than all." Goliad has learned by watching others, but as she doesn't understand the moral consequences of her actions, her behaviour is seen as harmful. Goliad doesn't care about what is right or wrong; she only cares about the safest and most reliable way of getting results, with no margin for error. It is no coincidence that Goliad resembles a sphinx, a symbol in mythology of intelligence and wisdom, but also of mystery, fear and dominance. Seeing the corrupted path Goliad has taken, Princess Bubblegum decides to take her down, only to be 'mind-heard' by Goliad, who denies it and tries to take the castle for herself. While Finn and Jake try to stop her from destroying the castle, Princess Bubblegum goes on a secret mission to build another sphinx creature to match her strength. But this time, she builds a creature that cannot learn, only obey. Much like AIs, Goliad is not defective or badly programmed. She just doesn't have a moral code. Even though she is supervised by Jake and Finn, she interprets their behaviour as correct and neutral, and learns from it indiscriminately. The question here is: how are we going to teach AI to interpret and mimic human behaviour from big data without it incorporating humans' prejudices and worst traits? Is it possible to create human-like robots without them being human trash as well? So far, the results have been imperfect. But these imperfect results have made scientists question how we see our own behaviour, and be more conscious about our own prejudices. I personally believe one day we will be able to create a human-like artificial intelligence that has a moral code and the skills to be helpful to humans' needs. But for now, I'll continue to enjoy the human technopanic in sci-fi and cartoons.
What can Adventure Time teach us about AI?
40
what-can-adventure-time-teach-us-about-ai-12b4f1ebc6b5
2018-01-22
2018-01-22 12:12:17
https://medium.com/s/story/what-can-adventure-time-teach-us-about-ai-12b4f1ebc6b5
false
1,152
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Quel Magalhรฃes
Trying not to have such a strong opinion on everything.
c23ea9d35db
quelmagalhaes
12
8
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-20
2017-10-20 16:11:30
2017-10-20
2017-10-20 16:55:45
0
false
en
2017-10-25
2017-10-25 12:09:35
1
12b55a9ed9ab
3.05283
0
0
0
The Financial Times recently wrote that 88% of the price inflation in the US since 1990 is due to four sectors: healthcare, prescription...
5
Be Upset Tech Progress Hasn't Come Quicker, Not That Relative Prices of Fundamental Services Have Risen The Financial Times recently wrote that 88% of the price inflation in the US since 1990 is due to four sectors: healthcare, prescription drugs, education and construction. According to Larry Summers, the current CPI of TV sets is ~6, down from 100 in 1983, while the CPI of a day in a hospital room or a year in a college is ~600. The relative prices of fundamental services like education, healthcare and construction have undoubtedly risen, while those of other products like computers have fallen dramatically. I won't beat a dead horse, as this is a widely discussed topic, but I will say one thing: the reason for this price increase is the inability of each sector to achieve high productivity growth. Cost disease states that if workers become more productive in certain sectors and their wage is the same in all sectors, then the price of the areas where labor is not productive will rise. For sectors like education, healthcare and construction to offer lower prices, their inputs (nurses, teachers and laborers) would need to be granted special superpowers that allow them to do their jobs at 10x the productivity. Since this hasn't happened, their cost structure stays flat while inflation continues to rise. Technological growth is synonymous with productivity growth; developing new technology is fundamentally about improving the efficiency or capacity at which something can be done. But services like education, healthcare and construction resist technological 'disruption' because they require fixed inputs (e.g., a human teacher spending an hour delivering a lesson; time and people cannot be duplicated). Although much of the service sector requires fixed inputs, since it deals to a greater extent with personal issues, at the basic level it can be improved by AI. MOOCs have democratized access to quality education, yet they lack the personal touch that people need in order to learn. Simply watching recorded videos of someone lecturing is not the best way to learn; we need interaction with our educators! For instance, studies show that for newborns aged 0 to 3 years, watching educational programs has no effect on learning, since the program isn't responding to the child. But when an adult speaks to a newborn over Skype, the newborn is able to learn, showing there's nothing wrong with screens, just that we learn in dynamic ways. In this case, an AI could be developed to converse with a newborn over an iPad or a fun robot; similar developments could be made for other levels of learning. In healthcare, developments like telemedicine have broken down geographical boundaries to diagnosing illness, but physician-dependent solutions lack scalability. Vision and speech recognition, armed with deep data on our personal health, will diagnose and offer cures for the ailments we present. Assuming similar internet access, this will be as true for a smartphone-bearing child in Mumbai as for a hedge fund manager in NYC. The price of a home is determined by the cost to build the structure, plus the value of the land it sits on. In construction, advancements in robotics may lower costs by automating strenuous tasks and using different raw materials. While land values will always be dictated by supply and demand, advancements in self-driving cars may allow people to commute longer distances, thus allowing them to spread out further (creating sub-suburbs).
This is critically important considering that in cities, land value constitutes the majority of home prices: in Toronto, the average home price 10 minutes from downtown is 3x that of homes 60 minutes from downtown. As we spread out, land values decline, as there's less of a need to bid outrageous sums to be clustered together. As you can see, I'm arguing that AI will largely improve fundamental services at the low end. At the high end, AI will help to deliver medical treatment, one-on-one learning or home ownership in a big city, but given that at the high end you'll still be using each industry's cost-stagnant inputs, you'll still pay a hefty price. But for the majority of people globally, getting an appropriate level of access to these fundamental services is all they need to thrive; by using AI to drastically reduce the cost of these services, we may allow them to thrive at last. Note: I believe this will be driven by a mix of work from startups and incumbents (due to high regulations and high R&D spending by incumbents). Also, the less productive sectors employ more of the economy. Increasing the productivity of these sectors, if only at the low end, will leave a large portion of the workforce unemployed. Let's not forget to have a conversation about changing our social assistance policy to meet this new economy.
Be Upset Tech Progress Hasn't Come Quicker, Not That Relative Prices of Fundamental Services Have...
0
be-upset-tech-progress-hasnt-come-quicker-not-that-relative-prices-of-fundamental-services-have-12b55a9ed9ab
2018-05-18
2018-05-18 22:45:32
https://medium.com/s/story/be-upset-tech-progress-hasnt-come-quicker-not-that-relative-prices-of-fundamental-services-have-12b55a9ed9ab
false
809
null
null
null
null
null
null
null
null
null
Economics
economics
Economics
36,686
Liam Weld
liamweld.com
f12ed4af2a10
liamjweld
10
3
20,181,104
null
null
null
null
null
null
0
tl;dr // Either we trust the algorithms to genuinely understand us and provide us with tailored suggestions and information, for which we have to sacrifice virtually all of our privacy, or we renounce the comfort of automated decision making that is threatening our individualism and fight the artificially created choice funnels and content bubbles it inevitably produces.
1
null
2017-09-02
2017-09-02 01:59:44
2017-09-02
2017-09-02 15:16:00
1
false
en
2017-09-02
2017-09-02 22:56:59
5
12b5e02008fa
6.916981
2
0
0
We're evolving to a dismal world in which our digital activities and analogue behaviour are generating massive quantities of measurable and...
5
The Abdication of Individualism We're evolving toward a dismal world in which our digital activities and analogue behaviour generate massive quantities of measurable and collectible data that is worth huge amounts of money to corporations and institutions, especially in a commercial context. We, the citizens and consumers, never get to share in its value as our data is being sold and monetized. We barely know who collects it, who sells it or who buys it. The majority of people hardly bother to read which permissions they grant to the apps they install on their phones or which terms and conditions they agree to in exchange for a service or the use of a platform or social network. All we get in exchange for our data are 'personalized ads', 'custom experiences' and 'tailored product suggestions' predicted by smart algorithms and delivered by intelligent content filters and automated distribution mechanisms. Institutionalized Urges of Instant Gratification With big companies using deep learning and artificial intelligence to profile us, our choices and preferences, our relations, our lives, and soon enough our thoughts and feelings, we need to keep in mind that not all that is automated returns joyful bliss. Digital products and services are cunningly crafted to create dependence, to increase our time spent with them and to subconsciously shape habitual actions. Deep-rooted in their design are triggers targeting our emotional and psychographic soft spots to gently strengthen virtual connections and fuel our endorphin production to ensure a permanent top-of-mind-never-out-of-sight position. Technically perfect, theoretically brilliant, but borderline unethical. Next to our intrinsic cravings for acknowledgement, the dire need to belong, to be loved and to leave a mark, our human DNA seems to contain a healthy degree of aversion to repetitive work and trivial tasks as well. When services or products can relieve us of these burdens by offering automated alternatives, we confidently adjust our patterns and celebrate the time we've gained while assuming nothing but the facilitator's best intentions. Boldly, we'll sacrifice whatever remains of our privacy on the altar of indifference. The principally one-dimensional benefits of the elements constituting our always-connected, digitally managed and algorithmically infused lifestyle are the proverbial carrots being used to lure us further down the road of reliance and deeper into the bubble of automated, scripted interactions and filtered predictive (marketing) content. In a complex world where the long-run consequences of our actions and decisions are highly uncertain, our priority should be to avoid any risks leading to possible dystopian-future scenarios in the context of new and unexplored technologies. We must demand a maximum degree of security for all connected devices and the networks they use. We must protect and safeguard all types of collectible personal data and strictly regulate its distribution and use. We should try to prevent any abuse of innovative concepts and focus on the preservation of our individuality and unique human identities, including the digital equivalent of the distinctive values and public morals which constitute the fundamentals of our society.
The Quest for Regulation & Common Sense The need for ethical codes for our digital age becomes critical as we are being hushed into a state in which we sacrifice, without questioning, all that is personal in exchange for services and features that are slowly turning us into lethargic believers in what is projected as technological progress and an improved quality of living. Simultaneously, subconsciously, we are gently being deprived of our freedom of choice for the sake of evolution. Apart from implementing a Code of Ethics in Artificial Intelligence (also read: Ethics of AI, 21 pages, PDF, 335kb) or tracking the progress of institutes such as the Technical Committee on Robot Ethics and Automation Society, we must demand a legal framework that compels the inventors, creators and suppliers of algorithmic and automated products or services to follow a strict moral code in which citizens and consumers are protected at all times from the unprincipled intentions of dishonest politicians, manipulative spin doctors and fierce marketers. In the meanwhile, Amazon, Google, Facebook, Microsoft and many others offer us the fabricated convenience and carefully designed peace of mind of a life well organized: hyper-personalized, supported by smart devices and driven by artificial intelligence. A digital and connected life in which every detail is meticulously engineered to capture more of our personal data. We must always be aware that today's catalyst for virtually all commercial digital services is 'profit as a purpose', predominantly at the expense of the unwary consumer. We are the product, but at the same time we also create the value by investing time and providing data. Unfortunately, the return on this investment barely compensates for the value. Pardon My Ubiquitous Evil, Sir Gullible as we are, we prefer to focus on the immediate benefits we experience; the soothing sensation of instant gratification. But below the surface, the playing field belongs to the heartless, contemptuous, immoral and shameless risk takers and opportunists who unhesitatingly try to manipulate opinions, thoughts and behaviour. The 'don't be evil' mantra is nothing but a vague impression of the values it once represented. Consumer data has been around since the first time two people made a trade of goods for the second time. An incomplete and rough overview of consumer data collection since the debut of promotional snail mail and local punch-card loyalty programs: First it was your address and phone number for the Rolodex or offline filing system of your favorite stores, possibly souped up with some additional basic demographics. Remember those yearly birthday cards and seasonal invitations? The mail order catalogs? What a glorious time! Using your credit cards you created a trail of locations and purchase preferences, completing the profile of your financial transactions. Based on your spending habits, credit card companies and banks awarded you loyalty points and exclusive deals. It still happens today. With the arrival of the internet, everybody suddenly wanted your e-mail address, because that was way more personal than the banners and pop-ups clogging your dial-up connection. Until spam happened. Then it was all about your browser history and the cookies you collected, followed by the sites you had registered on and the social network profiles you claimed, including the content you wrote for your blog.
Google scanned the content of your e-mails for a while too, so it could deliver better ads. Later, it was your phone and the apps youโ€™d downloaded, complete with your preferred contacts, the media items stored on your device, the wi-fi networks you connected to and the check-ins you made when you posted your reviews. Next came your GPS-enabled smart watch and other vital-data-collecting and location-aware wearables. We were granted the power to analyze our performances and visualize our progress. Fascinated by the insights and semi-scientific information, we decided to log even our sleep patterns. Today Amazon, Google, Microsoft, Apple, Samsung and many others want to know what happens in your house. For this, theyโ€™ve created virtual concierge services to cater to your immediate needs, wherever you are. Using the concierge apps and pods, they would love to manage your connected smart house, take care of your grocery shopping, monitor and optimize your energy consumption and (in the near future) trace your car. Until the advent of connected devices and peopleโ€™s pressing need to be constantly available, everything was still relatively manageable. You could keep track of your digital information and determine for yourself how much information you were willing to share. The Illusion of Privacy Separated by the silos created by the various โ€˜keepers and collectorsโ€™ of the data sets tied to you as an individual, each with their specific context and purpose, the situation is quite harmless. But if one company had access to all of the data you and your devices produce, itโ€™s not difficult to understand that this might become a problem. We trust companies like Amazon, Google, Microsoft, Apple and their intelligent concierge services to provide us with the best answers possible, but how can we know for sure that the suggested products really are the right price/quality match available on the market? How can we be sure the news sources providing requested information are truly unbiased? What guarantee do we have that these smart algorithms make the right choices? Is there a way for consumers to be sure that the products suggested arenโ€™t โ€˜promotedโ€™, โ€˜injectedโ€™ or โ€˜placedโ€™ by a company that bought itself access to their digital data profiles? Most likely so, but the only way to find out is to regularly abandon your automated paradise and do the research yourself. It is right here that the paradigm of true individualism intersects with the paradox of automated, contextual and behavioural personalization. Either we trust the algorithms to genuinely understand us and provide us with tailored suggestions and information โ€” for which we have to sacrifice virtually all of our privacy โ€” or we renounce the comfort of automated decision making that is threatening our individualism and fight the artificially created choice funnels and content bubbles it inevitably produces.
Imagine one single company knows all of your shopping preferences and the brands you prefer; the websites you visit and the type of news sources you consult; what you look for while searching the internet; the friends you frequent physically and whom you communicate with, including the topics you talk about; the type of devices you have at your disposal and how much data you consume; the locations you spend time at, connect from and how you traveled there; the payments youโ€™ve made and the various payment methods used; what type of household electronics you have and how much you use them; what your physical condition is and what your health issues might beโ€ฆ You might come to the conclusion that there is a rather complete personal data set available that represents you in all facets, leaving not much of your story untold. This is exactly what Facebook, Google and Amazon are capable of today, and even more so in the near future. A goldmine for marketing and sales people. Priceless in value. Dangerous when in the wrong hands. Trust, but verify โ€” indeed.
The Abdication of Individualism
11
the-abdication-of-individualism-12b5e02008fa
2017-12-12
2017-12-12 23:44:43
https://medium.com/s/story/the-abdication-of-individualism-12b5e02008fa
false
1,780
null
null
null
null
null
null
null
null
null
Privacy
privacy
Privacy
23,226
Miel Van Opstal
Hi! I ponder about contextual consumer interactions in a digital, automated era and the future of marketing, artificial intelligence and its place in our lives.
e8fd5a61951
coolz0r
2,699
624
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-04
2018-09-04 00:21:06
2018-09-30
2018-09-30 19:57:57
5
false
en
2018-09-30
2018-09-30 19:57:57
5
12b60bc92a0
3.93522
1
0
0
Brain.js is a JavaScript library that gives us an easy way to use machine learning.
3
Brain.js Brain.js is a JavaScript library that gives us an easy way to use machine learning. Machine Learning โ€œMachine learning is a field of computer science that uses statistical techniques to give computer systems the ability to โ€œlearnโ€ with data, without being explicitly programmed.โ€ โ€” https://en.wikipedia.org/wiki/Machine_learning Simply put, machine learning is teaching a machine by providing it with data rather than programming it directly to perform specific tasks. What this means is that when machine learning is applied, the โ€œlearnerโ€ should be able to complete tasks that are similar to what it was taught. If trained correctly, the machine should be able to do tasks that are similar but not identical. Why would I? Because we are lazy. Machine learning provides a very efficient means of handling repetitive and possibly very complex tasks. If we can train a computer to do a task for us, it saves us time. A machine can work continuously, without breaks, much longer than a person can. Also, machines have the computing power to calculate possibilities much faster than a human can. Successfully using machine learning in any application can mean a significant amount of time saved and a greater amount of progress achieved. How? Using the Brain.js library we can harness the power of machine learning without having to apply the complex math traditionally required. How it works breaks down into 3 steps: Provide Training Data, Process Training Data, Return Prediction. Step 1. Providing Training Data. Training a machine can be a lot like training a person. In this case we have to provide the computer with answers and โ€œnot-answersโ€ so that it can make an accurate prediction. The quality of the prediction depends on the quality of the training data. Step 2. Once we have gathered a satisfactory data set for our machine, it just has to be processed. Luckily Brain.js makes this very simple, as we can just feed the data through a function (or network). Step 3. Now that the data has been given to the network to process, the machine brain is considered trained. We can test it by giving it different data, and it will tell us how confident it is that the provided data is the answer. Example As an example, we are going to train our own network to recognize our favourite colours. The Setup A little bit of setup is required before we can start to use Brain.js. We first import brain.js. The code for the file can be found here: https://raw.githubusercontent.com/harthur-org/brain.js/master/browser.js. This can be copied and pasted into a .js file and saved as brain.js. Our First Neural Network Now that brain.js is imported, we have access to creating a new brain (or neural network) by instantiating brain.NeuralNetwork. Training Now if you remember, to train our machine brain we have to give it answers and โ€œnot-answersโ€. Basically, we want to show our brain data that matches the results we want and, conversely, what we donโ€™t want. So in this example, the input represents the RGB value normalized (divided by 255). We normalize the data so that brain.js can deal with smaller numbers and calculate predictions more efficiently. We divide by 255 because it is the maximum value a channel can take in the RGB colour space. The output is given a value of either 1 or 0, where 1 represents 100% confidence that it is the answer and 0 represents 0% confidence. Here we are using 1 to represent โ€œa colour I likeโ€ and 0 to be โ€œa colour I donโ€™t likeโ€.
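Since the original post's code blocks were not preserved here, the following is a minimal sketch of what the setup, training, and prediction steps could look like with brain.js; the specific colours and the "like" output key are illustrative assumptions, not the author's exact code.

// Minimal sketch (not the author's exact code): train brain.js on colour
// preferences. Assumes the brain.js browser build is loaded as described above,
// which exposes the global "brain" object.
const net = new brain.NeuralNetwork();

// Inputs are RGB channel values normalized by dividing each by 255.
// Output 1 = "a colour I like", 0 = "a colour I don't like".
net.train([
  { input: { r: 0.03, g: 0.7, b: 0.5 }, output: { like: 1 } },  // liked colour
  { input: { r: 0.16, g: 0.09, b: 0.2 }, output: { like: 0 } }, // disliked colour
  { input: { r: 0.5, g: 0.5, b: 1.0 }, output: { like: 1 } },   // liked colour
]);

// Ask the trained network about a new colour; the result is its confidence.
const result = net.run({ r: 1.0, g: 0.4, b: 0.0 });
console.log(result); // e.g. { like: 0.84 } means roughly 84% confident we like it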
Calling the networkโ€™s train method with that training data, as sketched above, trains our network. Run Along Little Guy Now that our network has processed the training data, we can give it random colours and it will tell us how confident it is. Its prediction is how confident it is that the colour we give it is our favourite colour. This is done with the networkโ€™s run method. Results Hereโ€™s a visualization of what it looks like. The good colours are the training data with the output of 1 and the bad have the output of 0. We got the RGB values of all the good and the bad, normalized them, then put them into our training data. Once the brain is trained we can give it any random colour and it will return how confident it is that we will like that colour. In the picture above we can see that the network is 84% confident that this colour is one we might like. Conclusion Brain.js allows us a glimpse into the possibilities of machine learning. We can achieve incredibly powerful recommendations on a large scale with machine learning. Cool Stuff https://www.youtube.com/watch?v=9Hz3P1VgLz4 https://itnext.io/you-can-build-a-neural-network-in-javascript-even-if-you-dont-really-understand-neural-networks-e63e12713a3 References: Image 1 source: https://www.ie.edu/exponential-learning/blog/bootcamps/machine-learning-marketing/
Brain.js
10
brain-js-12b60bc92a0
2018-09-30
2018-09-30 19:57:57
https://medium.com/s/story/brain-js-12b60bc92a0
false
822
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Ricky Law
Front-End Developer
74d1b20a90be
rickylaw94
20
33
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-05
2017-09-05 02:11:56
2017-09-05
2017-09-05 10:16:37
0
false
en
2017-09-05
2017-09-05 10:16:37
0
12b6df0b16
2.211321
5
0
0
The Journey:
1
gram Labs + mai Social The Journey: Over two years ago Ali and I met as roommates on our first day at Harvard Business School. My previous startup was in the middle of a difficult season, and letโ€™s just say Mr. Amarsy didnโ€™t care for me much. Ali comes from a world of running creative campaigns for large corporations @ Leo Burnett, and I am all tech all the time, having spent a number of years at Apple, but with a preference for startups. I am quiet and introspective, whereas others say that Ali has a โ€œlarge personality.โ€ We got over that very quickly, and after more than 20,000 WhatsApp messages, countless hours on Skype with crappy internet connections, multiple trips across the world, and a lot of sacrifice only the two of us and our significant others really โ€œunderstandโ€, we launched what has collectively been a two-year R&D journey. We started the company with one simple question: if we put really difficult problems in the center of the room and filled the chairs with insanely talented people whose background and experience sits at the intersection of art and science, with a willingness to experiment, wouldnโ€™t that lead to some powerful outcomes? And has it everโ€ฆ check out the demo below of President Obamaโ€™s AI that we built in 6 days. President Obamaโ€™s โ€œAIโ€ This left us with one big question โ€” what would it take to reduce a 6-day creation process to 60 seconds, and how could we give those tools to everyoneโ€ฆ on their mobile device? mai Social Why did we build it? Expression through social media is becoming more central to identity every minute of every day, especially among Millennials. mai Social enables expression in ways that weโ€™ve never seen before. Not only is mai Social poised to transform conversations on major social media platforms, but it will become, in its own right, a platform that defines the modern social experience, especially for communities who are consistently under- or misrepresented. A top priority for the company is deepening the accuracy of someoneโ€™s digital representation, and the ability for our technology to learn and improve over time. Our tools should allow each user to craft their digital representation and capture who they are online and offline โ€” culturally, stylistically, and ethnically. mai Social pairs a unique push towards diversity in digital representation with pioneering applications of data-science-driven technologies, powered by an incredible team. Why are we credible? mai Socialโ€™s team also includes two of the leading illustrators from the show Archer, Ellenโ€™s emoji exploji, software engineers with multiple exits or extensive startup experience, and a number of PhDs or Masterโ€™s grads in applied math and data science from Harvard, Oxford and NYU. While mai Social is made to be light-hearted and fun, its mission has broader implications for design and culture. The team aims to define the intersection of art, science and digital innovation while addressing the underrepresentation of certain identities in character design and digital identity creation. Behind all of this is a long-term interest in what data science and deep learning have to offer the social world. Itโ€™s a sophisticated project that only a few companies โ€” the likes of Amazon, Facebook, etc. โ€” are also working on. Ultimately, these technologies will guide the user experience or let the avatar express itself with autonomy, if the user so chooses. We look forward to this journey with you; it is just the beginning. BRING DIGITAL YOU TO LIFE.
-Matt, for the mai Social team
gram Labs + mai Social
61
gram-labs-mai-social-12b6df0b16
2018-02-14
2018-02-14 10:39:15
https://medium.com/s/story/gram-labs-mai-social-12b6df0b16
false
586
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
mai Social
bring digital you to life
a9565688d9ff
itsyourmai
7
1
20,181,104
null
null
null
null
null
null
0
null
0
9910fccb5ba4
2018-07-29
2018-07-29 18:05:23
2018-07-29
2018-07-29 20:35:06
0
false
en
2018-07-30
2018-07-30 02:00:42
4
12b9060d37e2
2.622642
247
174
0
An exchange built on a vision. A professional trading platform designed to be autonomous, efficient and transparent. Where the nextโ€ฆ
4
Discover Q82.io An exchange built on a vision. A professional trading platform designed to be autonomous, efficient and transparent. Where the next generation of companies meets the power of a global community. During the last 5 years, we at Q82 Digital have witnessed the birth of new cryptocurrencies, decentralized protocols, and digital assets, designed to revolutionize almost every aspect of the existing infrastructure in the world. This rapid growth has not always been accompanied by adequate services or regulations, and it has kept out a large number of sophisticated investors who do not have the tools to participate in this emerging asset class. Not only that: the crypto investor community has suffered from a lack of transparency, data integrity, liquidity, security and even minimal stability. For all of them, we have created Q82.io. Q82 is not an ordinary exchange; it is a revolutionary platform built to meet the real needs of the next generation of investors and digital assets. Our platform also takes a contrarian view of traditional management approaches: Q82 gives power back to its users and makes them participants in decision-making, dividend distribution, and governance decisions. In every sense, Q82.io is a decentralized organization governed by its users and maintained by its administrators. Q82 Platform Q82 will operate within the European Union and will fully comply with applicable laws and regulations. Based in Madrid, Spain, the company provides services mainly in English and Spanish. As the regulatory environment becomes clear, Q82 will seek opportunities to provide similar services in global markets. Our platform consists of four pillars of activity: Distribution of Emerging Alternative Funds; Cryptocurrency and security token trading; Secure wallet and custody services; Third-Party Integration and advanced reporting. Alternative Fund Distribution The asset management industry is dominated by fund distribution networks with high barriers to entry. We will design a new technological framework using Blockchain and propose an alternative paradigm for rewarding talent and hard work, while cutting out middlemen and fees. Cryptocurrency Trading Platform We will deploy the Q82 digital asset exchange platform to meet the evolving marketplace demands of investors, professionals, and institutions. An intuitive interface will connect users to matching order books, charting, analytics, and industry-standard order management tools. We will customize the user experience with multi-level approvals. The Q82 exchange will seek to become a center of liquidity for the digital currencies we make available for trading. We will deploy assets using a best-practices framework and community voting: Primary Currencies, Stablecoins and Alt listings. Security Token Trading Platform Q82 will develop an additional layer for security token trading by leveraging the knowledge, experience, and technical expertise of the team. The precise legal, regulatory, and technical requirements needed to establish a platform for security tokens remain uncertain, given the current state of flux in that market. Custodian Wallet Q82โ€™s accredited investors and institutional customers will be able to hold balances in fiat, digital currency, and security tokens within their digital wallet. Fiat accounts will be limited initially to EUR and subject to regulatory licenses, with additional currencies (USD) added in the short term.
Reporting and Recordkeeping Q82 will implement strict privacy protocols to ensure that information remains confidential. Platform Highlights Zero Transaction Fees. Every trade commission will be reimbursed to traders using QEX. Get more information about QEX here. Order Matching Engine. We revolutionize order matching, using artificial intelligence to reduce operational risk during the trade. High-Performance API. A quick trading connectivity solution with obsessive customer support and feedback. Advanced Trading Engine. A comprehensive, functionally rich, high-performance, end-to-end platform. An ideal platform for algorithm execution. Next Generation Security. We will implement unmatched security measures and active prevention policies. We also regularly perform external cybersecurity audits on the platform and publish them. Obsessive Customer Support from day 0. We are committed to delivering the highest levels of customer satisfaction by providing a high-value, positive experience at every step in the process. QEX Token Terms We have created another post to explain the QEX Token. You can find it here. Find us online using the links below: Website: www.q82.io Telegram: https://t.me/q82io Twitter: https://twitter.com/q82io QEX Token Distribution Get Early Access
Discover Q82.io
4,292
welcome-to-q82-io-12b9060d37e2
2018-07-30
2018-07-30 02:00:42
https://medium.com/s/story/welcome-to-q82-io-12b9060d37e2
false
695
We provide a digital platform of best practices and principles in trading, investing, security, custody, and compliance that allow institutional investors to participate in the emerging digital asset class.
null
null
null
Q82 Digital
info@q82.capital
q82-digital
CRYPTOCURRENCY,CRYPTOCURRENCY NEWS,BLOCKCHAIN,DIGITAL ASSET MANAGEMENT,EXCHANGE
null
Blockchain
blockchain
Blockchain
265,164
Arturo de Pablo
Head@Q82 Digital
2cce9024de6c
a_57144
244
1
20,181,104
null
null
null
null
null
null
0
null
0
634d4b270054
2018-04-12
2018-04-12 10:46:36
2018-04-12
2018-04-12 10:47:34
1
false
en
2018-06-05
2018-06-05 08:52:57
3
12b911c65254
1.109434
0
0
0
Timothy van Langeveld, DEEP AEROโ€™s Regulatory and Policy Advisor, will be participating in AUVSI Xponential being organised in Denverโ€ฆ
5
Connect with DEEP AERO at AUVSI Xponential 2018 being organised in Denver, Colorado, USA Timothy van Langeveld, DEEP AEROโ€™s Regulatory and Policy Advisor, will be participating in AUVSI Xponential, being organised in Denver, Colorado, USA from March 30, 2018 to April 3, 2018. Please connect with us and learn more about DRONE-UTM and DRONE-MP. We look forward to meeting you in Denver on March 30, 2018. About AUVSI Xponential 2018 If youโ€™re looking to harness the power of unmanned technology, AUVSI XPONENTIAL 2018 is the spot. An intersection of cutting-edge innovation and real-world applications, XPONENTIAL is the one event that brings all things unmanned into sharp focus. Join more than 8,500 industry leaders and forward-thinking users from both the defense and commercial sectors to learn the latest on policy, business use cases and technology applications โ€” and grab your share of this billion-dollar industry. https://www.xponential.org/xponential2018/public/enter.aspx About DEEP AERO DEEP AERO is a global leader in drone technology innovation. At DEEP AERO, we are building an autonomous drone economy powered by AI & Blockchain. DEEP AEROโ€™s DRONE-UTM is an AI-driven, autonomous, self-governing, intelligent drone/unmanned aircraft system (UAS) traffic management (UTM) platform on the Blockchain. DEEP AEROโ€™s DRONE-MP is a decentralized marketplace. It will be a one-stop shop for all products and services for drones. These platforms will be the foundation of the drone economy and will be powered by the DEEP AERO (DRONE) token.
Connect with DEEP AERO at AUVSI Xponential 2018 being organised in Denver, Colorado, USA
0
connect-with-deep-aero-at-auvsi-xponential-2018-being-organised-in-denver-colorado-usa-12b911c65254
2018-06-05
2018-06-05 08:52:59
https://medium.com/s/story/connect-with-deep-aero-at-auvsi-xponential-2018-being-organised-in-denver-colorado-usa-12b911c65254
false
241
AI Driven Drone Economy on the Blockchain
null
DeepAeroDrones
null
DEEPAERODRONES
null
deepaerodrones
DEEPAERO,AI,BLOCKCHAIN,DRONE,ICO
DeepAeroDrones
Deepaero
deepaeros
Deepaero
0
DEEP AERO DRONES
null
dcef5da6c7fa
deepaerodrones
277
0
20,181,104
null
null
null
null
null
null
0
null
0
604640a9497a
2017-06-30
2017-06-30 19:58:02
2017-07-12
2017-07-12 16:55:36
3
false
en
2018-03-28
2018-03-28 19:34:55
19
12b988f18cd0
8.682075
80
9
0
Should I start a company that could destroy millions of jobs?
5
The AI Entrepreneurโ€™s Moral Dilemma Should I start a company that could destroy millions of jobs? If you like this article, check out another by Robbie: The Future Proof Job In the coming years, entrepreneurs employing artificial intelligence will face a moral question that has not been a concern for the typical startup founder: Should I create a company/product/service that could result in wide-scale job loss? In an earlier article, I wrote about AIโ€™s potential. Itโ€™s unlike anything weโ€™ve seen before. Itโ€™s both exciting and a little scary. Itโ€™s scary in that itโ€™s easy to imagine software achieving increasing levels of human-like capability. If a technology becomes feasible, someone will try to commercialize it. What happens when the technology is so good, it could eliminate the need for thousands or even millions of jobs? Despite the business being an obvious home run financially for the founder(s) and investors, is it such a no-brainer to move forward when it could negatively affect so many people? AIโ€™s impact on jobs has been overhyped (so far) Iโ€™ve been dealing with concerns about AI and the impact on jobs since I launched Automated Insights back in 2010. Initially, we were focused on sports journalism, but when we took a broader approach to automating quantitative writing, I got a steady stream of reporters that wanted to talk about what it meant for the future of journalists, data analysts, and other related professions. Iโ€™ve always been pretty dismissive of the impact because we havenโ€™t seen any job loss as a result of implementing our platform (called Wordsmith). I even put together a few slides to address this issue, which Iโ€™ve used for many of my talks: My point was that despite Wordsmith producing over 1.5 billion pieces of content and me doing more interviews each year about AI taking jobs, Iโ€™m not aware of a single job that has been lost due to our software. My pitch boiled down to: Humans + Software > Humans or Software Essentially, humans and software together are better than either of them individually. Supervised methods are better than unsupervised methods. For a certain class of solutions, I still believe thatโ€™s correct. But what happens when thatโ€™s not true anymore? When the Creative Destruction cycle ends Itโ€™s pretty clear to me that jobs will get automated in the coming years at increasingly higher rates. That said, jobs being automated away is nothing new. The question of whether to move forward with a new business approach or product at the risk of hurting other companies or whole professions has existed as long as weโ€™ve had capitalism. The term we use is creative destruction, and itโ€™s what AI-optimists like myself invoke anytime someone proposes that the impending job loss is going to be a bad thing for society. Creative destruction refers to the incessant product and process innovation mechanism by which new production units replace outdated ones...Over the long run, the process of creative destruction accounts for over 50 per cent of productivity growth. Source: Ricardo J. Caballero, https://economics.mit.edu/files/1785 When a new product has superseded an older one, the jobs that were made obsolete are replaced by new ones of comparable number somewhere else. Or it might mean certain companies go out of business, but a similar number of new companies take their place with new technologies. You can think of it as a cycle: Mary M. Crossan and David K. Hurst.
โ€œStrategic Renewal as Improvisation: Reconciling the Tension Between Exploration and Exploitation.โ€ Advances in Strategic Management, Volume 23, 273โ€“298. 2006. What happens when the creative destruction does not result in new jobs because the jobs have been completely automated? Again, this has happened before, but there were other aspects of the economy growing, or new industries being created, that buffered the loss. If multiple professions are undergoing this kind of decline at the same time, there may not be a big enough safety net. Now the fun begins. Letโ€™s consider a thought experiment. Into the future Letโ€™s fast forward twenty years to 2037 and imagine more jobs have gone away due to automation, and we havenโ€™t had nearly enough new jobs to backfill the old. Tensions have been rising among the unemployed and underemployed. After someone is out of a job for multiple years, they start to get desperate. It turns out that it gets harder to get a job the longer you are unemployed. Letโ€™s also imagine the rate of innovation around AI has continued to grow at an accelerating pace. Iโ€™m not talking AGI yet, but with a combination of continued processor improvements and new algorithm innovations, we get used to software being โ€œmagicalโ€. At that point, we might not be driving our own cars because software is much better at it than humans. In 2037, software will be better than humans at a variety of tasks, and most of our online interactions with smart bots will be indistinguishable from interacting with humans. Now imagine you are an entrepreneur who figured out a way to automate a whole profession. Letโ€™s use accountants as a (somewhat) random example. You have software that can completely automate the human work previously required to be an accountant. Your software can mimic all the necessary interactions and knows how to manage books better than the best human. Itโ€™s a full accounting solution. The best part: your product costs a fraction of a traditional human accountant, and it does the job 10x better and faster because it doesnโ€™t need breaks. Take a leap of faith with me โ€” itโ€™s a thought experiment. As an entrepreneur, youโ€™ve built the base product and know the go-to-market strategy. It will be an easy sell. You are going to make a lot of money. You might even achieve one of those โ€œfastest company to reach X revenueโ€ milestones that the press loves to talk about. Investors are coming out of the woodwork to throw money at you. As a business, itโ€™s a no-brainer. However, your product is going to wipe out the accounting profession. Over the course of 5โ€“10 years, as competitors launch and saturate the market, human accountants will be a rarity. That will be 1.3 million good-paying, college-level jobs, gone. Even if 300,000 accountants manage to stay around, as some people prefer dealing only with humans despite the higher cost and greater risk of error, thatโ€™s one million jobs gone in a declining industry. Should you proceed with starting the company? Now, imagine a different industry and a different product. One that would have 10x the impact on job destruction โ€” more than 10 million jobs lost. Should you proceed? Donโ€™t tell an Entrepreneur they shouldnโ€™t do something There is no easy answer here. Many entrepreneurs are hardwired to go after big opportunities aggressively. Plus, out of the thousands of companies that get created every year, only a small fraction have serious, fundamental moral issues at stake if their business is successful.
And Iโ€™m not talking Uber/Lyft style issues where millions of taxi driver jobs will be impacted because new jobs will take their place via ride sharing. Thatโ€™s creative destruction working the way it should. Former taxi drivers will have other, albeit less attractive, options. On one side there is the tug of a big business opportunity that could make the company extremely successful, but on the other side there will be real people, and lots of them, hurt as a result. Some may find it easy to take the moral high ground and say without blinking that an entrepreneur shouldnโ€™t even consider it. As Iโ€™ve alluded, itโ€™s not that easy. Entrepreneurs hear they โ€œshouldnโ€™tโ€ all the time. Part of what makes building a business fun is proving all the naysayers wrong. However, this is a little different. Next, Iโ€™ll walk through some of the arguments to try and justify starting the software accountant business I described above. None is a slam dunk either for or against, in my book. โ€œCreative Destructionโ€ argument Until we have several examples of software being able to wipe out an entire profession like accountants, we can always use the creative destruction argument. Maybe by automating accountants some new type of business pops up somewhere else. The challenge with creative destruction is you can never predict how itโ€™s going to play out. The cycle has held for the last hundred years, so we canโ€™t be confident of it being broken until it is. It may look like accountants will be automated, but maybe it will present new opportunities for people with those skills! โ€œDoing People a Favorโ€ argument When we launched our automated quarterly earnings product with the Associated Press, my go-to line was that no financial reporter likes writing earnings reports. Itโ€™s stressful, repetitive, and mind-numbing. After talking to over a dozen financial reporters following our launch, it turned out I was pretty spot-on. Now, those financial reporters were freed up to focus on more important stories. We did them a favor! The same could be said for our accountants. Instead of doing the laborious financial work, they are freed up to think more โ€œstrategicallyโ€ and help companies work through issues as they scale their business. They will thank us. Except in this case, itโ€™s unlikely there need to be 1.3 million accountants to serve that โ€œstrategicโ€ role. Also, many accountants kind of like digging into numbers โ€” thatโ€™s what attracted them to the profession in the first place. โ€œSurvival of the Fittestโ€ argument Progressives may find it odd to think that we wouldnโ€™t move forward with a technology we know would be better than the current solutions today. From a purely evolutionary perspective, itโ€™s survival of the fittest, and if your job gets eliminated, you must find another occupation in order to survive. While you can claim evolutionary theory is on your side, itโ€™s not exactly a story you want to tell your kids before bed. โ€œInevitabilityโ€ argument The beat of technologyโ€™s drum seems to be unstoppable right now. While we have regressed at various points in human history, itโ€™s hard to see that happening now. If you donโ€™t build the software accountant company now, whatโ€™s stopping someone else from doing it later? If itโ€™s inevitable that the technology will be built, why not build it now? This โ€œpre-emptiveโ€ argument holds less water for me in the same way a pre-emptive nuclear attack doesnโ€™t make sense.
Just because we know North Korea is building nuclear bombs, it doesnโ€™t mean we should launch a nuclear bomb on them now. Any backlash? Letโ€™s assume you move forward with the software accountant business in 2037 with the macroeconomic climate I described earlier. What kind of reaction might we expect from the public? Can you imagine a situation where there are attacks against company CEOs or technology visionaries that are โ€œresponsibleโ€ for building the software? Think about the political climate. The question of โ€œjobsโ€ is a major part of presidential candidate platforms. Imagine we have increasing unemployment and a few entrepreneurs are well-known for creating profession-busting businesses. Is it outside the realm of possibility that undue political pressure (if not force) is applied to those entrepreneurs? What if a politician can help improve a certain stateโ€™s job outlook in the short-term by stopping a single entrepreneur? Given the number of senseless shootings we have now, is it crazy to think a mentally unstable person who lost their job will try to take retribution against the person or persons they feel are responsible? Perhaps a modern-day Luddite will feel justified just as groups did in the early 1800s when they attacked and burned factories in the name of stopping progress. This is a pretty grim picture for a self-professed optimist like myself to paint, but I donโ€™t think itโ€™s unimaginable. It will either happen quickly or slowly Itโ€™s plausible in the next 10โ€“20 years to see my thought experiment playing out. And I donโ€™t think we need AGI to do it. If it doesnโ€™t happen quickly with a specific entrepreneur and startup going after particular professions, it could happen more gradually as bigger companies chip away at it. Itโ€™s hard to predict how things will evolve as a society and the general sentiment towards AI. We are currently in a honeymoon stage. Iโ€™ve not experienced much negativism toward my company or the AI space in general. At least not yet. If anything, it is the opposite right now. Potential customers are disappointed when a solution isnโ€™t magical enough. Most real-world, high-profile examples of AI in action have at most meant humans needed to swallow their pride (Jeopardy Challenge, AlphaGo, etc.), but nothing more detrimental than that. After the first couple of public examples of jobs being lost en masse, which turn into fears of what is to come, the sentiment will likely turn negative. If that happens, one profession that could be in high demand is bodyguards for entrepreneurs. If you like this post, give it a โค๏ธ below so others may see it. Thank you!
The AI Entrepreneurโ€™s Moral Dilemma
92
the-ai-entrepreneurs-moral-dilemma-12b988f18cd0
2018-03-28
2018-03-28 19:34:56
https://medium.com/s/story/the-ai-entrepreneurs-moral-dilemma-12b988f18cd0
false
2,155
Practical insights for executives, managers, and project managers eager to deploy machine learning inside their company.
null
null
null
Machine Learning in Practice
info@infiniaml.com
machine-learning-in-practice
ARTIFICIAL INTELLIGENCE,MACHINE LEARNING,DEEP LEARNING,BUSINESS,NLP
InfiniaML
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Robbie Allen
CEO @InfiniaML, Exec Chairman @Ainsights, Lecturer at @kenanflagler, Ph.D. Student @UNCCS, Writing a book: http://machinelearninginpractice.com
c308c421ca8d
robbieallen
7,170
10
20,181,104
null
null
null
null
null
null
0
null
0
42263398024
2018-02-24
2018-02-24 04:51:50
2018-02-25
2018-02-25 17:50:38
6
false
en
2018-02-26
2018-02-26 05:18:20
12
12b98c118d4c
4.991509
6
0
0
Traditionally software companies have relied on their code to provide a competitive advantage and hence source code was kept in hugeโ€ฆ
5
Analyzing and Improving the Open-Source Health of your Repository Photo by โ€œMy Life Through A Lensโ€ on Unsplash Traditionally, software companies have relied on their code to provide a competitive advantage, and hence source code was kept in great secrecy. We have observed a radical shift in philosophy in recent years. Nowadays, a lot of companies and organisations are open sourcing their codebases. Facebookโ€™s React, Microsoftโ€™s Visual Studio Code and Googleโ€™s Tensorflow library are some of the most popular open source repositories. In this blog, we analyse the benefits of open sourcing, provide metrics to measure the success of open sourcing and suggest ways to improve it. Why open-source? Intuitively, sharing the things which earn you money seems bizarre. However, lots of companies today rely on data and the scale of supporting infrastructure to provide them with a competitive advantage, rather than their source code itself. Thus, where sharing the code doesnโ€™t create competition or security issues, the pros of open sourcing outweigh the cons, as it provides a lot of compelling advantages for business. For example, security: Given enough eyeballs, all bugs are shallow โ€” Linus Torvalds Given enough testers and developers, even complex vulnerabilities can be detected quickly. Bug fixes are also very quick in open source repositories as compared to proprietary software. Knowing that your code will be looked at and discussed by hundreds of developers results in developers making more effort to maintain and improve code quality. 10 reasons for open-sourcing lists various reasons why open sourcing is a good idea even for large companies. Life cycle and role of contributors in a repository In this section, we analyse the role of contributors at various stages in the lifecycle of a repository. From the figure, we observe that most of the issue reports are created by the contributors; this means a lot of bugs and defects, especially when the repository reaches some maturity, are detected by the open source developers. This saves a lot of time and resources in testing and QA. This also means that many new features are requested by the users of the repository rather than the developers, thus helping to bridge the developer-user gap. We now focus on the amount of code contributed from the open source community. We take the number of commits as the measure of contribution. Admittedly, using commits is not very accurate if the goal is to measure the percentage of contribution. However, it is quite serviceable when we want to compare various repositories, or the same repository at different times. We observe that, barring some spikes, most of the repositories follow this pattern โ€” 1. In the nascent stage, contributor commits are very low. 2. As the project increases in popularity, contributor commits increase rapidly. 3. As the project becomes mature, contributor commits stabilize. Moreover, the contribution from open source members forms a majority in the latter stages of the product, thus reducing support cost when the team members have moved on to newer things. We obtained commit, issue and employee info using the GitHub API (a minimal sketch of this kind of query appears at the end of this post). Lifecycle of some open source repositories Reasons for contributor attrition We list the number of developers with greater than a threshold number of commits in various repositories (see table below). We see that a huge fraction of people who started contributing donโ€™t continue for long. WHY IS THAT?
Attrition rate in various repositories Each person is different; they have different motivations and skill levels, and hence it is almost impossible to accurately predict all the reasons which lead to developer attrition. However, a few reasons are more responsible than others: Entry Barrier โ€” Many times the process to start contributing is either unclear or cumbersome. Slow and unresponsive community/owners โ€” Developers who have to wait long to get their queries answered are less likely to contribute further. Research from Mozilla suggests that maintainer responsiveness is a critical factor in encouraging repeat contributions. Rejected Work โ€” One thing a developer hates more than anything else is watching their work go to waste. Developers who raise a PR which doesnโ€™t merge into the master branch often lose interest. As a concrete example we look at the Elasticsearch repository. Out of 523 contributors whose PR was closed without merging, 410 didnโ€™t submit code to that repository again. Absence of active engagement โ€” There are contributors who have put significant effort into the repository but, in the absence of engagement from the repository owners, move on to new projects after a while. These are the people who are most important, as they are experienced with the code base and procedures and are likely to be more productive. They also possess unique knowledge and perspective about the repository which might be very useful. Increasing contributor retention The first step in this direction would be to make contributing easier. This involves having comprehensive guidelines, an effortless installation process and an uncomplicated submission task. Then owners have to take up the mantle of answering queries in a quick and satisfactory manner. A lot of the time, owners directly reject the work/PR if they think it is not adhering to guidelines or wonโ€™t add much value; instead, the attitude should be one of guiding the developer to rectify it so that it adds to the repository. It might be time-consuming, but it creates a better community. These practices are followed in all repositories to varying degrees. Open source is more than just code. Successful open source projects include code and documentation contributions together with conversations about these changes. โ€” @arfon, โ€œThe Shape of Open Sourceโ€ With the advancements in machine learning and NLP, we can strive for something higher. Some ways these technologies can be applied to improve community engagement are: Identifying queries among the comments in PRs/Issues โ€” this way owners can be quickly notified when there are unanswered questions, saving time in checking all the issues. Identifying indicators of negative emotions โ€” when a developer is not happy with something, their comments might reflect it. Identifying these can be very helpful in prioritizing things. Engaging advanced contributors โ€” based on the issues, labels and code data present in the repository, we can suggest open source contributors who are familiar with those types of issues to the owners. External validation of their expertise makes developers feel more welcomed and appreciated, and owners/managers also get a much larger workforce to solve the issues. We at DeepAffects use these and many other useful insights to improve collaboration and productivity in software projects. Do check it out if you found the blog interesting.
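As a rough illustration of the data gathering mentioned earlier, here is a minimal sketch of pulling per-contributor commit counts from the public GitHub REST API and turning them into contribution shares. This is an assumption about how such metrics could be computed, not the authors' actual pipeline; the owner/repo names are placeholders.

// Minimal sketch (not the authors' actual pipeline): fetch per-contributor
// commit counts via the public GitHub REST API and compute each contributor's
// share of all commits. Uses the built-in fetch of Node 18+. Unauthenticated
// requests are rate-limited, and only the first page (up to 100 contributors)
// is fetched here for brevity.
async function contributorShares(owner, repo) {
  const res = await fetch(
    `https://api.github.com/repos/${owner}/${repo}/contributors?per_page=100`
  );
  const contributors = await res.json(); // [{ login, contributions }, ...]
  const total = contributors.reduce((sum, c) => sum + c.contributions, 0);
  // Fraction of all commits made by each contributor, largest first.
  return contributors
    .map((c) => ({ login: c.login, share: c.contributions / total }))
    .sort((a, b) => b.share - a.share);
}

// Example call (placeholder repository):
contributorShares("elastic", "elasticsearch").then(console.log);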
Analyzing and Improving the Open-Source Health of your Repository
163
analyzing-and-improving-open-source-health-of-your-repository-12b98c118d4c
2018-03-01
2018-03-01 11:56:06
https://medium.com/s/story/analyzing-and-improving-open-source-health-of-your-repository-12b98c118d4c
false
1,071
DeepAffects uses artificial intelligence to understand your teamโ€™s dynamics to help you meet software delivery & support commitments, effectively.
null
DeepAffects
null
DeepAffects
sushant.hiray@seernet.io
deepaffects
DATA SCIENCE,TEAM DYNAMICS,ENGINEERING,TEAM BUILDING
DeepAffects
Open Source
open-source
Open Source
15,960
Royal Jain
Data Science @DeepAffects previously Fixed Income Strat @MorganStanley
cfecf0654b7c
royaljain0203
12
10
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-19
2017-11-19 22:42:53
2017-11-21
2017-11-21 09:01:06
2
false
en
2017-11-21
2017-11-21 09:01:06
7
12bacfb9b06
1.97956
3
0
0
My first publication on Medium talks about how Machine-Learning Algorithms will โ€œreadโ€ peopleโ€™s textual message (words and emojis)โ€ฆ
2
Google AI is Penetrating Human Action Source: https://research.google.com/ava/explore.html My first publication on Medium talked about how machine-learning algorithms can โ€œreadโ€ peopleโ€™s textual messages (words and emojis), training a deep learning neural network on billions of Tweets to predict the emotion behind them. Similar to ImageNet and Google Images, DeepMoji only detects information that is static, but when it comes to motion pictures, things become different and more sophisticated. On October 19, 2017, Google announced on its blog a labeled dataset for human action understanding, which labels 80 visual actions in 57,600 movie clips, each localized in space and time. Itโ€™s called AVA (atomic visual actions), a set of video clips annotated for human actions in multi-person environments, with YouTube as its data source. From the introduction on the Google Research Blog, we can see what Google has done at a glance: they collected content from YouTube, focusing on the โ€œfilmโ€ and โ€œtelevisionโ€ categories and including actors of different nationalities, then took a 15-minute clip from each video and partitioned it into 300 non-overlapping 3-second segments. After this, they manually labeled every person in each 3-second segment. Each person is labeled with actions from a pre-defined atomic action vocabulary (80 classes) that describes the personโ€™s actions. There are three groups of human actions: pose/movement actions, person-object interactions, and person-person interactions. The frequencies of AVAโ€™s labels follow a long-tail distribution. (Source: Google Research Blog) In real-life situations, the algorithms will be applicable under many circumstances. In security systems, we could use them to detect accidental falls of seniors in nursing homes, to detect criminal behavior in jail, to check production and manufacturing processes, and to identify a person by his/her gait and pose. However, when it comes to action recognition, thereโ€™s still a long way to go. Beyond these difficulties, there also exist problems like discrimination with regard to gender and race. For example, back in 2015 Googleโ€™s image recognition labeled two black people as gorillas, a case that had a huge impact on the discussion of machine recognition of humans. In terms of video recognition, Wired has reported that YouTube placed advertisements beside videos of terrorism and racial animosity, leading Pepsi and Wal-Mart to pull their advertisements. In the future, companies like Google will make every effort to improve AI, at high labor and monetary cost. With the ever-growing efficiency of machines, there should be more debate over these issues, and Iโ€™m definitely looking forward to this radical change.
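To make the segmentation arithmetic above concrete, here is a tiny illustrative sketch (my own, not Google's code) that generates the non-overlapping 3-second segment boundaries for one 15-minute clip; the function name and defaults are assumptions for illustration.

// Illustrative sketch (not Google's code): split a 15-minute clip into
// non-overlapping 3-second segments, as described in the AVA pipeline above.
function segmentBoundaries(clipSeconds = 15 * 60, segmentSeconds = 3) {
  const segments = [];
  for (let start = 0; start + segmentSeconds <= clipSeconds; start += segmentSeconds) {
    segments.push({ start, end: start + segmentSeconds });
  }
  return segments;
}

console.log(segmentBoundaries().length); // 300 segments per 15-minute clip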
Google AI is Penetrating Human Action
3
google-ai-is-penetrating-human-action-12bacfb9b06
2018-05-14
2018-05-14 09:47:03
https://medium.com/s/story/google-ai-is-penetrating-human-action-12bacfb9b06
false
423
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Alex Wang
null
148395c34ed1
wangilex
4
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-30
2018-07-30 09:54:31
2018-07-30
2018-07-30 09:55:25
0
false
ko
2018-07-30
2018-07-30 09:55:25
0
12bad98b6503
2.794
0
0
0
๋ธ”๋ก์ฒด์ธ์ด๋ž€ ๋ฌด์—‡์ธ๊ฐ€?
5
์—”ํ„ฐํ…Œ์ธ๋จผํŠธ ์—…๊ณ„ ํˆฌ์ž์— ๋Œ€ํ•œ ๊ด€์‹ฌ์„ ๋‹ค์‹œ ํ•œ๋ฒˆ ๋ถˆ๋Ÿฌ์ผ์œผํ‚ฌ ๋ธ”๋ก์ฒด์ธ ๋ธ”๋ก์ฒด์ธ์ด๋ž€ ๋ฌด์—‡์ธ๊ฐ€? ๋ธ”๋ก์ฒด์ธ์€ ๋ถ„์‚ฐํ™”๋œ ๋””์ง€ํ„ธ ๊ฑฐ๋ž˜ ๊ธฐ๋ก ์‹œ์Šคํ…œ์ž…๋‹ˆ๋‹ค. ์ด ๊ธฐ์ˆ ์„ ์กฐ๊ธฐ์— ์ฑ„ํƒํ•œ ์‚ฌ์šฉ์ž๋“ค์€ ๊ฑฐ๋ž˜ ๋ช…ํ™•์„ฑ์„ ์š”๊ตฌํ•˜๋Š” ๋ชจ๋“  ์ข…๋ฅ˜์˜ ๊ฑฐ๋ž˜์— ๋Œ€ํ•ด ๋ธ”๋ก์ฒด์ธ์„ ํ•ด๊ฒฐ์ˆ˜๋‹จ์œผ๋กœ ์‚ฌ์šฉํ–ˆ์Šต๋‹ˆ๋‹ค. ํ˜„์žฌ ์˜ํ™” ์‚ฐ์—…์˜ ๋ฌธ์ œ์ ์ธ ๋ณต์žกํ•œ ๊ณ„์•ฝ ์ฒด๊ฒฐ๊ณผ ์˜ค๋ž˜๋œ ์ œ์ž‘์‹œ์Šคํ…œ๋“ค์†์—์„œ ์ ์  ๋ธ”๋ก์ฒด์ธ ๋„์ž…์˜ ํ•„์š”์„ฑ์ด ์ œ๊ธฐ๋˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๋‰ด์š•์ค‘์‹ฌ์˜ ์ž˜ ๋‚˜๊ฐ€๋Š” DTV ๋Š” ์˜ํ™” ์ œ์ž‘์ž๊ฐ€ ์ œ์ž‘๋น„๋ฅผ ํšจ์œจ์ ์œผ๋กœ ์‚ฌ์šฉํ•˜๊ณ , ์ฝ˜ํ…์ธ ์— ์ž˜ ๋ถ„๋ฐฐํ•  ์ˆ˜ ์žˆ๋„๋ก ๋•๊ธฐ ์œ„ํ•ด ์–ดํ”Œ๋ฆฌ์ผ€์ด์…˜๋“ค์„ ์‚ฌ์šฉํ•ด ์™”์Šต๋‹ˆ๋‹ค. ๋ธ”๋ก์ฒด์ธ์˜ ํŽ€๋”๋ฉ˜ํ„ธ์ด ํˆฌ๋ช…ํ•œ ๋ถ„์‚ฐ์›์žฅ ๊ธฐ๋ก์‹œ์Šคํ…œ์„ ์‚ฌ์šฉํ•ด ์‹œ์žฅ์ฐธ์—ฌ์ž๋“ค์˜ ํ˜‘์—…๊ณผ ๋ฐœ์ „์„ ๋„์šธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ „ ์„ธ๊ณ„์˜ ์œ ๋ช…์ธ์‚ฌ๋“ค์€ ๋ธ”๋ก์ฒด์ธ์„ ์ด์šฉํ•ด ๋งŒ๋“ค์–ด์ง„ ๋…์ ์ ์ธ ํ•„๋ฆ„ ์ฒด์ธ ์ˆ˜์ง‘ ์‹œ์Šคํ…œ์„ ๊ฒฝํ—˜ํ•˜๊ณ  ์žˆ์œผ๋ฉฐ, ์ฝ˜ํ…์ธ  ๊ฐœ๋ฐœ์ž๊ฐ€ ๋ฐ์ดํ„ฐ๋ฅผ ์ด์šฉํ•œ ๋ถ„์„๊ณผ ๋ฆฌ์„œ์น˜๊ฐ€ ๊ฐ€๋Šฅํ•˜๊ฒŒ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ์กฐ์น˜๋Š” ์˜ํ™” ํˆฌ์ž์ž ์‚ฌ๊ธฐ ๋ฐ ๊ด€ํ–‰์ ์œผ๋กœ ์กด์žฌํ•˜๋˜ ๋ฌธ์ œ์ ๋“ค์„ ํ•ด๊ฒฐํ•˜๊ธฐ ์œ„ํ•ด ์—…๊ณ„์—์„œ ์ฑ„ํƒ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ ์‚ฌ์šฉ์ž๋Š” ๋ณดํ˜ธ๋œ ๋””์ง€ํ„ธ ์žฅ์น˜๋ฅผ ํ†ตํ•ด ํˆฌ๋ช…์„ฑ์„ ๋†’์ผ ์ˆ˜ ์žˆ๋Š” ๊ฑฐ๋ž˜์™€ ํ•จ๊ป˜ ๋น„์ฆˆ๋‹ˆ์Šค๋ฅผ ์ด๋Œ๊ธฐ ์œ„ํ•œ ์ƒˆ๋กœ์šด ๋ชจ๋ธ๋กœ์„œ ๋ธ”๋ก์ฒด์ธ์„ ์žฌ๋ฐœ๊ฒฌ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฉ”์ปค๋‹ˆ์ฆ˜ ์ด ๋ชจ๋ธ์€ ์ฝ˜ํ…์ธ ์˜ ์ดํ•ด๊ด€๊ณ„์ž, ์„œํฌํ„ฐ ๋ฐ ์ œ์ž‘์ž์—๊ฒŒ ์ธํ„ฐ๋ž™ํ‹ฐ๋ธŒํ•œ ๋ถ„์„์„ ํ†ตํ•ด ์‹ค์‹œ๊ฐ„์œผ๋กœ ๊ด€์ฐฐํ•˜์—ฌ ์ฝ˜ํ…์ธ ๋ฅผ ๋ณด๋Š” ์‚ฌ๋žŒ์ด ๋ˆ„๊ตฌ์ธ์ง€, ์ฝ˜ํ…์ธ ๊ฐ€ ์–ธ์ œ, ์–ด๋””์„œ, ์–ด๋–ป๊ฒŒ ์†Œ๋น„๊ฐ€ ๋˜๋Š”์ง€ ํŒŒ์•…ํ•  ์ˆ˜ ์žˆ๋Š” ๊ธฐํšŒ๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์ด ํ†ตํ•ฉ์ ์ด๊ณ  ํˆฌ๋ช…์ ์ธ ๋ชจ๋ธ์€ ์ œ์ž‘์ž๊ฐ€ ๋ณด๋‹ค ํ–ฅ์ƒ๋œ ์ฝ˜ํ…์ธ ๋ฅผ ๋งŒ๋“œ๋Š” ๋ฐ ๋„์›€์„ ์คŒ์œผ๋กœ์จ, ์—”ํ„ฐํ…Œ์ธ๋จผํŠธ ์—…๊ณ„๋ฅผ ์ƒˆ๋กœ์šด ๋ชจ๋ธ๋กœ ์ „ํ™˜์‹œํ‚ฌ ๊ฒƒ์œผ๋กœ ์ƒ๊ฐ๋ฉ๋‹ˆ๋‹ค. ๋ณ€ํ™”์˜ ์š”์ธ ํ˜„์กดํ•˜๋Š” ๋ฌธ์ œ์ ๋“ค์„ ํ•ด๊ฒฐํ•˜๊ธฐ ์œ„ํ•ด ๋‚˜ํƒ€๋‚œ ๋ธ”๋ก์ฒด์ธ์€ ๋งŽ์€ ์ด๋“ค์˜ ๊ด€์‹ฌ์„ ๋ถˆ๋Ÿฌ ์ผ์œผ์ผฐ๊ณ  ๊ทธ๋กœ ์ธํ•ด ์•”ํ˜ธํ™”ํ ์‹œ์žฅ์˜ ์‹œ๊ฐ€์ด์•ก์ด ์ปค์กŒ์Šต๋‹ˆ๋‹ค. ์˜ํ™”์—…๊ณ„์˜ ๋งŽ์€ ์ œ์ž‘์ž๋“ค์€ ์ด ๋ธ”๋ก์ฒด์ธ์„ ๊ธฐ๋ฐ˜์œผ๋กœ ํŽ€๋”ฉ์„ ๋ฐ›์•„ ์˜ํ™”์‚ฐ์—… ๋ณ€ํ™”์˜ ํ•œ ์ถ•์„ ์ด๋ฃจ๊ณ ์ž ๋ฐœ๊ฑธ์Œ์„ ๋–ผ์—ˆ์Šต๋‹ˆ๋‹ค. ๋ธ”๋ก์ฒด์ธ์€ ์ž๋™ํ™”๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์Šค๋งˆํŠธ ๊ณ„์•ฝ์€ ํŠน์ •ํ•œ ์ƒํ™ฉ์—์„œ ๊ณ„์•ฝ์„ ๋งŒ์กฑ์‹œํ‚ค๊ธฐ์œ„ํ•ด ํ•„์ˆ˜์ ์ธ ๊ธฐ์ˆ ์„ ๋‚ด์žฌํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๋น„ํŠธ์ฝ”์ธ, ์ด๋”๋ฆฌ์›€, ๋ผ์ดํŠธ์ฝ”์ธ๊ณผ ๊ฐ™์€ ์•”ํ˜ธํ™”ํ๋“ค์€ ๋ธ”๋ก์ฒด์ธ ๊ธฐ์ˆ ์„ ์‚ฌ์šฉํ•˜์—ฌ, ๊ฑฐ๋ž˜๋‹น์‚ฌ์ž๊ฐ„์˜ ๊ฑฐ๋ž˜๊ธˆ์•ก๊ณผ ๊ฑฐ๋ž˜์‹œ๊ฐ„์„ ๊ธฐ๋กํ•ฉ๋‹ˆ๋‹ค. ํ†ตํ™”์˜ ๊ฐ€์น˜๋Š” ๊ธฐ์ˆ ์  ํŠน์ง•๋ฟ๋งŒ ์•„๋‹ˆ๋ผ ์‚ฌ์šฉ์ž๋“ค์˜ ๋งŽ์€ ๊ฑฐ๋ž˜๋ฅผ ํ†ตํ•ด ์ž…์ฆ๋œ ์œ ์šฉ์„ฑ ๋“ฑ์„ ํ†ตํ•ด ๊ทธ ๊ฐ€์น˜๊ฐ€ ๊ฒฐ์ •๋ฉ๋‹ˆ๋‹ค. ๊ธฐ๋ณธ์ ์ธ ๋ฐฐ๊ฒฝ์ง€์‹๋งŒ์œผ๋กœ๋„ ๋ธ”๋ก์ฒด์ธ์€ ๋น„์ฆˆ๋‹ˆ์Šค ํŠธ๋žœ์žญ์…˜์ด๋‚˜ ์ž์‚ฐ์„ ์ถ”์ ํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋  ์ˆ˜ ์žˆ๋‹ค๋Š” ๊ฒƒ์„ ์šฐ๋ฆฐ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ธ”๋ก์ฒด์ธ ๊ธฐ์ˆ ์˜ ์‚ฌ์šฉ์ž๋“ค์€ ์ด๊ฒƒ์ด ํ˜„์žฌ ๋””์ง€ํ„ธ์„ ์‚ฌ์šฉํ•ด์„œ ์ฒ˜๋ฆฌํ•˜๊ณ , ๋ฐ์ดํ„ฐ๋ฑ…ํฌ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๋˜ํ•œ ๊ฑฐ๋ž˜์™€ ๊ด€๋ จ๋œ ๋ชจ๋“  ์‚ฐ์—…๋“ค์˜ ํŒจ๋Ÿฌ๋‹ค์ž„์„ ๋ณ€ํ™”์‹œํ‚ฌ ์ˆ˜ ์žˆ๋‹ค๊ณ  ์ฃผ์žฅํ•ฉ๋‹ˆ๋‹ค. ์˜ํ™” ์‚ฐ์—…์—์„œ๋Š” ์ง€๋ถˆ๊ธฐ๋ก์„ ์ถ”์ ํ•จ์œผ๋กœ์จ ๋””์ง€ํ„ธ ์ €์ž‘๊ถŒ ๊ด€๋ฆฌ์— ๋„์›€์ด ๋  ์ˆ˜ ์žˆ๊ณ , ์‹œ๊ฐ์  ํšจ๊ณผ ์ฒ˜๋ฆฌ๋ฅผ ๋ธ”๋ก์ฒด์ธ ๋„คํŠธ์›Œํฌ์— ํ• ๋‹นํ•˜์—ฌ ์ œ์ž‘ ๋น„์šฉ์„ ์ ˆ๊ฐํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
์ „์ฒด ํŠธ๋žœ์ ์…˜ ์‚ฌ์ดํด์—์„œ ํˆฌ๋ช…์„ฑ์„ ์œ ์ง€ํ•  ์ˆ˜ ์žˆ๋Š” ์ด ์‹œ์Šคํ…œ์„ ์‚ฌ์šฉํ•œ๋‹ค๋ฉด, ์ข…์ข… ์ง€์—ฐ๋˜๋Š” ํˆฌ์ž๊ธˆ ํšŒ์ˆ˜์™€ ๋”๋ถˆ์–ด ์˜ํ™”์‚ฐ์—…๊ณ„์—์„œ ์ง€๋ถˆ ๋ฐ ์žฌ์ • ๋ฌธ์ œ์™€ ๊ด€๋ จ๋˜์–ด ์ž์ฃผ ์ผ์–ด๋‚˜๋Š” ์˜ํ™”์ œ์ž‘ ํˆฌ์ž ์‚ฌ๊ธฐ๋“ค์„ ํ•ด๊ฒฐํ•˜๊ธฐ๊ฐ€ ์‰ฌ์›Œ์ง‘๋‹ˆ๋‹ค. ๊ฒฐ๋ก  ์—”ํ„ฐํ…Œ์ธ๋จผํŠธ ์‚ฐ์—…์ด ํˆฌ์ž์ž๋ฅผ ๋ณดํ˜ธํ•˜๊ณ  ๋กœ์—ดํ‹ฐ์™€ ์ง€๊ธ‰๊ธฐ๋ก์„ ์ถ”์ ํ•˜๋Š” ์œ„์™€ ๊ฐ™์€ ๋ณด๋‹ค ํˆฌ๋ช…ํ•œ ์‹œ์Šคํ…œ์„ ๊ฐ–์ถ˜๋‹ค๋ฉด, ๊ธฐ์กด์˜ ๊ทธ๋ฆฌ๊ณ  ์ž ์žฌ์ ์ธ ํˆฌ์ž์ž๋“ค์˜ ํˆฌ์žํ™œ๋™ ๋ฐ ํˆฌ์ž ์ด์ต๊ธˆ ํšŒ์ˆ˜์— ๋„์›€์ด ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ํˆฌ์ž์ž๋“ค์˜ ๋ˆˆ๋†’์ด์™€ ๊ธฐ์ค€์„ ์ถฉ์กฑ์‹œํ‚ค๊ธฐ์—” ์—ญ๋ถ€์กฑํ•œ ๊ธฐ์กด์˜ ์‹œ์Šคํ…œ์œผ๋กœ ์—”ํ„ฐํ…Œ์ธ๋จผํŠธ ์‚ฐ์—…์ด ์ ์  ๋”๋”˜ ๋ฐœ์ „์„ ๋ณด์—ฌ์ฃผ์—ˆ๋‹ค๋ฉด, ๋ธ”๋ก์ฒด์ธ์ด ๋„์ž…๋จ์— ๋”ฐ๋ผ ์ž์น˜์ ์ธ ์‹œ์Šคํ…œ์œผ๋กœ ๋ชจ๋“  ์ดํ•ด๊ด€๊ณ„์ž๋“ค์—๊ฒŒ ์•ˆ์ „์„ฑ๊ณผ ํˆฌ๋ช…์„ฑ์„ ์ œ๊ณตํ•จ์œผ๋กœ์จ ์‚ฐ์—…๋ฐœ์ „์— ๋„์›€์ด ๋  ๊ฒƒ์œผ๋กœ ๊ธฐ๋Œ€๋ฉ๋‹ˆ๋‹ค.
์—”ํ„ฐํ…Œ์ธ๋จผํŠธ ์—…๊ณ„ ํˆฌ์ž์— ๋Œ€ํ•œ ๊ด€์‹ฌ์„ ๋‹ค์‹œ ํ•œ๋ฒˆ ๋ถˆ๋Ÿฌ์ผ์œผํ‚ฌ ๋ธ”๋ก์ฒด์ธ
0
์—”ํ„ฐํ…Œ์ธ๋จผํŠธ-์—…๊ณ„-ํˆฌ์ž์—-๋Œ€ํ•œ-๊ด€์‹ฌ์„-๋‹ค์‹œ-ํ•œ๋ฒˆ-๋ถˆ๋Ÿฌ์ผ์œผํ‚ฌ-๋ธ”๋ก์ฒด์ธ-12bad98b6503
2018-07-30
2018-07-30 09:55:26
https://medium.com/s/story/์—”ํ„ฐํ…Œ์ธ๋จผํŠธ-์—…๊ณ„-ํˆฌ์ž์—-๋Œ€ํ•œ-๊ด€์‹ฌ์„-๋‹ค์‹œ-ํ•œ๋ฒˆ-๋ถˆ๋Ÿฌ์ผ์œผํ‚ฌ-๋ธ”๋ก์ฒด์ธ-12bad98b6503
false
398
null
null
null
null
null
null
null
null
null
Film
film
Film
48,256
ephelants360 Official | Korea
The ephelants360 platform is an all-in-one service that provides an eco-system for result-driven film & tv content creation.
ffde45cab0e5
ephelants360.io.korea
1
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-09
2018-07-09 02:02:08
2018-07-09
2018-07-09 02:49:18
8
true
en
2018-07-15
2018-07-15 15:46:28
2
12bb3ce45a03
3.72956
19
4
0
Data on Gender Balance in Media
5
How Balanced is Gender Representation in Media? (2017โ€“2018) Revelations about the abuse of power by men have been manifold over the past year. In particular, it has become clear that the media business suffers from some of the worst sexism of any industry. How has this lopsided power imbalance between men and women been reflected in the media that we consume? I set out to answer one part of this question by gathering the latest data on as many forms of media as I could find. Mainly, I wanted to know: can we measure gender imbalances at the top end of media industries? Among the highest performers in books, music, movies and more, what does gender representation look like in 2017โ€“2018? The results are unsurprising and somewhat depressing. Nevertheless, they provide glimmers of hope that the world may be changing. Representation in Music, Fiction, and Movies Despite women being dominant when it comes to making number one hit songs, over the last year they are rather underrepresented on the Billboard charts. The below plot is ordered by the songs which spent the most time on the Billboard Hot-100. Female artists are in red, and male in blue. Gender balance is much healthier in the realm of fiction publishing. In fact, exactly 15 of the top 30 novelists this past year are women (as measured by weeks on the NYT Fiction Bestsellers List). Movies, on the other hand, remain something of a boysโ€™ club. In the below graph, I show the top grossing films of 2017, again splitting those with a male protagonist (blue) and those with a female protagonist (red). The situation is not quite as bad as the chart suggests. Many of the yearโ€™s hit films (including Jumanji, Guardians, Kong, War for the Planet of the Apes, and the Mummy) also have strong female protagonists alongside the male one. Where the movie business really fails is in the representation of women directors. In 2017, Patty Jenkins (Wonder Woman) was the only woman director who made a film that broke into the top 30 at the box office. Social Media (a hope for the future?) Though inequality is still rampant in the world of old media, things look much more promising on social media platforms. The chart below shows the top independent Instagram, YouTube, and Twitter accounts โ€” ie those not representing large companies or groups. Women are ubiquitous on Instagram. A full two-thirds of the top accounts, including 4 of the top 5, are held by women. Similarly, on Twitter, 14/16 of the top 30 accounts are held by women (including 6 of the top 10). It seems that when it comes to popularity and celebrity, women are doing better than men. The situation is reversed on YouTube, where only 2 of the top 20 channels are run by women. This may owe to the fact that many of the top YouTube channels are devoted to gaming. That women are not well represented in video games is a separate and controversial topic. Show me the Moneyโ€ฆ As gender representation in old-world media is found wanting, we may wonder whether womenโ€™s dominance on social platforms translates into larger paydays. Alas, this is not the case. The below chart shows the total incomes of the top musicians, actors and celebrities in 2017 (courtesy of Forbes). Even though the top earning celebrities list includes social media superstars like the Kardashians, most still do not make it into the top 30. The highest paid celebrities are still mostly men, with only Beyonce, JK Rowling, Ellen Degeneres, and Adele to break things up.
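As a toy illustration of the counting behind these charts (my own sketch; the entries below are hypothetical placeholders, not the article's dataset), measuring representation reduces to computing the share of chart slots held by women:

// Toy sketch of the measurement used in this post; the entries are
// hypothetical placeholders, not the article's actual dataset.
const topAccounts = [
  { name: "Account A", gender: "F" },
  { name: "Account B", gender: "M" },
  { name: "Account C", gender: "F" },
  { name: "Account D", gender: "M" },
];
const women = topAccounts.filter((a) => a.gender === "F").length;
const share = women / topAccounts.length;
console.log(`${Math.round(share * 100)}% of the top accounts are held by women`);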
Update: Since the time of this writing, it does seem that one of the Kardashian clan has made it into the top earners of 2018. We’ll see how many women are represented when the full 2018 list comes out. Conclusions Clearly, gender equality in media has a long way to go. Nevertheless, the world is changing. As the old paradigms of media consumption are disrupted by new forms of entertainment, we may hope to see women’s social media popularity translated into well-earned cold, hard cash.
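The measurement behind these charts is straightforward to reproduce. Below is a minimal sketch in Python, assuming a hypothetical CSV file (billboard_2017.csv) with artist, weeks_on_chart, and a hand-labeled gender column; none of these names come from the article itself.

```python
import pandas as pd

# Hypothetical input: one row per charting song, with a hand-labeled
# 'gender' column for the primary artist.
df = pd.read_csv("billboard_2017.csv")

# Rank by time on the chart and keep the top 30, mirroring the
# article's "weeks on the Billboard Hot 100" ordering.
top30 = df.sort_values("weeks_on_chart", ascending=False).head(30)

# Share of the top 30 slots held by women.
female_share = (top30["gender"] == "female").mean()
print(f"Women hold {female_share:.0%} of the top 30 slots")
```

The same recipe applies to the bestseller, box-office, and social media lists: rank by the relevant metric, take the top N, and compute the share by gender.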
Gender Balance in Media
170
gender-balance-in-media-12bb3ce45a03
2018-07-15
2018-07-15 15:46:28
https://medium.com/s/story/gender-balance-in-media-12bb3ce45a03
false
688
null
null
null
null
null
null
null
null
null
Gender Equality
gender-equality
Gender Equality
13,774
Michael Tauberg
Engineer interested in words and how they shape society
a4d3e7fca7af
michaeltauberg
373
163
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-12
2017-11-12 17:43:46
2017-11-12
2017-11-12 17:50:20
1
false
en
2017-11-27
2017-11-27 09:28:53
2
12bdf0b5577e
0.735849
1
0
0
As of 27th November 2017, TOP TUBE has crossed more than 800 thousand downloads on Google Play. Top Tube is a lightweight, clean and fast…
5
Bangladeshi mobile app TOP TUBE crossed 800 thousand downloads on Google Play As of 27th November 2017, TOP TUBE has crossed more than 800 thousand downloads on Google Play. Top Tube is a lightweight, clean and fast client for YouTube, offering the top 10 YouTube songs and news items every 12 hours. The application was developed by young Bangladeshi developers. TOP TUBE collects millions of YouTube data points and summarizes the day’s top and most popular YouTube videos. Digests are delivered twice a day — once in the morning and once in the evening. All the top videos are summarized and presented with their key information. We care about your busy schedule, so there are only the top 10 videos per category. Using an Artificial Neural Network (ANN) and a linear algorithm, a virtual brain analyzes millions of YouTube data points and finds the top ten results every 12 hours. Get the app here: @TOPTUBE_download_link
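The post does not explain how the twice-daily digest is actually ranked, so the following is only an illustrative sketch of a "top 10 per category" selection. The input shape and the 0.7/0.3 weighting are invented for illustration, not taken from the app.

```python
from operator import itemgetter

def top_ten(videos):
    """Score each video and keep the ten best.

    `videos` is a list of dicts with hypothetical 'views' and 'likes'
    fields; the 0.7/0.3 weighting is a stand-in for whatever model
    the app actually uses.
    """
    scored = [(0.7 * v["views"] + 0.3 * v["likes"], v) for v in videos]
    scored.sort(key=itemgetter(0), reverse=True)  # highest score first
    return [v for _, v in scored[:10]]
```

In practice a job scheduler (e.g. cron) would run this every 12 hours per category to produce the morning and evening digests.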
Bangladeshi mobile app TOP TUBE crossed 800 thousand downloads on Google Play
24
bangladeshi-mobile-app-top-tube-crossed-700-thousands-downloads-on-googleplay-12bdf0b5577e
2018-01-05
2018-01-05 05:18:12
https://medium.com/s/story/bangladeshi-mobile-app-top-tube-crossed-700-thousands-downloads-on-googleplay-12bdf0b5577e
false
142
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Jubayer Hossain
null
b21034ac7368
jubayerhossain
97
97
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-24
2018-05-24 14:23:02
2018-05-24
2018-05-24 14:30:07
0
false
en
2018-05-24
2018-05-24 14:30:07
1
12bf4d8f8dd3
1.422642
0
0
0
Everyone knows about the recently held Google I/O 2018 event. We saw many interesting features of the upcoming Android version and its upcoming…
3
Artificial Intelligence: New Era Everyone knows about the recently held Google I/O 2018 event. We saw many interesting features of the upcoming Android version and its upcoming apps, which use Artificial Intelligence extensively; we could even say the whole system is based on Artificial Intelligence and Machine Learning. AI and machine learning In this era of AI, our smartphones and computers can do almost everything for us: remembering things and reminding us at the right time, keeping notes, texting someone or ordering something online. All these tasks are done by virtual personal assistants like Siri, Alexa, Cortana or Google Assistant. Google Duplex Google’s CEO, Sundar Pichai, recently demonstrated how Google Assistant can make a phone call for us in the background while we are busy with other tasks. If you haven’t seen that video yet, here it is. You can see the personal assistant making a phone call to book an appointment at a beauty parlor, and that is pretty good. It makes decisions by itself and handles things on our behalf. There is also another example that didn’t go so well, where the assistant does not make any decision and leaves the task up to its master. A game changer So, this is the era. This is AI. And this is just the beginning; what we have seen so far is only a trailer. There will be more systems and devices built on AI and ML. It is even possible that AI and ML could take your job and leave you unemployed. Yes, that is possible. AI will be a big game changer for the technical field. Slowly, we will see AI replacing jobs and eating into wages and salaries. What should be done? In this rising era of AI, we must prepare ourselves for this game-changing battle. Try to learn AI and machine learning; it is better to create the system than to watch it eat our jobs. Be a master, develop things related to AI, and be part of this rising era of AI. Footnotes: My blog post Artificial Intelligence: New Era Everyone knows about the recently held Google I/O 2018 event. We saw many interesting features of the upcoming Android version…techtalkeveryday.blogspot.in
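The "leaves the task up to its master" behaviour described above is essentially a confidence threshold with a human-in-the-loop fallback. Here is a minimal sketch of that pattern; the model object, its predict call, and the 0.8 threshold are all illustrative, not Google's actual design.

```python
def handle_request(request, model, threshold=0.8):
    """Let the assistant act only when it is confident enough.

    `model.predict` is a hypothetical call returning an (action,
    confidence) pair; below the threshold, the task is handed back
    to the user instead of being decided automatically.
    """
    action, confidence = model.predict(request)
    if confidence >= threshold:
        return action.execute()
    return f"Not sure how to handle {request!r}; over to you."
```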
Artificial Intelligence: New Era
0
artificial-intelligence-new-era-12bf4d8f8dd3
2018-05-24
2018-05-24 14:30:08
https://medium.com/s/story/artificial-intelligence-new-era-12bf4d8f8dd3
false
377
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Mohit Nirmal
null
afe6c180954a
nirmalmohit96
3
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-19
2018-04-19 21:46:18
2018-04-19
2018-04-19 21:47:50
4
false
en
2018-04-19
2018-04-19 21:47:50
3
12bf65126e52
2.990566
0
0
0
As technological innovation continues to advance, we are shifting the fundamental way we want to interact with the information around usโ€ฆ
5
AIโ€™s Potential in AR As technological innovation continues to advance, we are shifting the fundamental way we want to interact with the information around us; we are no longer focused on screens but looking up to have technology blend into our daily lives more naturally. Augmented Reality brings elements of the virtual world into our world thus enhancing the things we see, hear, and even feel. Augmented Reality, or AR, sits in the middle of the mixed reality spectrum, between the real world and the virtual world, and is believed to have the most potential for practical use cases. Whether itโ€™s students in med school opening up beating hearts in AR to learn how they operate to become more effective doctors or construction workers reviewing building specs to make sure no unnecessary mistakes are made, Augmented Reality is helping us be smarter, faster and more effective. For Augmented Reality to reach its full potential itโ€™s not enough for applications to be able to display information; AR will need to be smart. The idea of integrating Artificial Intelligence (AI) systems is not a revolutionary one. Companies working with AR today are already finding new ways to intelligently guide their applications to understand where the user is in space, the objects around the user, and how all these things interact and relate with each other. On the other side companies around the world are training these AI systems to recognize objects and people. For example, Google is using its deep learning algorithms for object recognition. As technology like this continues to evolve, machines will be able to provide us additional context and understanding to the world around us. Imagine taking a trip to a foreign country and having AR point out interesting places around you or help you with directions or translations. Closer to home, retail stores will allow shoppers using AR with AI overlays to better understand options through descriptions, buyer ratings, and even complementary goods. This will bring the powers of online retailers like Amazon into the brick and mortar store, which might be the one thing that can save physical stores. While the benefits for AI in Augmented Reality are seemingly endless there are a couple of major restraints from having this technology fully up and functional today. First, and perhaps the largest hindrance, the lack of an AR Cloud. This software infrastructure will need to be in place to hold, process and stream all AR experiences as well as a digital twin. The digital twin refers to the digital replica of physical assets, processes, and systems which provides information on how they can interact with the physical and digital world around them. Another large problem is once this infrastructure is set up how will we be able to effectively access it. Software today is already constrained by limited hardware abilities and as the need for real-time data and graphics continue to grow, the need for intelligent optimization will become a necessity. While early adapters to Augmented Reality with AI might be more forgiving, user experience will need to be clean, streamlined and easy in order for mass adoption to take place. While Virtual Reality has flopped in the minds of the consumer, Augmented Reality is being turned to as the new most desired technological advancement in our daily lives. The potential AR has made is astronomically better by the interweaving of smart systems or AI. 
You can now bring your 3D designs to life and experience the simplicity of mobile AR through our iOS application. Try it out for yourself by signing up for a free trial below!
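The object-recognition building block the article mentions can be sketched with an off-the-shelf classifier. This is not Google's system or Umbra's, just a minimal stand-in using a pretrained torchvision model (torchvision >= 0.13 API; the file name in the usage comment is made up).

```python
import torch
from PIL import Image
from torchvision.models import resnet18, ResNet18_Weights

# Pretrained ImageNet classifier standing in for the kind of object
# recognition an AR overlay would need.
weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights).eval()
preprocess = weights.transforms()

def label_for_overlay(image_path: str) -> str:
    """Return a text label an AR app could render next to the object."""
    img = Image.open(image_path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)  # add a batch dimension
    with torch.no_grad():
        idx = model(batch).argmax(dim=1).item()
    return weights.meta["categories"][idx]

# e.g. label_for_overlay("storefront.jpg") might return "grocery store"
```

A real AR pipeline would run detection on live camera frames rather than single images, but the classification step looks much the same.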
AIโ€™s Potential in AR
0
ais-potential-in-ar-12bf65126e52
2018-04-19
2018-04-19 21:47:51
https://medium.com/s/story/ais-potential-in-ar-12bf65126e52
false
607
null
null
null
null
null
null
null
null
null
Augmented Reality
augmented-reality
Augmented Reality
13,305
Umbra 3D
null
385de5e26996
Umbra3D
10
1
20,181,104
null
null
null
null
null
null