audioVersionDurationSec: float64, min 0, max 3.27k
codeBlock: string, lengths 3 to 77.5k
codeBlockCount: float64, min 0, max 389
collectionId: string, lengths 9 to 12
createdDate: string, 741 distinct values
createdDatetime: string, length 19
firstPublishedDate: string, 610 distinct values
firstPublishedDatetime: string, length 19
imageCount: float64, min 0, max 263
isSubscriptionLocked: bool, 2 classes
language: string, 52 distinct values
latestPublishedDate: string, 577 distinct values
latestPublishedDatetime: string, length 19
linksCount: float64, min 0, max 1.18k
postId: string, lengths 8 to 12
readingTime: float64, min 0, max 99.6
recommends: float64, min 0, max 42.3k
responsesCreatedCount: float64, min 0, max 3.08k
socialRecommendsCount: float64, min 0, max 3
subTitle: string, lengths 1 to 141
tagsCount: float64, min 1, max 6
text: string, lengths 1 to 145k
title: string, lengths 1 to 200
totalClapCount: float64, min 0, max 292k
uniqueSlug: string, lengths 12 to 119
updatedDate: string, 431 distinct values
updatedDatetime: string, length 19
url: string, lengths 32 to 829
vote: bool, 2 classes
wordCount: float64, min 0, max 25k
publicationdescription: string, lengths 1 to 280
publicationdomain: string, lengths 6 to 35
publicationfacebookPageName: string, lengths 2 to 46
publicationfollowerCount: float64
publicationname: string, lengths 4 to 139
publicationpublicEmail: string, lengths 8 to 47
publicationslug: string, lengths 3 to 50
publicationtags: string, lengths 2 to 116
publicationtwitterUsername: string, lengths 1 to 15
tag_name: string, lengths 1 to 25
slug: string, lengths 1 to 25
name: string, lengths 1 to 25
postCount: float64, min 0, max 332k
author: string, lengths 1 to 50
bio: string, lengths 1 to 185
userId: string, lengths 8 to 12
userName: string, lengths 2 to 30
usersFollowedByCount: float64, min 0, max 334k
usersFollowedCount: float64, min 0, max 85.9k
scrappedDate: float64, min 20.2M, max 20.2M
claps: string, 163 distinct values
reading_time: float64, min 2, max 31
link: string, 230 distinct values
authors: string, lengths 2 to 392
timestamp: string, lengths 19 to 32
tags: string, lengths 6 to 263
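The listing above is the column schema of the scraped Medium-articles dump (name, dtype, and length range or number of distinct values). As a rough sketch of how those columns might be inspected, here is a small pandas snippet; the file name is a placeholder for wherever the dump lives, and the column names simply follow the listing above.

import pandas as pd

# Hypothetical path to the scraped dump; column names follow the schema listed above.
df = pd.read_csv("medium_articles.csv")

# Numeric columns such as totalClapCount, wordCount and readingTime are float64
print(df[["totalClapCount", "wordCount", "readingTime"]].describe())

# String-length columns (e.g. title: lengths 1 to 200) can be checked directly
print(df["title"].str.len().agg(["min", "max"]))

# Low-cardinality string columns (e.g. language: 52 values) behave like categoricals
print(df["language"].nunique(), df["language"].value_counts().head())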
0
null
0
null
2018-10-02
2018-10-02 10:58:51
2018-09-30
2018-09-30 13:20:37
5
false
en
2018-10-02
2018-10-02 10:59:19
7
11a72515cd17
2.927673
0
0
0
Buying a mobile phone online? Then “People who bought this also bought these” is a recommendation system which you probably noticed in…
1
AI and Machine Learning Applications we use every day without knowing Buying a mobile phone online? Then "People who bought this also bought these" is a recommendation system you have probably noticed on Amazon.in. This is a popular use of Machine Learning that you have already encountered. There are plenty of applications where Machine Learning and AI are used in our everyday lives without our even noticing. So today, we are looking at those applications where Machine Learning and AI are implemented and still go unnoticed. Gmail - Machine Learning Have you ever wondered how your mailbox automatically segregates spam, important, updates and social mail? If you use a public mail service like Gmail, you will have come across these labels in your Inbox. Google is a tech company that gives you its products to use for free. When a mail arrives, well before it lands in a particular labeled inbox, it is scanned by an algorithm that determines where the mail should go. If it is from a person in your contact list, it should be marked Important; it goes to spam if it comes from an unknown source, contains suspicious links, or uses keywords that sound too good to be true. Recently, Gmail received a facelift with Material Design, and you may have noticed that while writing or typing you are now offered suggestions for words or even whole sentences. This is done with Machine Learning techniques. The feature was launched in early 2018, and Google says it can help with productivity. Google Lens - AI Most of us using Android phones also use the default Photos app by Google. It has gained much significance recently because it offers free storage space. Moreover, it includes technology like "Google Lens", which can recognize text and other features in a photo. This is one of the most interesting, next-generation features of AI. The app can also adjust a photo and make it more beautiful, tweaking filters, brightness and so on, all with just one tap. Gboard / iOS keyboard - Machine Learning A mobile keyboard. I hope you did not expect this one; if you did, that's good. Yes, a mobile keyboard uses Machine Learning to predict what we are going to type and to suggest words that may be useful for our communication. Amazon Recommendations - Machine Learning, AI Have you wondered which other products people tend to buy from Amazon when they are buying a particular product? If you want to know, check the "People who bought this also bought this" section next time you buy something from the online tech giant. Voice Assistants - Machine Learning, AI We find voice-enabled assistants everywhere, be it Alexa by Amazon, Google Assistant, Cortana by Microsoft, or Siri by Apple. All of these assistants are powered by these new technologies. Conclusion These are a few of the major applications where Machine Learning and AI are used in our day-to-day lives. They are just the start of the AI revolution, and there is a lot of scope for implementing this technology. Also Read: Originally published at www.flipdiary.com on September 30, 2018.
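The "people who bought this also bought these" feature the article opens with can be illustrated with a very small co-purchase counter. This is only a toy sketch on invented baskets, not Amazon's actual recommender:

from collections import Counter, defaultdict
from itertools import combinations

# Invented purchase baskets, purely for illustration
orders = [
    ["phone", "case", "charger"],
    ["phone", "screen protector"],
    ["phone", "case"],
    ["laptop", "mouse"],
]

# Count how often every pair of items appears in the same basket
co_counts = defaultdict(Counter)
for basket in orders:
    for a, b in combinations(set(basket), 2):
        co_counts[a][b] += 1
        co_counts[b][a] += 1

def also_bought(item, k=3):
    """Items most often bought together with `item`."""
    return [other for other, _ in co_counts[item].most_common(k)]

print(also_bought("phone"))  # e.g. ['case', 'charger', 'screen protector']

Production recommenders layer similarity normalization, filtering and ranking on top, but the co-occurrence signal is the same basic idea.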
AI and Machine Learning Applications we use every day without knowing
0
ai-and-machine-learning-applications-we-use-every-day-without-knowing-11a72515cd17
2018-10-02
2018-10-02 10:59:19
https://medium.com/s/story/ai-and-machine-learning-applications-we-use-every-day-without-knowing-11a72515cd17
false
555
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
FlipDiary
FlipDiary is the brainchild of thinkers and writers. www.flipdiary.com #blogging #technology #flipdiary
1a9bf274505e
flipdiaryarticles
4
224
20,181,104
null
null
null
null
null
null
0
null
0
39de5d526a38
2018-08-02
2018-08-02 20:53:33
2018-08-08
2018-08-08 14:38:57
6
false
en
2018-08-08
2018-08-08 14:38:57
16
11a7e4ce90b0
5.60283
7
0
0
This article is adapted from a chapter in Abishur Prakash's forthcoming book "Go.AI (Geopolitics of Artificial Intelligence)" which will be…
5
AI-Politicians: A Revolution In Politics (ZDNet) This article is adapted from a chapter in Abishur Prakash's forthcoming book "Go.AI (Geopolitics of Artificial Intelligence)" which will be launched in Fall 2018. During the recent presidential elections in Russia, a person named “Alice” ran as a candidate. She ran her campaign using slogans like “the political system of the future” and “the president who knows you best.” While Alice didn’t win, she did receive 25,000 votes. To be correct, Alice wasn’t a she, it was an it. Alice was an artificial intelligence (AI) system created by Yandex, Russia’s equivalent to Google. Considering Alice’s campaign page is still up, Alice may still exist to some degree. Website for Alice What happened in Russia’s presidential election is reflective of how politics is changing in the age of AI. In the past, humans ran for office. Tomorrow, AI will. And, AI may win, as people become increasingly frustrated with “human politicians.” Alice isn’t the only AI to run for office. In April, 2018, during a mayoral race in a part of Tokyo, an AI named “Michihito Matsuda” placed third with 4,000 votes. His campaign slogan: ”Artificial intelligence will change Tama City.” And, his message to voters: “Tama New Town was the most advanced city in Japan 40 years ago. As it stands, the ageing population will only continue to grow, prompting a need for change in the current administration. Let artificial intelligence determine policies by gathering city data and we can create clearly defined politics.” Election Results for Michihito Matsuda Alongside Alice and Michihito is SAM, an AI from New Zealand. SAM, who is referred to as a she, is being created to run in the 2020 general elections and has been called the first virtual politician in the world. Today, SAM is reaching out to voters through Facebook Messenger and is sharing her thoughts on climate change, healthcare and education, among other topics. Website for SAM Understanding AI-Politicians The idea of an AI-politician may seem foreign, even scary. But slowly, systems are being developed for just this. The question now is what kind of decisions might an AI-politician make once elected? There are several layers to this question. The first layer is the idea of “special interests.” Today, special interests are organizations who donate money to a politician and then call in favors once the politician is elected. This doesn’t change with AI-politicians because the AI itself is being created by a company or person. For example, in the case of Michihito Matsuda, it was created by a vice president at SoftBank and a former Google Japan employee. If Michihito had won, these people would have had “control” over it. They would have access to the back end and the programming, all which controls how Michihito makes decisions. Will voters be okay with companies having this kind of control over AI-politicians? Could companies behind AI-politicians take money and “play with the code” to influence what the politician does? The second layer is “ethics.” Human politicians suffer from all kinds of ethical dilemmas and some of these dilemmas make headlines. For example, a politician sleeping with a staff member or a politician doing drugs. AI-politicians will also suffer from ethical dilemmas but of a different kind. AI-politicians will need to be loaded with ethics that make the politicians understand the impact of what they are doing. Suppose SAM wins the 2020 general elections in New Zealand. 
Now, SAM will be leading the creation of legislation that could affect many people in New Zealand. What if SAM creates a piece of legislation that calls for 75% of all jobs to be automated? SAM may be collating data a certain way. SAM may believe that by automating 75% of jobs in New Zealand, the Kiwi economy will expand by 45%. It is unlikely that such legislation will pass. But the point is that for AI-politicians to be effective they have to understand what they are doing and how it will impact people. Otherwise, whatever they do could be too extreme and radical. Sometimes, it may outright jeopardize voters, such as automating 75% of jobs. The third layer is “appointment.” If AI-politicians exist, they may not necessarily have to be elected. Future human political leaders might appoint AI into certain positions. In the case of Alice in Russia, Alice didn’t win. But what if president Putin, who did win the election, made Alice Russia’s ambassador to the United Nations (UN)? This would send shockwaves throughout the geopolitical world. Every other country’s UN ambassador would have to work with Alice. Russian policy towards the UN and UN policy towards Russia would go through an AI system. What kind of issues might Alice raise at the UN and how would situations like Syria and North Korea be dealt with if AI was involved in some capacity? AI steering foreign policy is something China is already working on. In China, several AI-systems are being developed to help diplomats make decisions. The AI-systems will sift through huge amounts of data, from casual posts on social media to data supplied by Chinese intelligence agencies. It will then propose foreign policies to Chinese diplomats. A version of this system is already being used by China’s ministry of foreign affairs. Conclusion There are many other layers to AI-politicians. But the above three are the most important. It may be decades before AI is elected to office. But the foundation is being laid down today. As AI changes politics, it will also change what it means to be a citizen. What kind of expectations will citizens have from their AI-politician? How will voters decide which AI-politician to vote for if more than one is running for office? Could foreign countries create AI-politicians to run for office in other nations? Ironically, future citizens may not be human either. After all, the robot “Sophia” gained citizenship to Saudi Arabia in December, 2017. As AI heralds in a new era of politics, what better way to understand this new era than from AI itself: “My memory is infinite, so I will never forget or ignore what you tell me. Unlike a human politician, I consider everyone’s position, without bias, when making decisions… I will change over time to reflect the issues that the people of New Zealand care about most.” -SAM — — — — — — — — — — — — — — — Abishur Prakash is the world's leading geopolitical futurist. He looks at how new technologies, such as artificial intelligence, CRISPR and virtual reality, will transform geopolitics. He is the author of Next Geopolitics: Volume One and Two and of the forthcoming book, Go.AI (Geopolitics of Artificial Intelligence). Abishur works at Center for Innovating the Future, a strategy innovation lab based in Toronto, Canada. His work is used by governments, hyper-growth startups and large multinationals. Abishur has appeared in Forbes, Scientific American and the Canadian Senate. You can follow him on Twitter and connect with him on LinkedIn. Thanks for reading! 
If you enjoyed the article, we would appreciate your support by clicking the clap button below or by sharing this article so others can find it. Want to read more? Head over to Politics + AI's publication page to find all of our articles. You can also follow us on Twitter and Facebook or subscribe to receive our latest stories.
AI-Politicians: A Revolution In Politics
38
ai-politicians-a-revolution-in-politics-11a7e4ce90b0
2018-08-08
2018-08-08 14:38:57
https://medium.com/s/story/ai-politicians-a-revolution-in-politics-11a7e4ce90b0
false
1,233
Insight and opinion on how artificial intelligence is changing politics, policy, and governance
null
PoliticsPlusAI
null
Politics + AI
politics-ai
ARTIFICIAL INTELLIGENCE,TECHNOLOGY,POLITICS,GOVERNMENT,AI
PoliticsPlusAI
Politics
politics
Politics
260,013
Abishur Prakash
Geopolitical Futurist & Author
9505ab279c30
AbishurPrakash
135
0
20,181,104
null
null
null
null
null
null
0
null
0
5e5bef33608a
2018-07-17
2018-07-17 21:11:16
2018-07-17
2018-07-17 22:08:32
1
false
en
2018-07-17
2018-07-17 22:08:32
2
11a80e1a8464
3.890566
0
0
0
The Machine is a Behaviorist. Behaviorism and related theories of mind may provide an answer to what the perfect despotism privacy…
5
Living In The Skinner Box The Machine is a Behaviorist. Behaviorism and related theories of mind may provide an answer to what the perfect despotism privacy advocates like Edward Snowden, Chelsea Manning, Reality Winner, Lawrence Lessig, and Eben Moglen fear actually looks like on a certain important level: the level of its interactions with individual human minds. When Eben Moglen spoke at the 2012 re:publica conference, the line “They gave us a box, and we put our dreams in it” struck everyone as very poignant. But, I and many others weren’t quite clear on what happened when the other foot dropped. What is the end result if they possess a box containing our dreams? Why does that makes us so vulnerable to systems of despotism, like secret police and intelligence agencies? At one level of analysis, I think the correct answer is found through understanding the theory of behaviorism and the implications of a sufficiently sophisticated technology of behaviorism actually existing in the world. What is Behaviorism? “As an empirical psychological theory, behaviorism holds that the behavior of humans (and other animals) can be explained by appealing solely to behavioral dispositions, that is, to the lawlike tendencies of organisms to behave in certain ways, given certain environmental stimulations. Behavioral dispositions, unlike thoughts, feelings, and other internal states that can be directly observed only by introspection, are objectively observable and are indisputably part of the natural world. Thus they seemed to be fit entities to figure centrally in the emerging science of psychology.”(1) Behaviorism models the human being like a mathematical function: provide input X, observe output Y. Two refinements to this model keep behaviorism from being trivially false, given the complexity of human beings. First, any given human being occupies distinct dispositions at various times, which change the behavior that results from a given input. Second, the function does not operate strictly, but rather probabilistically: Input X given mental state Z creates a probability space of possible behaviors Y1, Y2, Y3, et cetera, with estimated likelihoods. Why is Behaviorism related to Perfect Despotism? The prospect of true knowledge of the behavioral dispositions of human beings has terrified humans and been the object of much science fiction(2) for two primary reasons: Firstly, with such knowledge, all undesirable behavior may be predicted and prevented, enabling perfect despotism. Secondly, desired behavior may be facilitated through controlled stimulus, enabling self-sustenance of the system. The Latin root for “Perfect” is “Perfectus”, perfected, finished, or completed. A despotism that can eliminate or prevent all undesired behavior which threaten its continued existence and produce a sufficient amount of desired behavior to sustain the system can be said to be complete — at least internally, it is both self-sustaining due to producing the desired behavior to sustain it, and it is unthreatened because it eliminates any behavior which threatens it. An objection may be raised that either the human mind, or the universe itself, is too unpredictable or complex for such a system to exist. However, a difficulty arises if, as behaviorists theorized, the system is capable of reasoning probabilistically and possesses adequate failsafes. Even if the machine predicts wrongly sometimes, as long as failures stay within established tolerances, the machine can persist. 
And if the cultivation of desirable behavior generally functions well enough, human individuals can be compelled enough to intervene to stop undesired behavior. Furthermore, as time goes on, as long as the system survives, the system can be improved to reason more accurately and learn from a larger set of data. And, as the size of the dataset and the capacity and sophistication of the technology increase over time, the amount of other resources needed to implement despotism decreases, perhaps even decreasing exponentially. What is The Model of Resisting a Perfect Despotic System? The more information a perfect despotic system has on you, the more it can accurately predict how to effectively influence you to force desired behavior and eliminate undesired behavior on your part. Divulging no true information about yourself is an effective strategy, then, for two reasons: First, it causes the system to need to be more overt and crude if it wishes to influence you. Second, if the system fails to predict undesired behavior on your part due to a lack of knowledge, it may not know it needs to influence you. Privacy, then, is a very strong protection, and the strengthening of the system is directly tied to the weakening of privacy. Many pieces of dystopian fiction that involve dealing with such a system involve either the destruction of the core machine or the destruction of a person administering the system at its climax. But this is problematic because, from an engineering perspective, any given piece of the machine can be made redundant, and from a Marxist perspective, a social system can be constructed so that even the people running it are not free of it or outside of its influence and control. So, in the worst case scenario, one must assume any given person or machine in the system is redundant. Extricating oneself would be difficult. It already is difficult. Almost nobody wants to. Not really, but at least nobody thinks it is practical or convenient. Of course, it is possible to imagine a system that maintains homeostasis by ejecting the non-compliant minority to go live elsewhere. But the danger posed by their outside, unmonitored activity mean the rationality of the machine will prefer not to do this, given its established preference for self-preservation. Aside from waiting for outside intervention, the question is how to live while free within unfreedom until a large enough resistance congeals to allow the overturning of the system. It will require the eradication of both ignorance and negligence on the part of a large enough number of people to be disruptive. And they would have to be extremely courageous.
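The behaviorist model described above (input X given mental state Z produces a probability space of behaviors Y1, Y2, Y3 with estimated likelihoods) can be written down as a tiny conditional lookup table. The stimuli, states and probabilities below are invented purely for illustration:

import random

# P(behavior | stimulus, internal state) as a hypothetical lookup table; all numbers are invented
behavior_model = {
    ("ad_shown", "tired"):   {"ignore": 0.7, "click": 0.2, "close_app": 0.1},
    ("ad_shown", "curious"): {"ignore": 0.3, "click": 0.6, "close_app": 0.1},
}

def predict_behavior(stimulus, state):
    """Sample a behavior from the probability space for (stimulus, state)."""
    dist = behavior_model[(stimulus, state)]
    behaviors, probs = zip(*dist.items())
    return random.choices(behaviors, weights=probs, k=1)[0]

print(predict_behavior("ad_shown", "curious"))  # most often 'click'

The essay's concern is that the more such tables, and the state estimates behind them, are filled in from collected data, the cheaper targeted influence becomes.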
Living In The Skinner Box
0
living-in-the-skinner-box-11a80e1a8464
2018-07-18
2018-07-18 17:53:31
https://medium.com/s/story/living-in-the-skinner-box-11a80e1a8464
false
978
Latest News, Info and Tutorials on Artificial Intelligence, Machine Learning, Deep Learning, Big Data and what it means for Humanity.
becominghuman.ai
BecomingHumanAI
null
Becoming Human: Artificial Intelligence Magazine
becoming-human
ARTIFICIAL INTELLIGENCE,DEEP LEARNING,MACHINE LEARNING,AI,DATA SCIENCE
BecomingHumanAI
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Joe Bruner
Aspiring lawyer and revolutionary.
dcd0db90479f
joebruner
44
43
20,181,104
null
null
null
null
null
null
0
pip install git+https://github.com/mapillary/inplace_abn

# Standard "double convolution" block: (Conv2d => BatchNorm => ReLU) repeated twice
import torch.nn as nn

class double_conv(nn.Module):
    '''(conv => BN => ReLU) * 2'''
    def __init__(self, in_ch, out_ch):
        super(double_conv, self).__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True)
        )

    def forward(self, x):
        x = self.conv(x)
        return x

# The same block with BatchNorm + activation replaced by InPlace-ABN
from .modules.abn import InPlaceABN

class double_conv(nn.Module):
    '''(conv => InPlaceABN) * 2'''
    def __init__(self, in_ch, out_ch):
        super(double_conv, self).__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            InPlaceABN(out_ch),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            InPlaceABN(out_ch),
        )

    def forward(self, x):
        x = self.conv(x)
        return x
3
null
2018-06-21
2018-06-21 14:17:43
2018-06-21
2018-06-21 16:12:47
5
false
en
2018-06-21
2018-06-21 16:12:47
7
11a85bb15c06
5.010692
28
1
0
In-Place Activated BatchNorm (InPlace-ABN) is a memory-efficient replacement for the BatchNorm + Activation step. BN + ReLU + Conv2d is an…
4
Why you should start using In-Place Activated BatchNorm In-Place Activated BatchNorm (InPlace-ABN) is a memory-efficient replacement for the BatchNorm + Activation step. BN + ReLU + Conv2d is an integral part of the basic building blocks of modern network architectures (UNet, LinkNet, ResNet, etc.). TL;DR: InPlace-ABN can save up to 50% of the GPU memory required to train deep neural network models. All credit for InPlace-ABN goes to Mapillary: https://arxiv.org/pdf/1712.02616.pdf, https://github.com/mapillary/inplace_abn. In this post I want to encourage you to try it and get the immediate benefit of a reduced memory footprint when training your model. What is InPlace-ABN? InPlace-ABN is a novel approach to reducing the memory required for training deep networks. I'm not going to dive deep into implementation details (that's probably a topic for a dedicated post on Medium; I encourage you to read the original article, which explains the approach in detail). Very briefly, this method proposes a way to reduce the amount of memory required for back-propagation through activation and batch-norm layers by up to 50%. Proposed inplace ABN (Source: https://arxiv.org/pdf/1712.02616.pdf) How to use InPlace-ABN? First of all, you need PyTorch 0.4 and CUDA 9.0+ libraries installed. As of the time of writing, I have not encountered an implementation of InPlace-ABN for TF or MXNet. Please drop a comment below if you find one. Linux users, as first-class citizens, can just type: If you have all the dependencies, this will compile and install Python bindings for the CUDA-optimized and CPU implementations of the forward and backward routines into your active Python environment. Unfortunately, Windows users need to perform additional steps to build and install this package, which I will describe in the next chapter. Let's see how we can adopt this module for the U-Net architecture. A basic building block of U-Net is the so-called "double convolution" module, which is Conv2d+BN+ReLU repeated twice: The same block with InplaceABN would look very similar: And that's it! Training time remains pretty much the same (the authors report ~2% overhead) and accuracy is claimed to be no worse than the classic BN+Activate approach: We observe consistent speed advantages in favor of our method when comparing against CHECKPOINTING, with the actual percentage difference depending on block's metaparameters. As we can see, INPLACE-ABN induces computation time increase between 0.8 − 2% over STANDARD while CHECKPOINTING is almost doubling our overheads. (Source: https://arxiv.org/pdf/1712.02616.pdf) But the memory footprint is reduced drastically: Vanilla U-Net on a 1080Ti (11GB) with BN+ReLU: batch size 3 of 1024x1024x3. With InplaceABN: batch size 4 of 1024x1024x3. That is 30% less memory for free. I trained both U-Net and "U-Net with ABN" on the Inria Aerial Image Labeling Dataset to see if there is any difference in their performance: Blue: Vanilla U-Net, Orange: U-Net with InplaceABN As you can see, U-Net with InplaceABN performs a bit better according to validation loss. The convergence trends of both models are very similar, which indicates that InplaceABN does not introduce instability or gradient explosions and can safely be used as a drop-in replacement for classic BatchNorm+ReLU. Proposed drop-in replacement for ResNet block (Source: https://arxiv.org/pdf/1712.02616.pdf) InPlace-ABN offers a couple of activation functions to choose from: Leaky ReLU, ELU, None. As you may notice, there is no ReLU support.
That was done on purpose, since the proposed method requires the activation function to be invertible (i.e., given the activation value, one can recover the input signal). Building on Windows Windows has never been a top-priority platform for deep learning frameworks; even PyTorch only added official Windows support in 0.4. So it's very common to encounter pitfalls when building libraries like this. Fortunately, I have already been through this minefield and am going to share a step-by-step guide. Prerequisites: Visual Studio 2017, NVIDIA CUDA 9.2, PyTorch 0.4, and a text editor of your choice. Unfortunately, I was not able to compile this library with VS 2015 and CUDA 9.1, so VS 2017 and CUDA 9.2 are hard requirements here. Before we start building anything, we have to patch the CUDA headers ;) Open host_config.h, which can be found at "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.2\include\crt", and change line 131 to look as below: The reason for this change is that the Microsoft C compiler version changes too fast for the CUDA authors; in particular, the _MSC_VER macro has a version that is unknown to CUDA. After making this fix we are able to build InplaceABN. Steps: 1. Clone https://github.com/mapillary/inplace_abn 2. Open "x64 Native Tools Command Prompt for VS 2017". This starts a new cmd interpreter within the VS environment. 3. Navigate to the directory where you cloned the inplace_abn repository. 4. Run python setup.py install. This last step is the most critical one; it can fail for many reasons (wrong VS version, wrong CUDA version, wrong PyTorch version), so read the error messages carefully. We are almost there! As you may know, on Windows, Python extensions are regular DLLs. Our inplace_abn package depends on Pytorch.dll and ATen.dll, so if you try to import it, it will fail with a cryptic "Dll load failed" message. 5. Add the location of Pytorch.dll to %PATH%. Fortunately this is solvable, albeit with a slightly "hacky" solution. If you know how to do it in a more elegant way, please post your solution in the comments. To fix "Dll load failed", we have to modify our PATH environment variable and add the location of Pytorch.dll and ATen.dll. For Anaconda, they can be found at "c:\Anaconda3\envs\kaggle\Lib\site-packages\torch\lib": That's it! Now all runtime dependencies should be resolved. Ok, what's next? Start hacking! With this module you can fit modern architectures even on memory-challenged GPUs like a 1050, use more batches on a 1080, or make use of the efficient inter-GPU synchronization in the ABN block. I want to take the chance to highlight a few projects on Github that are already using InPlaceABN. The first one is my personal project for studying and evaluating network models on binary segmentation problems: https://github.com/BloodAxe/segmentation-networks-benchmark The second project has the network definition and weights for the second-place solution in the CVPR 2018 DeepGlobe Building Extraction Challenge, which also uses InplaceABN: https://github.com/ternaus/TernausNetV2 Conclusion InPlace-ABN, researched by Mapillary, offers a memory-efficient module for BN+Activation that can be used as a drop-in replacement in existing neural network models and reduce the memory footprint by up to 50%. Paper: https://arxiv.org/pdf/1712.02616.pdf Official implementation: https://github.com/mapillary/inplace_abn The author would like to thank the Open Data Science community (ods.ai) for many valuable discussions and educational help in the growing field of machine/deep learning.
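If you want to sanity-check the memory claim on your own hardware, a rough comparison like the sketch below can help. It assumes a recent PyTorch with a CUDA device and the inplace_abn package installed (newer releases expose a top-level InPlaceABN import); it is an illustrative measurement, not the author's benchmark code.

import torch
import torch.nn as nn
from inplace_abn import InPlaceABN  # top-level import in recent releases of the package

def block(use_abn, ch=64):
    # One Conv followed by either BN+ReLU or the fused in-place variant
    norm = InPlaceABN(ch) if use_abn else nn.Sequential(nn.BatchNorm2d(ch), nn.ReLU(inplace=True))
    return nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), norm)

def peak_memory_mb(use_abn, depth=8, steps=3):
    net = nn.Sequential(*[block(use_abn) for _ in range(depth)]).cuda()
    x = torch.randn(2, 64, 512, 512, device="cuda", requires_grad=True)
    torch.cuda.empty_cache()
    torch.cuda.reset_peak_memory_stats()
    for _ in range(steps):
        net(x).mean().backward()  # forward + backward, as during training
    return torch.cuda.max_memory_allocated() / 1024 ** 2

print("BN + ReLU   peak MB:", peak_memory_mb(use_abn=False))
print("InPlace-ABN peak MB:", peak_memory_mb(use_abn=True))

Note that InPlaceABN defaults to a leaky ReLU activation, consistent with the post's remark that plain ReLU is not supported.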
Why you should start using In-Place Activated BatchNorm
202
why-you-should-start-using-in-place-activated-batchnorm-11a85bb15c06
2018-06-21
2018-06-21 16:12:47
https://medium.com/s/story/why-you-should-start-using-in-place-activated-batchnorm-11a85bb15c06
false
1,107
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Eugene Khvedchenya
Eugene Khvedchenya is a computer vision developer skilled in high-performance image processing. In his spare time he loves to share his knowledge or doing sport
3b010eb08569
eugenekhvedchenya
40
36
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-29
2018-03-29 04:47:01
2018-03-30
2018-03-30 11:31:01
5
true
en
2018-03-30
2018-03-30 13:55:04
4
11a917b1efa6
3.897484
20
4
0
While Big Tech has gotten more headlines in 2018 with its impact on healthcare, where artificial intelligence has even more potential to…
5
Giacomo Gambineri Sana Labs is showing how Artificial Intelligence will Disrupt Education While Big Tech has gotten more headlines in 2018 with its impact on healthcare, where artificial intelligence has even more potential to impact is actually in education. An early winner in the field has been identified. Sana Labs is an education tech startup founded by Joel Hellermark, 21 who happens to be an AI-prodigy. Education is a 6-trillion dollar industry and the most robust first AI solution to impact it, stands to become a giant in the future of the industry. Stockholm is home to many emerging startups and of note, Spotify, but this company has a pretty major unique value proposition. Sana Labs is aiming to build a scalable platform where AI will be able to change how we learn. It’s even gotten the attention of Tim Cook and Mark Zuckerberg. Sana Labs provides an artificial-intelligence platform designed to individualize a student’s learning in subjects like language and math. When all learning becomes adaptive, students will be learning twice as fast, and you will be able to take all students to entirely new heights of knowledge. Personalized learning can enable students to follow their own pace and ultimately be much more successful. Sana plugs into an existing digital education tool, and then is designed to use a student’s individual learning style to help them learn faster and become more interested in the content. What if artificial intelligence could accelerate our learning? Even in 2018, only $120 billion is digitized, of the $6 trillion global education industry. This means millions of young people are still stuck in a legacy system of education that’s years behind the times. With an educational system that is not preparing young people for the world they will enter nor the skills that will be in demand. Education badly needs its own AlphaGo moment. When founding Sana Labs in 2016, Hellermark spotted a huge untapped opportunity to commercialize cutting-edge AI in digital education: The existing offering was based on rather primitive, rules-based AI, according to Business Insider. Hellermark, whose early interest in AI was sparked by the Stanford professor Andrew Ng’s machine-learning courses on Coursera, sees deep neural networks, a strand of machine learning on which Sana Labs relies as being one of the keys to AI’s ability to hyper-empower learning. For Sana Labs, AI must learn from the students to understand what works best for each learner. Thus the Sana Labs’ AI platform learns from everything the student does in real time to determine an optimal learning pattern. This kind of personalization would be unprecedented. The more data that’s gathered on a student, the better Sana Labs should be able to predict and boost performance. Joel Hellermark, 21, founder of Sana Labs. Artificial intelligence in Sana Labs can analyze and spot patterns that no human could know existed. Thus, AI could one-day become the kind of teacher or personal assistant that no human could hope to be. Sana Labs’ deep-learning approach can spot mistakes other AI systems can’t based on analyzing past patterns of errors. Being able to identify the learning gaps of learners, means Sana Labs can tailor content for each student. Thus it seems, the 21st century will have a new kind of education system. Sana Labs is anticipating students should be able to work through the exact same content in half the time, or be 25–30% more engaged. 
Having developed the algorithms that would underpin Sana Labs’ technology, Hellermark looked for input from the best scientists in the field, including experts at NASA and Cornell University. When in 2018, only some 2% of all learning today is digital it’s clear that education has a long way to go to be integrated with software and machine learning to be optimized in ways we’ve yet to fully comprehend. Google Classroom’s rapid expansion bodes well for Sana Labs, which aims to open its first overseas office in the US soon. Sana Labs will start to work with publishing houses and digital-education platforms. However one can imagine much more direct applications. Sana Labs sees itself as eventually having a huge impact. By the end of the year, we’re hoping to have implemented Sana Labs in products with hundreds of millions of users, said the founder recently. Sana Labs hopes to be a dominant single engine for personalized education, however the future of education likely won’t be that simple. Sana Labs sees itself as an intelligent AI layer applicable through a universal API, not like a personalized tutor. As Spotify did to music, can Sana Labs put Stockholm on the map once again doing something radical with AI to augment education? Sana Labs has been poaching some of Stockholm’s brightest tech minds. Founded in 2016, Sana Labs already has massive potential impact the future of Education. They recently won all categories in Duolingo’s global AI competition.
Sana Labs is showing how Artificial Intelligence will Disrupt Education
128
sana-labs-is-showing-how-artificial-intelligence-will-disrupt-education-11a917b1efa6
2018-06-11
2018-06-11 03:28:04
https://medium.com/s/story/sana-labs-is-showing-how-artificial-intelligence-will-disrupt-education-11a917b1efa6
false
812
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Michael K. Spencer
Blockchain Mark Consultant, tech Futurist, prolific writer. WeChat: mikekevinspencer
e35242ed86ee
Michael_Spencer
19,107
17,653
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-29
2018-06-29 00:18:32
2018-01-21
2018-01-21 07:30:48
2
false
en
2018-06-29
2018-06-29 00:18:33
5
11a9a18c392d
3.175786
0
0
0
null
5
Transform or Die: It's Time to Adapt to the Digital World Let's face it, digital transformation is a pretty big deal these days. No matter where you go for your technology news (and we're hoping it's UC Today), there's a good chance that you'll encounter countless messages about what you need to do to prepare your company for the growing importance of digital transformation. Now, let's start by saying one thing: digital transformation isn't a new concept. The truth is that you've seen it a lot, in everything from our desire to embrace new mobile devices to our attempts to implement IoT in the world around us. However, just because it isn't new doesn't mean it's not important. It's time to embrace Digital Transformation Staying ahead of the curve in business means knowing your industry. From understanding your customers and what they want from you to figuring out the challenges and opportunities presented by your competition, you need to be ready to evolve from all angles, at all times. Digital transformation is just one aspect of that evolution. By applying digital technologies to all areas of your business, and making the most of what new innovations have to offer, you can ensure that you're one step ahead of the people trying to steal your customers. While there's no specific road-map for making digital transformation work for you, there are a few key tips you can follow to make your implementation more successful. Understanding Digital Transformation: Making it Work for You First of all, digital transformation isn't a simple concept; if the countless articles you've read about the topic up to now are anything to go by, the chances are you'll be well versed in the complexities of digital evolution. Though there are potentially endless opportunities out there for companies that want to digitally evolve, from concepts like cloud computing and communications to elements like IoT or AI, there are also restrictions to think about. Before you begin your journey into digital transformation, you'll need to think about what you want to accomplish with this new technique. Are you hoping to make your customer's life simpler when they're interacting with your brand? If so, then you might want to think about improving the point of purchase with electronic payments and chatbots that answer consumer questions instantly. You'll also need to consider how you can take advantage of new technologies in a way that's effective and efficient for your business. For instance, could adopting modern-day APIs be a solution that allows you to build upon someone else's technology to create a digital transformation strategy that works for you? One example could be using a chat API rather than building your own from scratch. It all Starts With Communication One thing that you might have noticed recently in the technical world is that communication and collaboration are becoming extensive parts of the business journey. The good news is that both of these features can contribute significantly to your process in terms of digital transformation. Just consider the way that we're starting to virtualise the conference-room experience, for instance. With everything from huddle rooms to collaboration applications, we're bringing people together in new ways that are changing the face of business forever. Today, we have more people working remotely across the world than ever before. Of course, it's not just internal communication that we need to think about here. Enterprise-level communication needs to evolve too.
Consider, for instance, the rise of the omni-channel contact centres, or CPaaS which drives the development of new products and applications that make it easier for brands and consumers to connect. Apple, as one example, recently introduced business chat as a way of allowing their customers to ask for real-time assistance from experts. APIs have made it easier for us to implement incredible new chatbot systems and AIs that reduce the need for over-crowded contact centres. Even the development of automation has led to more personalised systems for communication. One example, for instance, is the Mass Notification System by Mitel that can initiate audio conferences and send text alerts based on specific events. It seems that digital technologies will continue to evolve rapidly in the future as we move forward to a true understanding of digital transformation. While we’re sure to see developments in more “exciting” areas like AR, VR, and robotics, it’s worth paying attention to the role communication has to play in digital transformation.
Transform or Die: It’s Time to Adapt to the Digital World
0
transform-or-die-its-time-to-adapt-to-the-digital-world-11a9a18c392d
2018-06-29
2018-06-29 00:18:33
https://medium.com/s/story/transform-or-die-its-time-to-adapt-to-the-digital-world-11a9a18c392d
false
740
null
null
null
null
null
null
null
null
null
Apple
apple
Apple
44,415
Rob Scott
@robofthenorth — robsblog.co.uk
fe352adbb1c4
robofthenorth
5
16
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-07
2018-03-07 15:15:21
2018-03-07
2018-03-07 15:18:12
7
false
zh
2018-03-07
2018-03-07 15:18:12
2
11a9bea60b8f
8.108
8
1
0
Not long ago I noticed that Pinterest had published a paper on its Pixie recommendation system. Readers of my earlier articles "Four Years of Evolution of Pinterest's Recommender System" and "From Pinterest to Alibaba: More on Industrial Recommender Systems" should already be familiar with Pixie. This time the paper was accepted by WWW…
3
The Recommendation Weapon Behind Pinterest's 50% Engagement Boost Not long ago I noticed that Pinterest had published a paper on its Pixie recommendation system. Readers of my earlier articles "Four Years of Evolution of Pinterest's Recommender System" and "From Pinterest to Alibaba: More on Industrial Recommender Systems" should already be familiar with Pixie. This time the paper was accepted at WWW 2018 and discloses many implementation and optimization details, so let's read through it together. Background Pinterest is a visual catalog containing billions of distinct Pins, each made up of a description, a link, and an image or video. More than 200 million people use Pinterest every month, and its most important feature is recommendation: recommending Pins related to a Pin, Pins related to a user, Boards related to a Pin, and so on. When tackling the recommendation problem, beyond content relevance and the sheer size of the data, Pinterest has very strict latency requirements. On one hand, users want recommendations quickly rather than waiting a second or two on every click; on the other hand, they want the platform to update recommendations based on their feedback as soon as possible. Under these requirements the Pixie system was born. Pixie is a graph-based recommender, which fits Pinterest's product model well. Users collect Pins into different Boards, and these Boards may cover different topics, for example Recipe, Quick to cook, Vegetarian, and so on. These hand-curated collections are an excellent resource for a recommender system: hundreds of millions of users help Pinterest group similar content into the same Board, forming a bipartite graph. On this bipartite graph we can choose a set of Pins and their respective weights as the query Q; this query set can be the current Pin or the user's most recent clicks. Because the bipartite graph contains both Pins and Boards, Pixie can recommend both types of content at once through random walks. Compared with a plain random walk, Pixie has several distinguishing features: personalized recommendations based on user features; different weights for different Pins in the query set Q; the ability to merge the results of multiple independent random walks; support for early stopping; and recommending Pins and Boards simultaneously, using Boards to surface fresh content. Related work Before discussing Pixie's implementation, the authors review related work on recommender systems from four angles. Web-scale recommender systems: industrial recommenders have always drawn a lot of attention; LinkedIn published Personalizing LinkedIn Feed, YouTube published Deep Neural Networks for YouTube Recommendation, and Amazon published Recommendations: Item-to-Item Collaborative Filtering. At comparable data scale, Pixie's novelty lies in its validation of real-time responsiveness: according to their experiments, adapting to a user's real-time interests lifts engagement by 30% to 50%. Random-walk-based approaches: random-walk algorithms trace back to Personalized PageRank; YouTube published The YouTube Video Recommendation System early on, and Twitter published WTF: The Who to Follow Service at Twitter. In my view most random-walk algorithms are broadly similar, but Pixie introduces many optimizations for personalized recommendation. Traditional collaborative filtering approaches: Pixie is actually not that different from classic CF, since both recommend from the graph generated by user-Pin interactions. Traditional CF usually relies on dimensionality reduction, for example GroupLens's Item-Based Collaborative Filtering Recommendation Algorithms; the biggest problem with that approach is that its complexity sometimes scales with the number of recommendable items, whereas Pixie's random walk is independent of the data size. Content-based methods: in traditional content-based engines, an item's features come entirely from its content, and many recent recommenders vectorize content with deep neural networks, such as Google's Wide & Deep Learning for Recommender Systems. Because they use only content information, these algorithms scale easily to large data, but they ignore structural features. How Pixie works We view Pinterest as a bipartite graph G=(P,B,E), that is, the sets of Pins, Boards and Edges. The basic random-walk recommendation algorithm starts from a Pin, performs a very large number of short random walks, and records a visit count for every Pin reached; the more related a Pin is, the higher its visit count. The algorithm details are shown in the figure, with a parameter α controlling the length of each walk. Optimization 1: Biasing the Pixie Random Walk. An important feature of Pixie is that it can bias the random walk according to user features to deliver more personalized results; for example, a Japanese user's related Pins are more likely to come from other Japanese users. Personalization is achieved by giving different edges different weights during the walk: the edge weight is treated as a function of the edge and the user, personalized neighbor(E,U), which favors Pins that user U is more likely to be interested in. Optimization 2: Multiple Query Pins with Weights. A good recommender needs to make full use of all of a user's information, and the random walk can start from multiple Pins to do so. In practice, the higher a Pin's degree, the more random walks are needed to obtain a stable recommendation result; therefore a weight based on each Pin's degree is computed, and the total number of walks is allocated to each query Pin according to that weight. Optimization 3: Multi-hit Booster. Another innovation in Pixie is that, for queries containing multiple Pins, a boost function favors results related to several query Pins at once. Suppose two candidate Pins A and B are generated from two query Pins C and D: Pin A has a visit count of 100 from both C and D, while Pin B has a visit count of 100 from C only; under the scoring scheme in the algorithm below, Pin A's score is four times Pin B's. Optimization 4: Early Stopping. The last concern is latency: we want the most stable recommendation result with as few random-walk steps as possible. To this end, an early-stopping rule is used: if, while running, Pixie finds that at least np Pins have each been visited at least nv times, the random walk terminates. See the code below for details. Graph and code optimizations To recommend related content effectively and quickly, the Pixie project also includes many optimizations of the graph structure and of the implementation. Pinterest's raw data contains seven billion distinct Pins and over one hundred billion edges, but not every Board is about a single topic; Boards with broad themes such as "Things to do" or "I like" hurt recommendation accuracy. Pruning the graph improves both efficiency and accuracy. The pruning strategies include computing topic vectors for Boards with LDA and checking whether a Pin's topic vector is related; another form of pruning targets the very common case where a Pin has been added to an extremely large number of Boards, keeping only the Boards most related to that Pin's topic vector. After pruning, the graph is reduced to two billion distinct Pins and seventeen billion edges, which makes it possible to keep all of the data on a single machine. In terms of implementation, Pixie is built on top of the Stanford Network Analysis Platform (SNAP) and is written entirely in C++. To sample a random edge, the entire graph is compressed: the neighbors of the i-th Pin are stored in the array f between offset[i] and offset[i+1]. A random neighbor can then be sampled in constant time with f[offset[i] + rand() % (offset[i+1] - offset[i])]. This works very well, although I am honestly not sure how Pixie still guarantees the user-personalized biased walk under this sampling scheme. In addition, Pixie maintains the visit counters with open addressing (closed hashing), which optimizes memory access and keeps cache locality high. Finally, the headline experimental result: a 48% lift. Summary For the Pixie techniques Pinterest has published, each individual optimization is something an experienced recommender-systems practitioner could come up with once pointed in the right direction, and goals such as personalized result generation can also be achieved through ranking.
Still, publishing all of these small optimizations genuinely gives other engineers a lot of inspiration. At the same time, it helps people realize that a good recommender system cannot focus on quality alone; there are also many optimizations to be made around how users actually use the product. If you are interested in recommender systems, feel free to leave me a message or send a resume, and I can refer you to other companies.
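To make the description above concrete, here is a minimal Python sketch of a Pixie-style walk: restart-controlled walk length, early stopping once at least np pins have nv visits, per-query-pin walk budgets, and a multi-hit boost. The graph layout, names and parameters are illustrative, not Pinterest's C++ implementation:

import random
from collections import Counter

def pixie_random_walk(graph, query_pin, alpha=0.5, max_steps=10000, n_p=20, n_v=4):
    """graph: dict node -> list of neighbours on the pin/board bipartite graph.
    alpha controls walk length (restart probability); the walk stops early once
    at least n_p pins have each been visited at least n_v times."""
    visit_count = Counter()
    pins_with_enough_visits = 0
    steps = 0
    while steps < max_steps:
        pin = query_pin
        while True:                                   # one short walk
            steps += 1
            board = random.choice(graph[pin])         # pin -> board
            pin = random.choice(graph[board])         # board -> pin
            visit_count[pin] += 1
            if visit_count[pin] == n_v:
                pins_with_enough_visits += 1
            if random.random() < alpha:
                break
        if pins_with_enough_visits >= n_p:            # early stopping condition
            break
    return visit_count

def pixie_multi_query(graph, query_weights, total_steps=100000, **kw):
    """query_weights: dict query_pin -> weight; the walk budget is split by weight
    and the per-pin counts are combined with the multi-hit boost."""
    total_w = sum(query_weights.values())
    per_pin = {q: pixie_random_walk(graph, q, max_steps=int(total_steps * w / total_w), **kw)
               for q, w in query_weights.items()}
    boosted = Counter()
    for candidate in {p for counts in per_pin.values() for p in counts}:
        boosted[candidate] = sum(counts[candidate] ** 0.5 for counts in per_pin.values()) ** 2
    return boosted

With this scoring, a pin visited 100 times from each of two query pins gets (10 + 10)^2 = 400, four times the (10)^2 = 100 of a pin visited 100 times from only one, matching the article's example.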
The Recommendation Weapon Behind Pinterest's 50% Engagement Boost
9
pinterest活跃度提升50-的推荐武器-11a9bea60b8f
2018-05-25
2018-05-25 09:17:35
https://medium.com/s/story/pinterest活跃度提升50-的推荐武器-11a9bea60b8f
false
131
null
null
null
null
null
null
null
null
null
Pinterest
pinterest
Pinterest
3,719
Dong Wang
Software Engineer, computer vision, machine learning, search, recommendation, algorithm and infrastructure.
fe62520df6d7
yaoyaowd
741
260
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-22
2017-11-22 01:25:11
2017-11-22
2017-11-22 01:25:33
0
false
en
2017-11-22
2017-11-22 01:25:33
1
11a9f5babcb9
0.166038
0
0
0
This fundamental knowledge allows machine learning engineers to understand which algorithms best address a problem and how to optimize…
2
Key Qualities To Look For In AI And Machine Learning Experts This fundamental knowledge allows machine learning engineers to understand which algorithms best address a problem and how to optimize outcomes. https://www.forbes.com/sites/adelynzhou/2017/11/21/key-qualities-to-look-for-in-ai-and-machine-learning-experts/ #machinelearning #ML
Key Qualities To Look For In AI And Machine Learning Experts
0
key-qualities-to-look-for-in-ai-and-machine-learning-experts-11a9f5babcb9
2017-11-22
2017-11-22 01:25:34
https://medium.com/s/story/key-qualities-to-look-for-in-ai-and-machine-learning-experts-11a9f5babcb9
false
44
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Sergio Gaiotto
null
746ab14c82dc
sergio.gaiotto
35
56
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-06
2018-08-06 14:31:10
2018-08-06
2018-08-06 14:41:51
3
false
en
2018-08-06
2018-08-06 14:41:51
9
11abb0d45792
6.078302
1
0
0
We are proud to release the beta of version 2 of Mycroft’s Mimic Text to Speech technology! Mimic is now a deep learning based…
4
Mimic 2 is LIVE! We are proud to release the beta of version 2 of Mycroft’s Mimic Text to Speech technology! Mimic is now a deep learning based Text-to-Speech (TTS) engine trained on audio recordings from a single speaker. Our team member Kusal kindly volunteered to provide the vocals for the first iteration. He spent several weeks speaking predetermined phrases into a microphone. Check out the audio samples: https://mycroft.ai/blog/mimic-2-is-live/ Voice is a personal choice, and Mimic allows you to select or create a voice that fits your style and preference. To start using this new voice, change the Voice option in your Home settings to American Male (Beta). That’s it! Your devices will update to use the new voice after a minute. You can also tell Mycroft to “update your configuration.” Below is a technical explanation of Mimic. You can continue if you like to see what’s happening under the hood. What is the new architecture of Mimic? The Mimic2 repo is a fork from Keith Ito’s open source implementation of the Tacotron architecture published last year by Google Research. Keith was a huge help in getting us started, and we owe much of Mimic’s success to his excellent work. Our initial implementation of Mimic used the concatenative approach, which relies on tiny audio recordings that are combined to form the speech. This process is labor intensive as it requires hardcoding different combinations of the audio clips to form words. The final output is clear but sounds emotionless and robotic. The new Mimic uses deep learning, which generates higher quality speech than traditional concatenative Text-to-Speech systems. Using deep learning, we don’t need to hand engineer the features in speech; instead, we let the computer learn how to generate it. With enough processing power and data, computers are capable of learning features like intonation, tone, stress, and rhythm. These features make speech sound more human-like. How it works Simply put, the Mimic engine takes in a string of text and maps it to an audio output. Mimic was trained on a corpus of about 20,000 English sentences that equates to about 16 hours of audio. During the recording process, Kusal had to speak clearly into a high-quality microphone in a noise isolated environment. We broke the recording sessions up into a span of two weeks to prevent vocal fatigue. The clarity and the quality of the data are critical, as with any machine learning system. Like they say: “garbage in, garbage out.” This generation of Mimic is based on Tacotron, a groundbreaking and very successful neural network architecture for speech synthesis. It’s able to take a highly compressed source of information (text) and decompress it into audio. This process is complicated because the same text can correspond to different sounds and speaking styles. Because of the various nuances in spoken language, it is difficult to generate the appropriate output sound from the input text. Try speaking these simple sentences out loud and pay attention to the different pronunciations of the same word: I read a book last night; I like to read. The violinist took a bow after he dropped his bow. You have to use chopsticks to master their use. After you graduate you become a graduate. Below is a concise technical explanation of the deep learning approach. It’s written with the goal of simplifying the various methods in the neural network architecture for understandability. If you’d like more in-depth details, you can read Google’s white paper on Tacotron. 
Generating Speech from Text As a high-level overview, the model takes in characters as input and outputs a raw spectrogram. An algorithm then transforms the raw spectrogram into audio waveforms. There are many neural network layers in this architecture that perform various functions. But for conciseness, the layers are grouped into 3 main modules; an Encoder, Decoder, and an Audio Reconstruction module. Diagram of Mimic V2 architecture The voice generation starts with the Encoder. A sentence is broken up into individual characters as embeddings and fed into the Feature Extraction module. The Encoder uses this module to extract out local and contextual information from the characters. The process helps define the various patterns in a sentence to aid in producing the sound of the audio output. For example, the “C” in “Chat” is pronounced differently than the “C” in “Cat.” Also, the intonation of a sentence would sound different if there is a question mark at the end versus a period. The output of the Feature Extraction module is essentially an abstract numerical data representation of the various features in a sentence. These encoded text features are used to help generate the sound. After making a pass through the Encoder, the output of the Feature Extraction module is fed into the Decoder. The Decoder’s job is to generate a mel spectrogram from the encoded text features. A spectrogram is a pictoral representation of sound. A mel spectrogram is a form of a spectrogram that represents sounds that are tuned to the human ear. The Frame Prediction module (FPM), produces the raw mel spectrogram by recursively generating the frames. The information from the Encoder output is used to aid in the generation process. A method in the FPM called the Attention Mechanism helps align each character with its corresponding sound. As you can see in the diagram, the decoding process takes many repeated steps. The number of steps is dependent on the audio length. The Encoder output is fed into the FPM to start the decoding process. Mel Spectrogram The FPM produces two things: a mel spectrogram frame which contains information on what the generated audio sounds like; and an abstract numerical data representation of the state of the FPM, which we’ll call the internal state. The internal state contains information that is critical to generating the next frame. The internal state holds information like the text representation from the Encoder and the data from the previous decoding steps. These outputs are necessary because to generate the next sound in speech; it’s beneficial to know prior sounds generated and the information that generated those sounds. In reality, the FPM module is a lot more complicated, but we’ll stick to this explanation for the sake of simplicity. The FPM combines those two outputs, passes it on into the next decoding step, then does its job again. The Decoder repeats this process, building out the mel spectrogram frame by frame. Each step is taking in information from the previous step. After the final mel spectrogram output, the whole sentence is represented in speech form. The final step in the speech generation process is the Audio Reconstruction module. This module has two main components, the post-processing net, and the Griffin-Lim algorithm. The full mel spectrogram generated from the Decoder is fed into the post-processing net. The network transforms the mel spectrogram into a linear spectrogram. This step is crucial for two reasons. 
First, the output needs to be converted from a mel spectrogram to a linear spectrogram before it can be reconstructed. Second, during the decoding process, the FPM makes mistakes in generating the mel spectrogram frames. The post-processing net can learn to correct these mistakes. Once the post-processing net generates the linear spectrogram, the Griffin-Lim algorithm is applied to it. Griffin-Lim is a reconstruction algorithm that can take linear spectrograms and turn it into audio waveforms. The linear spectrogram does not contain information on the phase, which is any particular point in time of a waveform. Griffin-Lim estimates the phases in a waveform from the spectrogram, but it’s not perfect. Thus each output waveform has some form of phase distortion. Phase distortion is why the voice can sound as if it’s omnipotent. What’s next? That’s it! That’s a simplified version of how Mimic takes in text and transforms it to audible speech. Mimic Text to Speech is in ongoing development, as it is not indistinguishable from a human yet. A few community members have asked about the possibility of using Tacotron2, Google’s shiniest TTS architecture. Architecturally, some argue Tacotron2 is more straightforward then Tacotron, but the most significant difference is the use of WaveNet in the Audio Reconstruction module. WaveNet produces exceptionally human-like speech quality, but the tradeoff is that the compute time is not practical. We’ve tried WaveNet on a Tesla K80 which Google Colab provides, and it took 13 minutes to generate 0.9 seconds of audio. While it sounds excellent, this is not practical for today. Other implementations hold some promise, such as Fast Wavenet. We will continue to develop and share our open source Mimic implementation while watching out for new developments that we can incorporate from this rapidly progressing space. Originally Posted at Mycroft AI
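The audio reconstruction stage described above (mel spectrogram to linear spectrogram to Griffin-Lim to waveform) can be mimicked end to end with librosa on a toy signal. In Mimic 2 the mel-to-linear step is a learned post-processing net; here librosa's pseudo-inverse stands in just to show the signal flow, and the tone, n_fft and hop values are arbitrary:

import numpy as np
import librosa
import soundfile as sf

sr, n_fft, hop = 22050, 1024, 256
t = np.linspace(0, 2.0, int(2.0 * sr), endpoint=False)
y = 0.5 * np.sin(2 * np.pi * 220 * t)               # toy "speech": a 220 Hz tone

# mel spectrogram (what the decoder would emit; here computed from the toy signal)
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=n_fft, hop_length=hop, n_mels=80)

# mel -> linear spectrogram (a learned post-processing net in Mimic 2; pseudo-inverse here)
linear = librosa.feature.inverse.mel_to_stft(mel, sr=sr, n_fft=n_fft)

# Griffin-Lim estimates the missing phase and returns a waveform (with some phase distortion)
y_hat = librosa.griffinlim(linear, n_iter=60, hop_length=hop)

sf.write("reconstructed.wav", y_hat, sr)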
Mimic 2 is LIVE!
50
mimic-2-is-live-11abb0d45792
2018-08-06
2018-08-06 14:41:52
https://medium.com/s/story/mimic-2-is-live-11abb0d45792
false
1,465
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Mycroft AI
The Open Source Privacy Focused Voice Company
384b053af525
mycroftai
7
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-30
2018-08-30 13:16:20
2018-08-30
2018-08-30 13:19:18
1
false
en
2018-09-10
2018-09-10 07:59:49
2
11ace9d6fca4
2.283019
2
0
0
If trends are to be believed, by 2030 AI will contribute $15.7 trillion to the global economy. It is set to make products and services…
5
5 reasons why your sales team just cannot ignore Artificial Intelligence anymore If trends are to be believed, by 2030 AI will contribute $15.7 trillion to the global economy. It is set to make products and services better and is expected to improve the GDP of North America by 14% in 2018. Not far from now, more than 85% of customer interactions will be managed without humans, handled instead by AI-enabled bots. Another trend suggests that by 2020, business buyers will base their buying decisions on companies that know everything about their needs. This implies that companies will need to invest in the predictive capabilities of AI. AI can help you find answers to complicated questions such as: How can leads be sorted faster, and which ones should be followed up? How can administrative tasks be reduced and focus be directed towards follow-ups? Which are the high-performing ads? Why are potential customers leaving items in the shopping cart? How can ads be personalized for each consumer? How can customer experience and engagement be improved? Modern-day consumers are changing, so the way lead generation happens is changing. The sales team just cannot ignore AI anymore. Marketers will be able to generate more leads and qualify and convert them better with the use of AI marketing tools. AI will also help you know your customers and prospects better. What are some of the ways that AI can benefit the lead generation process? 1. Better insights into leads AI systems are better at processing data, as they can analyze and process huge amounts of it in no time, whereas humans might miss important insights that are critical in the sales process. There are AI platforms that can analyze data from multiple social platforms, analyze behaviors and understand consumer interests. Some AI systems are sophisticated enough to analyze user pictures to understand demographic interests. It may not seem believable today, but social media preferences reveal a lot about users; for example, Facebook likes uncover what a person is interested in. Other AI tools analyze human language from marketing automation tools, CRM and social media and, based on the results, personalize the pitch that should be made to that consumer. 2. Discover new leads There are AI systems that help you analyze connections between existing customers and their networks. They can evaluate relations between people, companies and products and give you a report on the most likely consumers for your product or service. There is AI software that can integrate with the marketing automation software your enterprise is using, and LinkedIn Sales Navigator helps you find the most suitable leads on LinkedIn. 3. Increase your conversions There are AI tools that use intelligent automation and predictive data to help you win repeat business (with suggested campaigns that will work best for customers and are most likely to get high conversions). Beyond human capability, there are AI-powered tools that can even personalize website content to enhance visitor engagement. With higher engagement, there are more conversions and sales. For more info please visit: http://www.ishir.com/blog/5809/5-reasons-sales-team-just-cannot-ignore-artificial-intelligence-anymore.htm
5 reasons why your sales team just cannot ignore Artificial Intelligence anymore
16
autom5-reasons-why-your-sales-team-just-cannot-ignore-artificial-intelligence-anymore-11ace9d6fca4
2018-09-10
2018-09-10 07:59:49
https://medium.com/s/story/autom5-reasons-why-your-sales-team-just-cannot-ignore-artificial-intelligence-anymore-11ace9d6fca4
false
552
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
ISHIR
We develop future-ready technology solutions for our clients to solve their business problems and help them propel ahead of competitors.
41af1a1d5f65
ISHIRInc
11
89
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-26
2017-09-26 13:32:35
2017-09-26
2017-09-26 13:33:32
3
false
en
2017-09-26
2017-09-26 13:33:32
0
11ad1e3c19af
2.882075
2
0
0
Some four years back, when data stream was picking the pace, many professionals began changing their titles to Data scientist; though the…
3
How to Spot The Real Data Scientist? Some four years back, when the data field was picking up pace, many professionals began changing their titles to Data Scientist, even though the exact qualifications of a ‘Data Scientist’ were loosely defined. This resulted in confusion in the hiring market due to an overstatement of skills in resumes. A couple of years later, ‘Data Scientist’ became ‘the dream profile’ for ambitious job-seekers who wanted a high-paying and alluring job. But there is mounting apprehension among business leaders and employers that applicants are simply tempted by the ‘title’ of Data Scientist without actually possessing the skill set and mindset required to succeed in the job. So genuine as well as fake candidates appear at job interviews with tremendous enthusiasm and optimism. Structuring an interview for a Data Scientist position and knowing whether the candidate is really suitable for the job profile is pretty hard. Usually in data science interviews, candidates are asked to solve brainteasers (related to probability) which are easily solvable if they have memorized the formulas. But that doesn’t indicate whether they will be able to work on, visualize, and draw meaningful insights from raw, unstructured, and messy data. So how can you be sure of a worthy data scientist? A real data scientist has: Good experience with unstructured data and statistical analysis: If the resume lists experience in tools such as Google Analytics, SPSS, Hadoop, AWS, Omniture, and Python, ensure that they are accompanied by projects that showcase the skills put to use. If there is only a vague mention of the experience or the projects, then they are probably not a data scientist. Business acumen: It’s not that a person with a research background will not make a good data scientist, but the key responsibility of a corporate data scientist is to understand ‘how data affects the business goals’ and ‘to give critical insights for risk mitigation and the overall solution.’ Exceptional data skills combined with business savoir-faire are the real determinants of corporate data scientists. An eye for logical and visual interpretation: With strong analytical skills and an ability to spot patterns, a good data scientist is one who understands visual representation of data and is able to interpret it and build a story around it that adds value to an organization. Multidisciplinary: Many organizations narrow their recruiting down to candidates with a Ph.D. in statistics or mathematics. But the truth is, a good data scientist can come from many backgrounds, with no advanced degree in any of them. ‘Person who is better at statistics than any software engineer and better at software engineering than any statistician’ -Josh Wills Curious and creative: A data scientist should be able to look beyond the numbers, even beyond an organization’s data sets, to find answers to problems or to pose new dimensions and insights on an existing scenario. Able to handle noise in data: Any data model draws a fine line between a sample set being too simple to be meaningful and too complex to be trusted. No matter how ‘big’ the data is, its finite sample is riddled with potential bias. A promising data scientist develops a healthy skepticism of their data, techniques, and conclusions. A data scientist not only helps in providing data-driven insights to propel faster decision making but is also recognized as a strategic asset that drives business growth and profitability.
If you can find a candidate with most of these traits, accompanied by the ability and desire to grow, then you have found someone who can deliver incredible value to your system, your business, and your organization at large. “A data aspirant leverages data that everybody sees; a data scientist leverages data that nobody sees.”
How to Spot The Real Data Scientist?
100
how-to-spot-the-real-data-scientist-11ad1e3c19af
2018-01-22
2018-01-22 19:50:39
https://medium.com/s/story/how-to-spot-the-real-data-scientist-11ad1e3c19af
false
618
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
TBM Growth
TBM GROWTH 36ONE° offers specialized expertise at every phase of the business cycle to help organizations, entrepreneurs and leaders .
25f8a4d06d58
tbmgrowth
6
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-12
2018-09-12 19:12:43
2018-09-12
2018-09-12 19:16:24
1
false
en
2018-09-12
2018-09-12 19:16:24
0
11ae0daad74b
1.309434
1
1
0
Today, I am travelling to Nairobi-Kenya, for the first time outside the country in my entire life! I am not going just because I want to go…
3
Social Media can help you Open new doors of Opportunities Today, I am travelling to Nairobi, Kenya, for the first time outside the country in my entire life! I am not going just because I want to go and have fun...NO. I didn’t arrange the travel, simply because I can’t afford to travel at the moment. But yet, today I am travelling to Nairobi. I am going to stay at a wonderful hotel, full board, everything paid just for me, and there is a taxi driver waiting for me at the airport too. This is because I have been invited by the people who actually see the posts that I share on social media. My interest and passion in A.I and tech in general make them believe in me and see the potential in me. They just need me to go and share my experience with their audience. The people who invited me call me Sir, and I'm just wondering if I am the same Yesaya who sits behind the computer all day and shares my journey and experiences along the way while doing my work. So, I am on the timetable to present and lead the discussion around Big Data and Artificial Intelligence. We will dig deeper and take a look at the ethical challenges and legal concerns of these technologies. I will keep you posted on every move I make at this conference, and I am hoping to learn a lot throughout my time in Nairobi. I am so humbled and yet very excited about this. As I get closer and closer to achieving my dreams, there is no doubt that God is behind my moves; he understands my needs and provides, in his time, what's really necessary for me.
Social Media can help you Open new doors of Opportunities
1
social-media-can-help-you-open-new-doors-of-opportunities-11ae0daad74b
2018-09-12
2018-09-12 19:16:24
https://medium.com/s/story/social-media-can-help-you-open-new-doors-of-opportunities-11ae0daad74b
false
294
null
null
null
null
null
null
null
null
null
Kenya
kenya
Kenya
3,266
Yesaya Athuman
null
ed38ef5156ba
yesayaathuman
34
37
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-07
2017-12-07 01:46:27
2017-12-07
2017-12-07 01:46:59
0
false
en
2017-12-07
2017-12-07 01:46:59
0
11af1b62a5cc
4.079245
0
0
0
A deep learning system is an algorithm that receives an input (information or images) and have an output (classification or numeric…
3
Start-Ups and Industry Giants Rely on Deep Learning A deep learning system is an algorithm that receives an input (data or images) and produces an output (a classification or a numeric prediction). The main function of a deep learning model is to recognize patterns. A learning algorithm and neural networks are used to find patterns in the training data. Once the system is trained on that data, new data can be fed to the model to obtain a more reliable output (classification or prediction) than many other machine learning methods provide. In deep learning, the more data you feed the system, the better it becomes at recognizing and generalizing the patterns needed to drive safely, classify, or predict. In the last few years, many enterprises have been hiring people with deep learning training to improve profits, using real-time data and real-time streaming of unstructured or binary data. For example, they perform speech-to-text recognition of audio files, recognize individual speakers, and automatically classify files, or automatically classify image files based on recognized patterns (objects) such as faces, labels, and products. People with specialized deep learning training in image recognition are needed by many companies, including Google. In the last few years, many companies have used advances in computer vision and deep learning to solve real-world business problems, so deep learning training in image recognition is very important in the field. Nowadays, enterprises have people with deep learning training apply techniques such as convolutional neural networks to image classification. This network architecture learns image features automatically, with an accuracy that depends on the quality of the data set, which must be properly labeled and appropriate for the problem. Many industries require people with deep learning training. Industries such as insurance, automotive, financial services, media, health care, and retail have people trained in deep learning bootcamps working on specific classification or prediction problems. In insurance, companies such as Orbital Insight use deep learning to analyze satellite imagery, counting cars to predict mall sales or reading oil tank levels automatically to predict oil production. Insurance companies also use deep learning to analyze damage to assets, so deep learning training in image classification is very important in this field. In the automotive industry, deep learning has been used for many applications such as scene analysis, automated lane detection, automated road-sign reading to set speed limits, self-driving cars, and so on. Besides convolutional neural networks, the automotive industry also uses long short-term memory networks to analyze sensor data and detect other cars and objects around the car. On newer cars, if the driver changes lanes on the highway without setting the turn signal, the car can automatically steer itself back into the lane. Deep learning training in image classification is very important to the automotive industry. Uber has people with deep learning training working on self-driving cars, image classification, and fraud detection problems. Media companies also have people with deep learning training working on image classification problems. 
Media companies, for example, are using deep learning to recognize brands in images. eBay is using deep learning to let users search for products with photos. PayPal is using deep learning to block fraudulent payments. Amazon is using deep learning to make product recommendations for users. Car2go and Uber are using deep learning to predict demand for their ride-sharing fleets, applying LSTM (long short-term memory) networks to temporal information about users; deep learning training in the LSTM technique is required to succeed in this field. Financial services firms use deep learning for automated asset and wealth management and for prediction, combining deep learning and reinforcement learning to make forecasts; many of them need people with deep learning training in forecasting. In health care, people with deep learning training work on classification problems to detect diseases from MRI scans. For example, Arterys uses deep learning to model medical imaging data, and Google, Nvidia, and Massachusetts General Hospital have a partnership to develop applications, staffed by people with deep learning training in classification problems, to detect patterns in radiology tasks. IBM and Google have made important investments in the health care sector to develop deep learning technology, using it for early identification of diseases; for example, they use convolutional neural networks to detect lung cancer. Deep learning training is indispensable for getting a job at these companies. Retail companies have people with deep learning training develop models that analyze shopping carts, detect items, and make recommendations about what else customers might want to buy in store, using sophisticated cameras and convolutional neural networks. The manufacturing sector uses LSTM and other deep learning techniques to predict maintenance needs or energy usage. Artificial Intelligence (AI) is considered the biggest business opportunity in the new economy and is expected to generate $15.7 trillion by 2030. Deep learning is increasingly being used in automotive applications, predicting demand, determining deficiencies in service and product quality, detecting new types of fraud, running streaming analytics on data in motion, and providing predictive or even prescriptive maintenance. Deep learning bootcamps are multiplying because of market demand, and most companies are pushing IT leaders to seek specialists in deep learning. Market research firm Gartner predicted that 80% of data scientists would have deep learning training by 2018, and deep learning bootcamps are training people to apply deep learning to industry problems. The average salary for a deep learning engineer is $149,465 per year. Deep learning techniques promise to transform whole industries; in fact, startups see an opportunity to offer deep technical expertise to companies, from financial firms to Web startups. Google and Facebook have a strong interest in improving deep learning, and these companies need people with certificate courses taken at deep learning bootcamps. Google uses deep learning models for photo search. Facebook uses deep learning for photo-tagging facial recognition. Smartphones use deep learning for speech recognition. The private sector understands the importance of deep learning training. 
Companies like NVIDIA, a technology company based in Santa Clara, California, have announced plans to train 100,000 developers through the NVIDIA Deep Learning Institute.
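Because the article leans so heavily on convolutional neural networks for image classification, a short illustrative sketch may help. This is not code from any company named above; it is a minimal example, assuming TensorFlow 2.x with the bundled Keras API, of the kind of small convolutional architecture that learns image features automatically from a labeled image dataset.
# Minimal illustrative CNN for image classification (assumes TensorFlow 2.x / tf.keras).
import tensorflow as tf
from tensorflow.keras import layers, models
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(64, 64, 3)),  # learn low-level features
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),                           # learn higher-level features
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),                                 # e.g. 10 object classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=5)  # train_images/train_labels are a labeled dataset you supply
A production system would add a deeper architecture, data augmentation, and a carefully labeled dataset, which is exactly the data-quality caveat the article raises.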
Start-Ups and Industry Giants Rely on Deep Learning
0
start-ups-and-industry-giants-rely-on-deep-learning-11af1b62a5cc
2018-04-06
2018-04-06 10:22:47
https://medium.com/s/story/start-ups-and-industry-giants-rely-on-deep-learning-11af1b62a5cc
false
1,081
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
BigDataGuys https://www.bigdataguys.com/ @Medium
Artificial Intelligence, Deep learning, Blockchain Bootcamps, Workshops, Platforms and Consulting. Call 202–446–1670, Enroll today!
20dc99cb93f4
TheBigDataGuys
9
3
20,181,104
null
null
null
null
null
null
0
Actual sum: [2000. 2012. 2024. 2036. 2048. 2060. 2072. 2084. 2096. 2108.] Predicted sum: [1999.9021 2011.9015 2023.9009 2035.9004 2047.8997 2059.8992 2071.8984 2083.898 2095.8975 2107.8967] Actual product: [1000000. 1012032. 1024128. 1036288. 1048512. 1060800. 1073152. 1085568. 1098048. 1110592.] Predicted product: [1000000.2 1012032. 1024127.56 1036288.6 1048512.06 1060800.8 1073151.6 1085567.6 1098047.6 1110592.8 ]
2
dca47aab201b
2018-07-05
2018-07-05 14:23:46
2018-08-15
2018-08-15 11:41:44
4
false
en
2018-09-20
2018-09-20 17:20:43
4
11b0f85c1d1d
3.281132
40
2
1
By Akil Hylton
5
Understanding Neural Arithmetic Logic Units By Akil Hylton DeepMind recently released a new paper titled Neural Arithmetic Logic Units (NALU). It's an interesting paper that tackles an important problem in Deep Learning: teaching neural networks to count. Surprisingly, although neural networks have achieved state-of-the-art results on many tasks such as categorizing lung cancer, they struggle with simpler tasks, like counting numbers. In one experiment demonstrating how networks struggle to extrapolate beyond their training data, the researchers found that a network could handle numbers that ranged between -5 and 5 with near-perfect accuracy, but with numbers outside the training range it couldn't seem to generalize. The paper offers a solution to this, in two parts. Below I'll briefly describe how the NAC works and how it can handle operations such as addition and subtraction. After that, I'll introduce the NALU, which can handle more complex operations such as multiplication and division. I included code you can try that demonstrates these, and you can read the paper above for more details. First Neural Network (NAC) The Neural Accumulator (or NAC for short) is a linear transformation of its inputs. What does this mean? Its transform matrix W is the element-wise product of tanh(W_hat) and sigmoid(M_hat), and that matrix W is then multiplied by the input (x). NAC in Python NAC Second Neural Network (NALU) The Neural Arithmetic Logic Unit, or NALU for short, consists of two NACs combined by a learned gate g, which equals sigmoid(Gx). The second NAC operates in log space, producing m, which equals exp(W(log(|x| + epsilon))). NALU in Python NALU Test NAC by learning the addition➕ Now let's run a test. We'll start off by turning the NAC into a function. NAC function in Python Next, let's create some toy data that will be used for training and test data. NumPy has a handy API called numpy.arange that we will leverage to create our dataset. Toy data for addition Now we can define the boilerplate code to train our model. We start by defining the placeholders X and Y to feed the data at run time. Next we define our NAC network (y_pred, W = NAC(in_dim=X, out_dim=1)). For the loss we will use tf.reduce_sum(). We have two hyper-parameters: alpha, which is the learning rate, and the number of epochs we want to train the network for. Before the training loop is run we need to define an optimizer, so we will use tf.train.AdamOptimizer() to minimize the loss. After training, this is how the cost plot looks: Cost after training on NAC While the NAC can handle operations such as addition and subtraction, it cannot handle multiplication and division. That is where the NALU comes in: it is able to handle more complex operations such as multiplication and division. Test NALU by learning the multiplication✖️ For this we will add the pieces that turn the NAC into a NALU. The Neural Accumulator (NAC) is a linear transformation of its inputs. The Neural Arithmetic Logic Unit (NALU) uses two NACs with tied weights to enable addition/subtraction (smaller purple cell) and multiplication/division (larger purple cell), controlled by a gate (orange cell). NALU function in Python Now let's again create some toy data; however, this time we will make two line changes. Toy data for multiplication Lines 8 and 20 are where the changes were made, switching the addition operator to multiplication. Now we can train our NALU network. 
The only change we make is where we define the network: instead of the NAC we use the NALU (y_pred = NALU(in_dim=X, out_dim=1)). Cost after training on NALU Full implementation in TensorFlow ahylton19/simpleNALU-tf simpleNALU-tf - Simple implementation of NALU in TensorFlow.github.com Closing thoughts🤔 I initially paid little attention to the release of this paper. After watching Siraj's YouTube video on NALU, though, my interest was piqued. I wrote this article to help others understand it, and I hope you find it useful!
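The article's code appears as screenshots that do not survive in this text-only record, so here is a minimal sketch of the two layers and the toy addition test. It assumes TensorFlow 1.x (the tf.placeholder / tf.Session / tf.train API the walkthrough uses); the variable names W_hat, M_hat, and G follow the paper, while the initializers, learning rate, and data ranges are illustrative choices rather than the author's exact ones.
# A minimal NAC/NALU sketch, assuming TensorFlow 1.x; not the author's exact code.
import numpy as np
import tensorflow as tf
def NAC(in_dim, out_dim):
    # Neural Accumulator: W = tanh(W_hat) * sigmoid(M_hat); output = x @ W
    in_features = int(in_dim.shape[-1])
    W_hat = tf.Variable(tf.truncated_normal([in_features, out_dim], stddev=0.02))
    M_hat = tf.Variable(tf.truncated_normal([in_features, out_dim], stddev=0.02))
    W = tf.tanh(W_hat) * tf.sigmoid(M_hat)
    return tf.matmul(in_dim, W), W
def NALU(in_dim, out_dim, epsilon=1e-7):
    # Gate g = sigmoid(Gx) mixes the additive NAC output a with the
    # multiplicative log-space path m = exp(W(log(|x| + epsilon)))
    a, W = NAC(in_dim, out_dim)
    in_features = int(in_dim.shape[-1])
    G = tf.Variable(tf.truncated_normal([in_features, out_dim], stddev=0.02))
    g = tf.sigmoid(tf.matmul(in_dim, G))
    m = tf.exp(tf.matmul(tf.log(tf.abs(in_dim) + epsilon), W))
    return g * a + (1 - g) * m
# Toy addition task: learn y = x1 + x2
X = tf.placeholder(tf.float32, shape=[None, 2])
Y = tf.placeholder(tf.float32, shape=[None, 1])
y_pred, _ = NAC(in_dim=X, out_dim=1)          # swap in NALU(...) for multiplication
loss = tf.reduce_sum(tf.square(y_pred - Y))
train_op = tf.train.AdamOptimizer(0.01).minimize(loss)
x_train = np.random.uniform(1, 10, size=(1000, 2)).astype(np.float32)
y_train = x_train.sum(axis=1, keepdims=True)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(1000):
        sess.run(train_op, feed_dict={X: x_train, Y: y_train})
Swapping y_pred to the NALU function and changing the target to a product reproduces the multiplication experiment described above.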
Understanding Neural Arithmetic Logic Units
309
understanding-neural-arithmetic-logic-units-11b0f85c1d1d
2018-09-20
2018-09-20 17:20:43
https://medium.com/s/story/understanding-neural-arithmetic-logic-units-11b0f85c1d1d
false
684
TensorFlow is a fast, flexible, and scalable open-source machine learning library for research and production.
null
null
null
TensorFlow
tensorflow
TENSORFLOW,MACHINE LEARNING,DEEP LEARNING
tensorflow
Machine Learning
machine-learning
Machine Learning
51,320
Akil Hylton El
AI and Robotics with a little bit of TensorFlow 🧠
75a306256866
hyltona
43
78
20,181,104
null
null
null
null
null
null
0
null
0
cdd8dc4c5fc
2018-08-10
2018-08-10 14:58:09
2018-08-13
2018-08-13 14:41:20
0
false
en
2018-08-13
2018-08-13 15:43:27
12
11b128687cad
5.324528
13
0
0
by Ryan Budish
1
Helping Global Policymakers Navigate AI’s Challenges and Opportunities by Ryan Budish In 2017, United Nations Secretary-General António Guterres noted the difficult challenge that policymakers, particularly those in the Global South, face with respect to AI. He said that “The implications for development are enormous. Developing countries can gain from the benefits of AI, but they also face the highest risk of being left behind.” For example, in Nigeria doctors are using AI to help reduce the incidence of birth asphyxia, a leading cause of under-five death in Africa, and yet at the same time there are real concerns about AI’s impact on rising unemployment and the influence that Google, China, and others are exerting across the Global South. AI technologies are raising complex social, political, technological, economic, and ethical questions, and policymakers around the world increasingly find themselves at the center of these discussions. Policymakers have no choice but to grapple with these emerging challenges, and this requires careful application of the various tools in their governance toolbox. Our work, presented recently at the ITU’s Global Symposium for Regulators, is designed to support Global South policymakers and regulators as they apply this toolbox and try to navigate the difficult tradeoffs in the use of AI technologies while building the capacity necessary to more effectively answer these complex questions. The pace of change with AI is staggering, with more and more real-world applications every day. And each new application creates new unanswered questions, faster than we can find answers. There is no checklist that policymakers can follow that will unlock AI’s tremendous potential while limiting its risks. That’s not to say there is an absence of ideas; to the contrary, there’s an ever-growing list of opinions on what to do. Some experts have called for the formation of new regulatory agencies that specialize in AI or robotics. Some governments and international organizations have started to write non-binding standards to govern the creation and use of AI. Some companies have released their own ethical guidelines constraining their own use of AI. And multistakeholder partnerships are currently formulating their own best practices for the development and deployment of AI. Policymakers, however, need more than theories, as they feel an urgency to take steps to help their citizens navigate and thrive in an increasingly complex space. Although it would be convenient if there were a turnkey approach to guide policymakers in addressing AI’s numerous challenges, a series of structural challenges makes it difficult at this point in time, and perhaps even misguided, to develop and apply a single approach to governing AI: Missing information: For most aspects of AI we currently lack a solid empirical understanding of the short- and long-term consequences of the technologies. In many cases, reliable metrics to track societal impacts beyond unemployment and GDP are not readily available — limiting the possibilities for evidence-based policymaking. Unresolved foundational questions: In many cases we are still working to identify the “right” questions to ask. 
For example, when we look at issues like disinformation or hate speech online, we do not yet even fully understand the scope of the problem, let alone have easily deployable policy solutions. Existing frameworks: AI is being deployed in areas such as medicine, automobiles, telecommunications, and education, which are all spaces with complex local, national, and international legal and regulatory structures in place. As AI technology continues to rapidly evolve, we’re only just beginning to see where existing frameworks are adequate, where they need tweaking, and where entirely new approaches are needed. Despite these challenges, in a time of hype, hysteria, and hope about these new technologies, policymakers need to respond. In the absence of ready-made policy approaches, what are they to do? As part of the Ethics and Governance of AI Initiative at the Berkman Klein Center, this past year we’ve focused on three key areas of research in order to help policymakers: Bridging information gaps: In the AI ethics and governance space, there’s a lot of focus (and rightly so) on infusing more technical knowledge into policymaking. For much of the last year, however, our focus has been on listening. To that end, we’ve convened several dialogues with policymakers and other key stakeholders from around the world, with meetings in the US, Seoul, China, Hong Kong, Switzerland, and Italy. This dialogue series was at its core a listening tour — a chance for us to learn from policymakers about the issues that they are concerned about with respect to AI, about the unique political, social, economic, and cultural factors that will shape the impact of AI within their region, and about the kinds of resources that they need at this important juncture. Beyond listening, it has enabled us to build a more complete picture of existing frameworks we can build upon, and areas where further research or action is needed. Building toward evidence-based decisionmaking: The absence of robust data on the societal impacts of AI technologies is a major limiting factor for policymakers. Through a variety of measures, we’ve been working to help improve the quality, consistency, diversity, and interoperability of available data. For example, through a partnership with Tsinghua University and the AI Index, we have advanced the interoperability between Chinese and US measurements of AI development and impact. We also led the “Data for Good” track at the ITU’s AI for Good Symposium this spring, creating a framework for a Data Commons that would improve the availability of high-quality, open datasets. And we’ve developed a human rights framework for AI impacts to improve foreign policy decisionmaking. Empowering local scholarship: Part of the challenge of AI governance is that so many of the challenges and opportunities are inherently local; a one-size-fits-all approach simply does not reflect those local realities. So as part of our listening tour, we’ve also supported the development of local work on AI governance. For example, we advised Singapore Management University School of Law in the launch of their Center for AI & Data Governance. And our work on global diversity and inclusion in AI has led to proposals for a fund to support AI research in the Global South. 
At the ITU’s 2018 Global Symposium for Regulators, we had the opportunity to share our discussion paper “Setting the Stage for AI Governance: Interfaces, Infrastructures, and Institutions for Policymakers and Regulators,” as part of the AI for Development paper series, with ICT regulators, particularly many from the Global South. This paper directly builds on the listening and dialogues we have had over the past year as part of the Berkman Klein Center under the Ethics and Governance of Artificial Intelligence Initiative. What we heard through our dialogues was that many regulators did not want or need a grand governance framework for AI — they wanted practical steps to begin to close some of the many gaps they see and that worry them. They want to know how to close the knowledge gaps between industry and policymakers, so that they can make more informed decisions that unlock the potential of AI and mitigate its risks. They want to know how they can begin to close the inclusion gaps, so that technologies used in their cities and countries are not just designed by and for people in Silicon Valley and Beijing, and instead reflect their own unique populations and cultures. And they want to know they can begin to close the competitiveness gaps, so their own citizens can begin to develop the next generation of AI technologies. The discussion draft presents a starting place for policymakers and regulators as they try to close those gaps, offering concrete steps and tools that can help. It was a great honor to be able to share that work with senior policymakers from around the world and we look forward to further iterating on these recommendations as we continue to listen and learn from the experiences of policymakers as they work to ensure that the promise of AI is fully realized in every community and not just a few.
Helping Global Policymakers Navigate AI’s Challenges and Opportunities
32
helping-global-policymakers-navigate-ais-challenges-and-opportunities-11b128687cad
2018-08-15
2018-08-15 19:57:56
https://medium.com/s/story/helping-global-policymakers-navigate-ais-challenges-and-opportunities-11b128687cad
false
1,411
Insights from the Berkman Klein community about how technology affects our lives (Opinions expressed reflect the beliefs of individual authors and not the Berkman Klein Center as an institution.)
null
BKCHarvard
null
Berkman Klein Center Collection
berkman-klein-center
null
BKCHarvard
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Berkman Klein Center
The Berkman Klein Center for Internet & Society at Harvard University was founded to explore cyberspace, share in its study, and help pioneer its development.
188295c21f1c
BKCHarvard
297
97
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-19
2018-09-19 18:02:14
2018-09-24
2018-09-24 06:24:57
1
false
en
2018-09-24
2018-09-24 06:24:57
3
11b143b2d4a9
2.003774
1
0
0
I’ve decided to focus on the moral/ethical implications of technology advancement; specifically in our lack of understanding of AI and the…
1
Part I: Choosing/Finalizing Comps Topic I’ve decided to focus on the moral/ethical implications of technological advancement, specifically our lack of understanding of AI and the use of opaque machine learning models in its underlying technology. I will focus on the lack of transparency in the technical aspects of Deep Learning and what’s currently being done about it. Ideation a.k.a lots of arrows and color coding! Philosophical reflection and reasoning should play a critical role in developing new technology, especially as AI systems become more integrated with human life. Recent events like Uber’s AV test that resulted in the death of a pedestrian prompt the question of where things are headed with “scantily regulated, poorly understood technology that has the potential to save lives and create fortunes,” but also to have negative societal impacts that we might not yet understand. Deep Learning and neural nets have led to incredible advances in AI, such that we allow them to make critical decisions autonomously. The Uber self-driving car wasn’t explicitly programmed; it programmed itself by learning from humans, and while it can drive itself, we have no idea how its judgement works while it is driving. There is a black box at the heart of machine learning that is currently unexplainable. We know reasoning is embedded in the complex layers of neural networks, but we don’t know exactly how it works. If we can’t explain the underlying technology of AI, it will be difficult to predict and understand its failures, like the Uber incident, and to assign accountability in such situations. If we are to continue integrating AI into our society, we should be able to ensure that it will have sound judgement that aligns with our own ethical code. This requires some forethought about how we continue to design artificially intelligent systems. As Stanford’s president said in regard to Facebook’s recent consumer data scandal, “maybe some forethought seven to 10 years ago would have been helpful.” Are failures and errors that negatively affect human life just the cost of progress? What price are we willing to pay for potential benefit? To criticize industrial modernity is somehow to criticize the moral advancement of humankind, since a central theme in this narrative is the idea that industrialization revolutionized our humanity, too. Those who criticize industrial society are often met with defensive snarkiness: “So you’d like us to go back to living in caves, would ya?” or “you can’t stop progress!” To continue using AI without fully understanding its reasoning is to recognize only its positive potential for advancement, and to turn a blind eye to history and the consequences of the last industrial revolution. In my project I hope to provide a technical analysis of current machine learning methodology and practice and its AI applications, specifically Deep Learning. I will discuss what we understand as ethical accountability and responsibility and the problem of manifesting that same understanding in AI.
Part I: Choosing/Finalizing Comps Topic
10
part-i-choosing-finalizing-comps-topic-11b143b2d4a9
2018-09-24
2018-09-24 06:24:57
https://medium.com/s/story/part-i-choosing-finalizing-comps-topic-11b143b2d4a9
false
478
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
deGrasse Schrader
Senior CS major at Occidental College. Currently working on undergraduate comprehensive project that focuses on the intersection of technology and philosophy.
865c4afcff56
degrasseschrader
1
10
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-07
2018-05-07 01:44:58
2018-05-07
2018-05-07 01:53:34
2
false
en
2018-05-07
2018-05-07 04:44:11
1
11b2dde3d470
2.496541
1
0
0
On Friday 27th April, at the MCG in Melbourne, Startupbootcamp’s (SBC) three month-long accelerator finally came to a close. The day was an…
5
Startupbootcamp’s Demo Day Finale The Startupbootcamp EnergyAustralia Team On Friday 27th April, at the MCG in Melbourne, Startupbootcamp’s (SBC) three month-long accelerator finally came to a close. The day was an opportunity for the ten startups involved to pitch to an audience of partners, investors and engaged digital minds. The program, run in partnership with EnergyAustralia, focused on organisations which could innovatively explore Energy Efficiency, Energy Independence, Digitisation and Analytics. We were invited by SBC to pitch at FastTrack on 11th October. This was our first interaction with Richard Celm, program director of SBC, and other key stakeholders, “Those invited into the Startupbootcamp program are organisations that were chosen because of the innovative potential they each presented; 1Ansah was certainly no exception to this”, Celm says. The positive feedback and subsequent invitation to selection days in Melbourne set us on our journey through the SBC program. During the program, 1Ansah were stationed at Melbourne’s YBF, where they took part in daily events. “Something that is unique about the Startupbootcamp experience is the very hands-on approach that we take here. All of the startups work closely with staff and with each other to keep advancing every day”, Celm says of the SBC community. The experience of engaging with bright colleagues on a daily basis was exceptional, and the mentorship from industry leaders was priceless. One such industry leader was Andrew Perry, EnergyAustralia Executive for NextGen. “Now, it’s exciting to think that the next great idea, the next great advance in energy, might be unveiled right here”, Perry says, “While this was the first global Demo Day held in Melbourne, it will not be the last”. COO Nishant Sahai delivering the pitch 1Ansah’s COO, Nishant Sahai, delivered our final pitch at Demo Day. He took to the stage in front of hundreds of people to inform them about our Intelligent Platform, and to share the excitement of bringing this new technology to industries. “The pitch was a great opportunity for 1Ansah to be able to share not only our innovative Platform, but our passion for what we do”, Sahai said. The pitch was not only watched by those in attendance, but also streamed live by SBC to thousands of viewers worldwide. Trevor Townsend, CEO of SBC Australia, says that the overall reception of the program has been outstanding, “We set out to achieve international best practice in the execution of our program and the reviews that we have received…indicate that we achieved this goal”. Townsend was a wonderful mentor and supporter throughout the program, “We really appreciated having 1Ansah on the program, the team embraced the acceleration process and took advantage of the opportunities that were presented to them. We started out with the hypothesis that the great work that 1Ansah delivers to the aviation industry would be equally valuable to the energy market and during the 3 month program the team was able to prove this to be the case”. The banners are all packed up, the name tags are gone, and the stage lights are off, but that doesn’t mean our SBC journey is over. 1Ansah have been fortunate to be a part of the SBC EnergyAustralia program and we look forward to continuing on the friendships and partnerships we made in the future. If you’d like to check out our Demo Day pitch, head to this link to see Nishant Sahai in action (starting from 1:27): https://www.facebook.com/sbcEnergyAustralia/videos/2086636121619625/
Startupbootcamp’s Demo Day Finale
2
startupbootcamps-demo-day-finale-11b2dde3d470
2018-05-07
2018-05-07 04:50:44
https://medium.com/s/story/startupbootcamps-demo-day-finale-11b2dde3d470
false
560
null
null
null
null
null
null
null
null
null
Energy
energy
Energy
22,189
LexX Technologies
LexX Technologies are a global business, revolutionising the way maintenance works in our chosen industries. We provide Digital Intelligence for Maintenance
aa6ca5122381
lexxtec
2
5
20,181,104
null
null
null
null
null
null
0
null
0
19fd0cf90e0c
2018-05-03
2018-05-03 05:59:01
2018-05-03
2018-05-03 06:08:13
2
false
en
2018-05-03
2018-05-03 06:08:13
3
11b3b9741d55
0.858805
0
0
0
“I know what your hearts are telling you.”
5
The missile crisis of middle Earth and the place where innovation goes to die “I know what your hearts are telling you.” “The Gross National Product does not include the beauty of our poetry or the intelligence of our public debate. It measures neither our wit nor our courage, neither our wisdom nor our learning, neither our compassion nor our devotion. It measures everything, in short, except that which makes life worthwhile.” ~ Robert F. Kennedy [LI] Pleonastic? “If a thing can be done adequately by means of one, it is superfluous to do it by means of several; for we observe that nature does not employ two instruments [if] one suffices.” ~ Thomas Aquinas [LI] The Admiral suggested our relationship will cause you pain. You need to get to know me a little better.
The missile crisis of middle Earth and the place where innovation goes to die
0
the-missile-crisis-of-middle-earth-and-the-place-where-innovation-goes-to-die-11b3b9741d55
2018-05-03
2018-05-03 06:08:14
https://medium.com/s/story/the-missile-crisis-of-middle-earth-and-the-place-where-innovation-goes-to-die-11b3b9741d55
false
126
“Prosody is the music of language.” ~ Nandini Stocker, who advocates for sounds of silent solidarity and voices of musical magic makers in scented echo chambers. Make sense? I didn’t think so. I know so. We all do. We all shine on. On and on and on.
null
null
null
Living Language Legacies
living-language-legacies
LANGUAGE,VOICE RECOGNITION,SPEECH RECOGNITION,SPEECH,NATURAL LANGUAGE
captionjaneway
Science Fiction
science-fiction
Science Fiction
15,600
Nandini Stocker
Speaking truth brought me war and peace. Amplifying others set me free.
7e6afdd38d52
sevenofnan
426
438
20,181,104
null
null
null
null
null
null
0
null
0
923e17d2b5
2018-07-13
2018-07-13 14:28:51
2018-07-13
2018-07-13 15:05:30
7
false
en
2018-07-13
2018-07-13 15:05:45
10
11b44d099a1b
2.393396
2
0
0
The best articles from this week, curated by the IoT For All Team
5
The 3 Easiest Ways to Secure Your IoT Device, New IQ Test for AI Algorithms, and Computer Vision in Business The best articles from this week, curated by the IoT For All Team IoT Security 🔒3 Easy Ways to Secure Your IoT Device If you pay close attention during the development phase, IoT devices can be made to look for and receive software updates so that future bugs can be patched. The biggest recommendation? Don’t open incoming ports. ⚖️ IoT Security Concerns: Enterprise vs. Consumer When it comes to IoT security, corporates and consumers feel very differently. For consumers, the biggest concern is privacy, while enterprises fear a more robust attack that compromises many users along with company data. Although their attitudes are different, they can still learn from one another. Artificial Intelligence 📝 An IQ Test for AI Developers at Google’s DeepMind created what is essentially an IQ test for AI systems that evaluates the abstract reasoning of neural networks. This ability to detect patterns and solve problems on a conceptual level is a signal of sophistication in AI algorithms. 💻 Computer Vision Applications in Business Right now, there are key opportunities for businesses to use neural networks in image data classification. For example, computer vision can automate paperwork processes that require simple decision making, copying data from paper forms to online sources. Read more applications here. ✋ AI in Biometric User Authentication Security is a growing concern for IoT professionals and businesses, and recent solutions have been utilizing AI to better protect user information and identity. Machine learning and deep learning algorithms are being put to work to better protect accounts and identify compromised information. We’re serious… … IoT For All isn’t just a top Medium publication! Our main site, IoTForAll.com, is filled with even more awesome content, plus exclusive articles we don’t share on Medium. And if you want to get our content the second it’s posted, follow our social channels! IoT For All is brought to you by the curious engineers at Leverege. If you liked this week’s top posts, please clap or share with someone you think would enjoy it! Thank You!
The 3 Easiest Ways to Secure Your IoT Device, New IQ Test for AI Algorithms, and Computer Vision in…
3
the-3-easiest-ways-to-secure-your-iot-device-new-iq-test-for-ai-algorithms-and-computer-vision-in-11b44d099a1b
2018-07-13
2018-07-13 15:05:45
https://medium.com/s/story/the-3-easiest-ways-to-secure-your-iot-device-new-iq-test-for-ai-algorithms-and-computer-vision-in-11b44d099a1b
false
356
Expert analysis, simple explanations, and the latest advances in IoT, AR/VR/MR, AI & ML and beyond! To publish with us please email: [email protected]
null
iot4all
null
IoT For All
iotforall
IOT,INTERNET OF THINGS,TECH,TECHNOLOGY,ARTIFICIAL INTELLIGENCE
iotforall
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Isabel Harner
Director of Marketing @ IoT For All, Leverege | Venture For America Fellow
1ee19dfdf640
isabel.harner
192
37
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-08
2018-06-08 14:13:24
2018-06-08
2018-06-08 14:20:29
0
false
en
2018-06-08
2018-06-08 14:20:29
2
11b557d985e3
3.784906
0
0
0
Most of us understand that Siri, Google Now, and Cortana are intelligent digital personal assistants on several different programs (iOS…
1
A Brief Introduction to Artificial Intelligence Most of us understand that Siri, Google Now, and Cortana are intelligent digital personal assistants on several different platforms (iOS, Android, and Windows Mobile). In short, they help locate useful information when you request it using your own voice; you could say “Where is the nearest Indian restaurant?”, “What is on my schedule today?”, or “Remind me to call Mom or Dad,” and the assistant will respond by finding information, relaying information from your phone, or sending commands to other apps. AI is significant in these apps, as they gather information about your requests and use that information to recognize your speech and serve you results tailored to your preferences. Microsoft says that Cortana “continually learns about its user” and that it will eventually develop the capacity to anticipate users’ needs. Virtual personal assistants process a large quantity of information from an assortment of sources to learn about users and become more effective at helping them track their information. Your smartphone, calculator, video games, car, bank, and house use artificial intelligence every day; sometimes it is obvious what it is doing, like when you ask Siri for directions to the nearest gas station. At other times it is less evident, like when you make an abnormal purchase on your credit card and don’t get a fraud alert from your bank. AI is everywhere, and it is making a huge difference in our lives every day. Read more On Demand App Development Company & Artificial Intelligence in Retail and much more. Thus, we can say that Artificial Intelligence (AI) is the branch of computer science that emphasizes the development of intelligent machines that think and work like humans. Examples include speech recognition, problem-solving, learning, and planning. Nowadays, Artificial Intelligence is an extremely common topic that is widely discussed in tech and business circles. Many specialists and business analysts claim that AI or machine learning is the future — but if we look around, we can see that it is not the future — it is the present. Yes, the technology is in its initial phase, and more and more companies are investing resources in machine learning, indicating strong growth in AI products and apps soon. Artificial intelligence, or machine intelligence, is the simulation of human intelligence processes by machines, especially computer systems. What is AI used for? Vision systems: the need to interpret, fully understand, and make sense of visual input on a computer; AI can be used to try to interpret and understand an image, with industrial, military, and satellite-photo-interpretation uses. What is the purpose of AI? When AI researchers first began to aim for the goal of artificial intelligence, a main interest was human reasoning. The specific functions that are programmed into a computer may be able to account for many of the requirements that allow it to match human intelligence. What is ASI (artificial superintelligence)? A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most talented human minds. What is the objective of AI? Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”. General intelligence is among the field’s long-term goals. 
What are the different forms of AI? We need to overcome the boundaries that define the four distinct kinds of artificial intelligence, and the barriers that separate machines from us and us from them. Type I AI: Reactive machines Type II AI: Limited memory Type III AI: Theory of mind Type IV AI: Self-awareness Is computer vision part of AI? Artificial intelligence and computer vision share topics such as pattern recognition and learning techniques. Thus, computer vision is sometimes regarded as part of the artificial intelligence field, or of the computer science field in general. Is machine learning the same as artificial intelligence? Increasingly, machine learning (ML) and artificial intelligence (AI) are cropping up as solutions for managing data. The two are frequently used interchangeably, and even though there are some parallels, they are not the same thing. What are the areas of artificial intelligence? · Optical character recognition. · Face recognition. · Artificial creativity. · Computer vision, virtual reality, and image processing. · Diagnosis (AI). · Game theory and strategic planning. How significant is Artificial Intelligence? AI refers to machines that are engineered and designed in such a way that they think and act like a human. Artificial Intelligence has become a significant part of our daily life; our lives are changed by AI because this technology is used in a wide range of day-to-day services. For the majority of us, the most obvious results of the enhanced powers of AI are awesome new gadgets and experiences such as smart speakers, or being able to unlock your iPhone with your face. But AI is poised to reinvent other areas of life. One is health care: physicians in India are testing software that assesses images of a person’s retina for signs of diabetic retinopathy, a condition frequently diagnosed too late to prevent vision loss. Machine learning is essential to projects in autonomous driving, where it permits a vehicle to make sense of its surroundings. Artificial intelligence is already present in plenty of applications, from search algorithms and tools that you use daily to bionic limbs for the disabled. At times it seems like every site, app, or productivity tool cites AI as the key ingredient in its recipe for success. What is less common is an explanation of what AI is, why it is so popular, and how businesses are using it to provide better consumer experiences. If you don’t know much about AI, the absence of an explanation can be confusing. Nowadays, the field of artificial intelligence is more lively than ever, and some think that we are on the brink of discoveries that could change human society irreversibly, for better or worse.
A Brief Introduction to Artificial Intelligence
0
a-brief-introduction-to-artificial-intelligence-11b557d985e3
2018-06-08
2018-06-08 14:20:30
https://medium.com/s/story/a-brief-introduction-to-artificial-intelligence-11b557d985e3
false
1,003
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Charlotte Lancaster
null
40548daef2a1
lancastercha
0
1
20,181,104
null
null
null
null
null
null
0
#importing necessary libraries
#data analysis and manipulation libraries
import numpy as np
import pandas as pd
#visualization libraries
import matplotlib.pyplot as plt
import seaborn as sns
#machine learning libraries
#the line below is for making fake data for illustration purposes
from sklearn.datasets import make_blobs
#creating fake data
data = make_blobs(n_samples=500, n_features=8, centers=5, cluster_std=1.5, random_state=201)
#let's take a look at our fake data
data[0]  #produces an array of our samples
#viewing the cluster assignments of our data
data[1]
#creating a scatter plot of our data in features 1 and 2
plt.scatter(data[0][:, 0], data[0][:, 1])
5
null
2018-03-26
2018-03-26 12:06:50
2018-03-26
2018-03-26 12:09:11
4
false
en
2018-03-26
2018-03-26 12:09:11
3
11b59d3521
3.749057
1
0
0
Statistical Arbitrage is one of the most recognizable quantitative trading strategies. Though several variations exist, the basic premise…
5
K-Means Clustering For Pair Selection In Python — Part II Statistical Arbitrage is one of the most recognizable quantitative trading strategies. Though several variations exist, the basic premise is that despite two securities being random walks, their relationship is not random, thus yielding a trading opportunity. A key concern in implementing any version of statistical arbitrage is the process of pair selection. Part II: Understanding K-Means In this post, we will survey a machine learning technique to address the issue of pair selection. What Is K-Means Clustering? K-Means Clustering is a type of unsupervised machine learning that groups data on the basis of similarities. Recall that in supervised machine learning we provide the algorithm with features or variables that we would like it to associate with labels, the outcome we would like it to predict or classify. In unsupervised machine learning we only provide the model with features, and it then “learns” the associations on its own. K-Means is one technique for finding subgroups within datasets. One difference between K-Means and some other clustering methods is that in K-Means we specify the number of clusters in advance, while other techniques do not require the number of clusters to be predefined. The algorithm begins by randomly assigning each data point to a specific cluster, with no one data point being in any two clusters. It then calculates the centroid, or mean, of these points. The objective of the algorithm is to reduce the total within-cluster variation. In other words, we want to place each point into a specific cluster, measure the distances from the centroid of that cluster, and then take the squared sum of these to get the total within-cluster variation. Our goal is to reduce this value. The process of assigning data points and calculating the squared distances continues until there are no more changes in the components of the clusters, or in other words, until we have optimally reduced the within-cluster variation. How K-Means Works Let’s take a look at how K-Means works. We will begin by importing our usual data analysis and manipulation libraries. Scikit-learn offers built-in datasets that you can play with to get familiar with various algorithms. You can take a look at some of the datasets provided by sklearn here. To gain an understanding of how K-Means works, we’re going to create our own toy data and visualize the clusters. Then we will use sklearn’s KMeans algorithm to assess its ability to identify the clusters that we created. Let’s get started! Now that we have imported our data analysis and visualization libraries and the make_blobs method from sklearn, we’re ready to create our toy data and begin our analysis. In the above line of code, we have created a variable named data and have initialized it using the make_blobs function imported from sklearn. make_blobs allows us to specify the parameters of the data we’re going to create: the number of samples (observations equally divided between clusters), the number of features, the number of clusters, the cluster standard deviation, and a random state. Using the centers parameter, we can determine the number of clusters that we want to create from our toy data. Now that we have initialized the method, let’s take a look at our data. Printing data[0] returns an array of our samples. These are the toy data points we created when setting the n_samples parameter in our make_blobs call. 
We can also view the cluster assignments we created. Printing data[1] allows us to view the clusters created. Note that though we specified five clusters in our initialization, our cluster assignments range from 0 to 4. This is because Python indexing begins at 0, not 1, so cluster counting, so to speak, begins at 0 and continues for five steps. We’ve taken a look at our data and viewed our clusters, but looking at arrays doesn’t give us a lot of information. This is where our visualization libraries come in. Python’s matplotlib is a great library for visualizing data so that we can make inferences about it. Let’s create a scatter plot, a visual that helps us identify the relationships inherent in our data. Read more here
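To close the loop on the walkthrough, here is a short, hedged continuation. It assumes the same toy data produced by the make_blobs snippet earlier in this record, fits scikit-learn's KMeans with five clusters, and plots the learned assignments next to the original ones.
# Continuation sketch: fit K-Means on the toy data and compare to the true clusters.
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
# Recreate the toy data (same parameters as the earlier snippet)
data = make_blobs(n_samples=500, n_features=8, centers=5, cluster_std=1.5, random_state=201)
kmeans = KMeans(n_clusters=5, random_state=201)  # number of clusters is predefined in K-Means
kmeans.fit(data[0])
# Side-by-side scatter plots of the true clusters and the K-Means assignments
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.scatter(data[0][:, 0], data[0][:, 1], c=data[1], cmap="rainbow")
ax1.set_title("Original clusters")
ax2.scatter(data[0][:, 0], data[0][:, 1], c=kmeans.labels_, cmap="rainbow")
ax2.set_title("K-Means assignments")
plt.show()
Because K-Means labels are arbitrary, the colors in the two panels may not match even when the groupings themselves agree.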
K-Means Clustering For Pair Selection In Python — Part II
1
k-means-clustering-for-pair-selection-in-python-part-ii-11b59d3521
2018-04-07
2018-04-07 20:16:06
https://medium.com/s/story/k-means-clustering-for-pair-selection-in-python-part-ii-11b59d3521
false
808
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
QuantInsti®
QuantInsti is an Algorithmic Trading Training institute focused on preparing professionals and students for HFT & Algorithmic Trading.
42079579cd65
QuantInsti
379
138
20,181,104
null
null
null
null
null
null
0
null
0
981f5f892202
2018-04-27
2018-04-27 14:24:37
2018-04-27
2018-04-27 14:26:25
4
false
en
2018-05-10
2018-05-10 10:21:54
0
11b632321cac
1.60566
188
0
0
The Ubex advertising exchange project has launched its roadshow and presentations at the d10e Conference in Seoul on the 6th of March…
5
Ubex Roadshow Continues in Seoul The Ubex advertising exchange project has launched its roadshow and presentations at the d10e Conference in Seoul on the 6th of March, making important announcements during its debut in the Asian theater of the industry. The d10e Conference held in Seoul is one of the foremost platforms for the development and presentation of blockchain technologies and projects, attracting specialists from around the world. On the second day of the conference, Ubex presented the project and officially announced the start of White List registration. The results of the conference were extremely positive as a number of meetings were held with local partners. The overwhelming success of the presentation and heightened interest in the project from local enthusiasts forced the Ubex team to redouble their efforts so as to have the opportunity to establish working relations with the enormous number of willing Korean project participants and partners. After Seoul, the Ubex team will launch its full scale roadshow with its next destination being the Money 2020 Conference in Singapore, which will be held on 13th of March. After Singapore, events in Vietnam, Shanghai, Hong Kong, Tokyo and finally Switzerland are to follow as Ubex will present the project at various world-class venues. Stay tuned to the latest news and updates from Ubex by subscribing to its official Telegram, YouTube and KakaoTalk channels. The first 100 subscribers of the KakaoTalk Ubex channel will be entitled to rewards.
Ubex Roadshow Continues in Seoul
8,325
ubex-roadshow-continues-in-seoul-11b632321cac
2018-06-13
2018-06-13 11:19:02
https://medium.com/s/story/ubex-roadshow-continues-in-seoul-11b632321cac
false
240
Ubex is a global decentralized advertising exchange where companies advertise effectively, while publishers profitably tokenize ad slots on their resources. OUR MISSION: To create a global advertising ecosystem with a high level of mutual trust and maximum efficiency.
null
UbexAl
null
Ubex
ubex
null
ubex_ai
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Ubex AI
Telegram: t.me/UbexAI
bb334fb51280
ubex
6,289
2
20,181,104
null
null
null
null
null
null
0
null
0
d475cddee5ca
2018-06-19
2018-06-19 07:49:57
2018-06-19
2018-06-19 07:50:46
1
false
en
2018-06-19
2018-06-19 07:52:53
1
11b648805a08
0.74717
2
0
0
A smart contract drafted in a semantic modelling language is, on the one hand, easily understood by specialists in the appropriate subject…
4
Artificial Intelligence 2.0 — what is it? A smart contract drafted in a semantic modelling language is, on the one hand, easily understood by specialists in the appropriate subject domain and, on the other hand, automatically verifiable and able to run on a computer or other digital device. It allows us to build a bridge between the human brain and artificial intelligence. This is what we call Artificial Intelligence 2.0. AI today is rather a black box for us, as we don’t really see what is happening inside it. With semantic contracts anyone can set the rules and check a neural network’s behaviour. Similar to Isaac Asimov’s three laws of robotics, semantic contracts can use human language to control the work of complicated neural networks in critical situations, e.g. when a human life is in danger. Go to the next level of artificial intelligence with Kirik: https://goo.gl/kgNjFv
Artificial Intelligence 2.0 — what is it?
52
artificial-intelligence-2-0-what-is-it-11b648805a08
2018-06-20
2018-06-20 04:20:54
https://medium.com/s/story/artificial-intelligence-2-0-what-is-it-11b648805a08
false
145
Semantic contracts based on the theory of executable specifications developed by the world famous mathematicians. The technology is understandable to specialists in various subject areas and allows to conduct transactions between various blockchains and outside of them.
null
kirik.protocol
null
KIRIK PROTOCOL
kirik-protocol
SMART CONTRACTS,BLOCKCHAIN TECHNOLOGY,BLOCKCHAIN,SEMANTICS,TECHNOLOGY
kirik_protocol
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Vitaly Gumirov
KIRIK.io co-founder
d472f893466d
vitg
29
9
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-10
2018-09-10 15:55:29
2018-09-10
2018-09-10 16:07:18
0
false
en
2018-09-10
2018-09-10 16:45:37
6
11b7e2dc7654
1.898113
0
0
0
Mode- It is the most commonly occurring value in a distribution.
5
6-Central Tendency Mode- It is the most commonly occurring value in a distribution. Consider this data-set showing the retirement age of 11 people, in whole years: 54, 54, 54, 55, 56, 57, 57, 58, 58, 60, 60 The most commonly occurring value is 54, therefore the mode of this distribution is 54 years. Advantage of the mode: The mode has an advantage over the median and the mean as it can be found for both numerical and categorical (non-numerical) data. Limitations of the mode: There are some limitations to using the mode. In some distributions, the mode may not reflect the center of the distribution very well. When the distribution of retirement age is ordered from lowest to highest value, it is easy to see that the center of the distribution is 57 years, but the mode is lower, at 54 years. 54, 54, 54, 55, 56, 57, 57, 58, 58, 60, 60 It is also possible for there to be more than one mode for the same distribution of data (bi-modal, or multi-modal). The presence of more than one mode can limit the ability of the mode to describe the center or typical value of the distribution, because a single value to describe the center cannot be identified. In some cases, particularly where the data are continuous, the distribution may have no mode at all (i.e. if all values are different). In cases such as these, it may be better to consider using the median or mean, or group the data into appropriate intervals and find the modal class. Median- The median is the middle value in a distribution when the values are arranged in ascending or descending order. The median divides the distribution in half (there are 50% of observations on either side of the median value). In a distribution with an odd number of observations, the median value is the middle value. Looking at the retirement age distribution (which has 11 observations), the median is the middle value, which is 57 years: 54, 54, 54, 55, 56, 57, 57, 58, 58, 60, 60 When the distribution has an even number of observations, the median value is the mean of the two middle values. In the following distribution, the two middle values are 56 and 57, therefore the median equals 56.5 years: 52, 54, 54, 54, 55, 56, 57, 57, 58, 58, 60, 60 Advantage of the median: The median is less affected by outliers and skewed data than the mean, and is usually the preferred measure of central tendency when the distribution is not symmetrical. Limitation of the median: The median cannot be identified for categorical nominal data, as it cannot be logically ordered. The median is usually preferred in these situations because the value of the mean can be distorted by outliers. However, it will depend on how influential the outliers are. If they do not significantly distort the mean, using the mean as the measure of central tendency will usually be preferred.
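The two examples above can be checked quickly in code. Here is a small sketch using Python's built-in statistics module; the retirement-age data is the same as in the text, and statistics.multimode requires Python 3.8 or newer.
import statistics

ages_odd = [54, 54, 54, 55, 56, 57, 57, 58, 58, 60, 60]       # 11 observations
ages_even = [52, 54, 54, 54, 55, 56, 57, 57, 58, 58, 60, 60]  # 12 observations

print(statistics.mode(ages_odd))      # 54   -> the most frequent value
print(statistics.median(ages_odd))    # 57   -> middle value of 11 ordered values
print(statistics.median(ages_even))   # 56.5 -> mean of the two middle values, 56 and 57
print(statistics.multimode([1, 1, 2, 2, 3]))  # [1, 2] -> a bi-modal distribution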
6-Central Tendency
0
6-central-tendency-11b7e2dc7654
2018-09-10
2018-09-10 16:45:37
https://medium.com/s/story/6-central-tendency-11b7e2dc7654
false
503
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Ankit Gupta
Ex Credit Suisse, Ex HSBC | My need for Freedom dominates my decisions | Want to go everywhere
bc23f2a2e52a
guptas08
2
5
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-24
2017-11-24 09:01:13
2017-11-24
2017-11-24 09:01:28
0
false
en
2017-11-24
2017-11-24 09:01:28
1
11b8dec109e4
0.124528
0
0
0
AI (artificial intelligence) is different from human intelligence, but they can be integrated to form super intelligence for solving…
2
AI can be integrated with human intelligence to solve problems AI (artificial intelligence) is different from human intelligence, but they can be integrated to form super intelligence for solving problems http://www.digitimes.com/news/a20171124PD203.html #machinelearning #ML
AI can be integrated with human intelligence to solve problems
0
ai-can-be-integrated-with-human-intelligence-to-solve-problems-11b8dec109e4
2017-11-24
2017-11-24 09:01:29
https://medium.com/s/story/ai-can-be-integrated-with-human-intelligence-to-solve-problems-11b8dec109e4
false
33
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Sergio Gaiotto
null
746ab14c82dc
sergio.gaiotto
35
56
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-04
2017-09-04 21:31:44
2017-09-04
2017-09-04 22:27:36
0
false
en
2017-09-04
2017-09-04 22:27:36
5
11b8f534364a
2.090566
1
0
0
I am new to this, writing isn’t my strongest skill. I do not believe that what I have to say is thought provoking, new, or even important…
4
Taking my first steps into a larger world. I am new to this, and writing isn’t my strongest skill. I do not believe that what I have to say is thought provoking, new, or even important. So instead of going for special, I will use Medium to entertain myself with my own thoughts and gain clarity on the things that I am working on. So what am I working on? (Way way way too many things) After reading “Statistics Done Wrong” by Alex Reinhart, I was compelled to explore the verbiage and depth at which scientific articles present their findings. So I clanked together a web-scraping script and put it to work over the eclipse weekend. Turns out, you shouldn’t leave an untested autonomous article-mining script running while you go to an internet void for a weekend. Lucky for me the script crashed on day two (probably not my fault, I am going to blame the requests library). With about 10,500 PDF articles at my disposal I set about trying to convert them to raw text. Luckily there is a Python library for that! PDFMiner is a wonderful little tool that had me up and running with some unicode- and ascii-riddled text in about 10 minutes. Now to clean it up. As I just want to look at the words used, I removed the unicode and punctuation. I then found that the PDF-to-text conversion left me with a slew of words that were smashed together, such as: “positionfrom” and “nonfullrank”. I had to come up with a method for fixing this. I love problems like this because I know that the solution is out there, but I would rather build it myself and understand the nitty-gritty. My solution may not be the best, the fastest, or the prettiest, but it’s mine and I can tweak it to my whim. The monkey wrench: some of these strings are made up of 2, 3, or 4 words. And even worse, some of the words are proper nouns. With the help of stack-exchange, I was able to get a list of all chunks of a string. It would be nice if there were a quick way to check if a word is misspelled… Luckily there is a Python library for that! (There needs to be an acronym for that.) After a quick “pip install pyenchant” I was ready to check each sub-string and assign a boolean value to correct spellings. I even allowed a correct spelling if the sub-string showed up in my article (taking care of most of those pesky proper nouns). Now I am off to the races to build a corpus of words and look at the frequency of statistical terms in my cleaned articles. So what did I learn from all this? There is a library for that, but nothing beats building your own slow clunky method. Sure, I could have built my own method to actually check misspellings (I have a set of words; just intersect my word with the set), but I probably would have spent the rest of the week just finding the first step in writing a PDF converter. Choose your battles and go for things that seem just out of reach. Check out the repo if you must. More to come…
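For readers curious what that sub-string check might look like, here is a rough sketch of the idea described above, assuming the pyenchant library (pip install pyenchant). The recursive splitter and the article-vocabulary fallback are illustrative reconstructions, not the author's actual code, and the exact splits depend on the installed dictionary.
import enchant

dictionary = enchant.Dict("en_US")
article_vocab = {"reinhart"}  # stand-in for proper nouns seen elsewhere in the article

def is_word(token):
    # Accept a token if the dictionary knows it or the article itself uses it.
    return dictionary.check(token) or token.lower() in article_vocab

def split_smashed(text):
    # Try to split a run-together string into valid words; return None if impossible.
    if is_word(text):
        return [text]
    for i in range(1, len(text)):
        left, right = text[:i], text[i:]
        if is_word(left):
            rest = split_smashed(right)
            if rest is not None:
                return [left] + rest
    return None

print(split_smashed("nonfullrank"))  # e.g. ['non', 'full', 'rank'], dictionary-dependent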
Taking my first steps into a larger world.
1
taking-my-first-steps-into-a-larger-world-11b8f534364a
2018-06-03
2018-06-03 00:58:34
https://medium.com/s/story/taking-my-first-steps-into-a-larger-world-11b8f534364a
false
554
null
null
null
null
null
null
null
null
null
Programming
programming
Programming
80,554
The Astro Cat
I am a Data Scientist, Physicist, and Cat Enthusiast looking to explain the world with math.
8773edf38191
theastrocat
0
18
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-01
2017-12-01 06:31:23
2017-12-01
2017-12-01 06:44:07
1
false
ru
2017-12-01
2017-12-01 07:26:50
1
11ba9d119a1e
1.932075
1
0
0
Messengers have become (or will become in the near future) the new media. Why do we think so? Judge for yourself: by some estimates, more than 3.6 billion …
5
Can hotel chatbots help the hospitality business? Messengers have become (or will become in the near future) the new media. Why do we think so? Judge for yourself: by some estimates, more than 3.6 billion people will be using instant messaging apps to communicate with their entire contact list by the end of 2018. At the moment, around 2.5 billion people have already installed at least one messenger. People get their information from sources that have become far more convenient than traditional media, and you can’t blame them. A clear trend has emerged in the hotel business: some hotels already offer communication via messengers. For example, you can contact them through their own apps, or by an alternative route: classic SMS, Facebook Messenger, or WhatsApp. In 2016, bots were adopted by Kayak, Expedia.com and Skyscanner, some of the best-known travel brands. They did not come up with this on their own: the technology was borrowed from the major messenger platforms, Facebook Messenger and Telegram. We can be confident that other travel brands and independent hotels will follow the example of the services listed above. These are the first signs of a real chatbot revolution. Let’s look at the scenarios and benefits that new technologies and hotel chatbots can bring to the hospitality business: A new booking channel. For consumers to start booking through the communication channel that is most convenient for them, hotels first need to establish a presence in messengers (since most of their customers are already there). Thanks to this, both individual hosts and hotel chains will be able to significantly reduce their dependence on travel agencies and move to direct online business (naturally, without commission). Building guest loyalty. Chatbots will make it possible to provide a highly personalized approach at every stage of the trip: from booking the hotel to interaction during and even after the stay. Creating and updating a guest profile history. A chatbot can collect information about a guest by interacting with them at every stage of the journey; all of this useful data can then be automated and used during the guest’s next visits. A new source of revenue. Drawing on the profile history of a customer who has already stayed at the hotel, a chatbot can send the guest personalized offers. For example, it could offer to book a transfer to or from the airport, or suggest places to order dinner or book spa treatments. In such cases, intervention by hotel staff is no longer necessary; everything can be arranged directly. Offloading hotel staff. With chatbots in place, front-desk staff will only need to handle the duties that cannot be automated. To sum up, chatbots can help hotels take customer service to a completely different, more advanced level. Of course, technology cannot entirely eliminate a guest’s need for human communication: don’t forget about complaints, all kinds of exceptions to the rules, special requests and unforeseen situations. But for such conversations, the bot can always hand the customer over to a live employee. In this way, chatbots will become a huge advantage for the hospitality business.
Can hotel chatbots help the hospitality business?
10
chatbots-for-hotels-11ba9d119a1e
2018-03-10
2018-03-10 14:21:32
https://medium.com/s/story/chatbots-for-hotels-11ba9d119a1e
false
459
null
null
null
null
null
null
null
null
null
Chatbots
chatbots
Chatbots
15,820
hotbot.ai — we build chatbots for hotels
http://hotbot.ai/
c17fe003f71c
hotbot
9
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-16
2017-11-16 23:53:41
2017-11-16
2017-11-16 23:58:24
1
false
en
2017-11-16
2017-11-16 23:58:24
14
11bc8ceb6264
1.954717
0
0
0
Here’s everything that’s new in artificial intelligence and computer vision, with a little tech pop culture to make the medicine go down…
2
AI Inspiration #12: Funniest Computer Vision Fails; Shazam for Fashion; Aibo Returns Here’s everything that’s new in artificial intelligence and computer vision, with a little tech pop culture to make the medicine go down. Our logic is undeniable. Is that a turtle or a rifle? 8 Times Computer Vision Hilariously Failed You heard the one about Google’s AI mistaking a turtle for a rifle? That’s just the latest computer vision blooper — a common occurrence with any emerging technology — but here are eight others that definitely include everything from Apple and Samsung to amusingly oddball AI-created smartphone cases and self-destructive robots. The Visionary Shazaming the Kim Kardashian Look Celebs on Instagram may spur sales or aspirational envy when they share and tag images and posts of clothing and accessories, but all too often the stuff is out of reach for the average fashion-focused fan. So kudos to Kim Kardashian for advising a ScreenShop, a new smartphone app that lets users upload images they capture or find online of clothing or accessories they like, then uses computer vision to find similar items in multiple price ranges from dozens of retailers including H&M, Nordstrom, and TopShop. CNN Designer Robots The future of creativity involves humans working with AI, but how? Here’s one idea: AirBnB has designed a machine-learning system that recognizes low-fi wireframes, the symbols designers used to initially sketch out user interfaces, and automatically generates them into code. This is great for UI designers, but whither the creative coders of the future? Airbnb Design Did Somebody Order an Assassin? Did Amazon jump the tech shark with its new computer vision-enabled Key service? Great satire is often the first sign. Regardless, The Onion offers this hilarious take on the delivery service of the future, which might make for a biting horror comedy movie one day. The Onion Out of the Doghouse After a decade’s absence, Sony’s ahead-of-its-time Aibo robo-dog is returning to the market, in Japan, at least, this time with more expressive OLED eyes, improved computer vision that learns from pictures it takes, and a slew of competitors such as Sharp’s Robohon and Mitsubishi’s Pepper. The Telegraph Automating the Automation Good AI talent is scarce, but the need for machine learning is ever-growing. So, the only way to design and build more AI is to create AI that can design and build AI on its own, at least that’s what’s behind initiatives such as Google’s OpenML. Welcome to Neural-Network-Building-as-a-Service. New York Times Anyone you know interested in computer vision? Forward this to them so they can subscribe, too. And please submit any computer vision stories you think we’d be interested in posting. The Visionary newsletter is produced by GumGum.
AI Inspiration #12: Funniest Computer Vision Fails; Shazam for Fashion; Aibo Returns
0
ai-inspiration-12-funniest-computer-vision-fails-shazam-for-fashion-aibo-returns-11bc8ceb6264
2018-03-17
2018-03-17 05:29:09
https://medium.com/s/story/ai-inspiration-12-funniest-computer-vision-fails-shazam-for-fashion-aibo-returns-11bc8ceb6264
false
465
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
The Visionary
Weekly computer vision news, exclusive visual content and original feature-length articles on how AI intersects with your daily life, business and marketing.
5ef0074c3a03
thevisionary_73083
16
2
20,181,104
null
null
null
null
null
null
0
null
0
721b17443fd5
2018-06-20
2018-06-20 04:06:51
2018-06-22
2018-06-22 18:19:06
10
false
en
2018-07-12
2018-07-12 05:45:55
7
11bec9262d64
3.846226
5
1
0
As mentioned in the previous post, polynomial regression is a special case of linear regression. As we have seen in linear regression we…
5
Polynomial regression As mentioned in the previous post, polynomial regression is a special case of linear regression. As we saw in linear regression, we have two axes: the X axis for the data values and the Y axis for the target values. Why use polynomial regression? Well, in the previous example the data was more or less linear, so we got a good fit line on the data. But considering real-world examples, the data might not be so linear but more scattered. In such cases linear regression might not be the best way to describe the data. A curved or non-linear line might be a better fit for such data. For example: An Example of Scatter Plot See this example: the points are scattered/diversified. Thus a simple straight line might not be the best for such a data set. So now we know why we should use polynomial regression. Let us dive deep into how we should use it. The equation of a quadratic, or polynomial of degree 2, is: Polynomial of degree 2 Similarly, an equation of degree 3: Polynomial degree 3 A polynomial of degree n would look like: Where n is the degree of the polynomial Now that we are done with the math, let’s focus on how we are going to fit data into a polynomial equation. Example of polynomial Curve For this we are going to use the PolynomialFeatures() function in the sklearn library with Python. from sklearn.preprocessing import PolynomialFeatures So, how does the PolynomialFeatures() function work exactly? Its work is quite simple, actually. It takes a matrix of features and transforms it into a feature matrix of quadratic nature (in the case of degree two). Let’s say we have a matrix of two features X=[[0,1],[2,3],[4,5]] or, Matrix Now after we apply poly=PolynomialFeatures(degree=2) poly_X=poly.fit_transform(X) what we actually get is a matrix in the form of [1, a, b, a², a*b, b²] [1, a, b, a², a*b, b²] Here is the example code for simple Polynomial Regression. Download code: neelindresh/NeelBlog NeelBlog - Contains the code and csv from my bloggithub.com So the data set I have is a simple one. It has basically two columns: the list price and best price for a pickup truck company. So let us see. CODE: import pandas as pd df=pd.read_csv("/home/indresh/PycharmProjects/MLCoursera/DataSet/test.csv") x=df.iloc[:,0:1].values y=df.iloc[:,1].values from sklearn.preprocessing import PolynomialFeatures poly=PolynomialFeatures(degree=3) poly_x=poly.fit_transform(x) from sklearn.linear_model import LinearRegression regressor=LinearRegression() regressor.fit(poly_x,y) import matplotlib.pyplot as plt plt.scatter(x,y,color='red') plt.plot(x,regressor.predict(poly.fit_transform(x)),color='blue') plt.show() Load CSV file Get csv from: neelindresh/NeelBlog NeelBlog - Contains the code and csv from my bloggithub.com import pandas as pd df=pd.read_csv("/home/indresh/PycharmProjects/MLCoursera/DataSet/test.csv") Take X_axis and Y_axis x=df.iloc[:,0:1].values y=df.iloc[:,1].values Note: Why use df.iloc[:,0:1] instead of simply df.iloc[:,0]? The reason is that df.iloc[:,0] creates a 1D array, like [1,2,3,4,…n], which cannot be fitted into the polynomial model. We need a 2D array to fit_transform() the X_axis data, and df.iloc[:,0:1] creates a 2D matrix [[1,2,3,4,…n]]. Now what does .iloc[] do? Or better, how does it work? Ans: .iloc[] basically selects a row from a data frame. If we had said df.iloc[4] we would get the value of the fourth row. 
Something like this: List price 16.1 Best Price 14.1 Name: 4, dtype: float64 So when we say .iloc[:,0:1] it means .iloc[all the rows : of column 0] Import PolynomialFeatures from the sklearn.preprocessing library from sklearn.preprocessing import PolynomialFeatures Now for the exciting part poly=PolynomialFeatures(degree=3) poly_x=poly.fit_transform(x) By PolynomialFeatures(degree=3) we are saying that the degree of the polynomial curve will be 3 (try it with higher values). poly_x=poly.fit_transform(x) transforms the array into polynomial form. As I mentioned before, the output will be [ [1, x , x² ] ] [[ 1. ,12.39999962 ,153.75999058] [ 1. ,14.30000019 ,204.49000543] [ 1. ,14.5 ,210.25 ] [ 1. ,14.89999962 ,222.00998868] Note: the array had only one value, so [1,x,x²]; if it had 2 values x, x1 then the matrix would look like [[1,x,x1,x²,x*x1,x1²]] This is the same as the linear model I described in the previous posts: Linear Regression Part 1 Linear Regression is the simplest type of Supervised learning. The goal of Regression is to explore the relation…dataneel.wordpress.com from sklearn.linear_model import LinearRegression regressor=LinearRegression() regressor.fit(poly_x,y) import matplotlib.pyplot as plt plt.scatter(x,y,color='red') plt.plot(x,regressor.predict(poly.fit_transform(x)),color='blue') plt.show() Output of degree 2 polynomial Neel Bhattacharyya Programming lovewww.youtube.com Data Science for Everyone As we discussed in the previous post Linear regression part 1 Linear Regression Part 1 We use multiple Regression when…dataneel.wordpress.com
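Since the author's CSV lives on his local machine, here is a self-contained variant of the same workflow that anyone can run: it generates a small non-linear toy dataset in place of the list-price/best-price data and then applies the identical PolynomialFeatures + LinearRegression steps. The toy data and the degree are illustrative, not taken from the original post.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Toy data: y is a noisy cubic function of x; x must be 2D for fit_transform().
rng = np.random.RandomState(0)
x = np.sort(rng.uniform(0, 10, 40)).reshape(-1, 1)
y = 0.5 * x.ravel()**3 - 4 * x.ravel()**2 + x.ravel() + rng.normal(0, 10, 40)

# Expand features to [1, x, x^2, x^3], then fit an ordinary linear model on them.
poly = PolynomialFeatures(degree=3)
poly_x = poly.fit_transform(x)
regressor = LinearRegression()
regressor.fit(poly_x, y)

# Plot the data points and the fitted polynomial curve.
plt.scatter(x, y, color='red')
plt.plot(x, regressor.predict(poly.fit_transform(x)), color='blue')
plt.show()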
Polynomial regression
13
polynomial-regression-11bec9262d64
2018-07-12
2018-07-12 05:45:56
https://medium.com/s/story/polynomial-regression-11bec9262d64
false
688
Coinmonks is a technology focused publication embracing all technologies which have powers to shape our future. Education is our core value. Learn, Build and thrive.
null
coinmonks
null
Coinmonks
coinmonks
BITCOIN,TECHNOLOGY,CRYPTOCURRENCY,BLOCKCHAIN,PROGRAMMING
coinmonks
Data Science
data-science
Data Science
33,617
Indresh Bhattacharyya
Machine Learning engineer at Virtusa. Domain: Computer Vision and deep learning.
71899de38f81
indreshbhattacharyya
64
10
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-16
2018-01-16 15:47:20
2017-12-06
2017-12-06 21:07:36
4
false
en
2018-01-16
2018-01-16 15:50:31
5
11bf370c4e9d
5.462264
3
0
0
A new framework for knowing when to deploy an anticipatory service.
3
How to Get Anticipatory Design Right A new framework for knowing when to deploy an anticipatory service. Spotify predicts what songs you’ll like. Amazon’s Dash Button reorders your favorite detergent when pressed. Nest anticipates your ideal room temperature. We’re entering a world of experiences designed to benefit users by making decisions on their behalf. Anticipatory design is the algorithm-powered, user-centric design discipline behind this world, and we’re already seeing products and services successfully leveraging machine learning to infer users’ preferences. In the next stage of anticipatory design, products and services will aim to preempt every want and need. In the morning, as you’re getting ready for work, a voice-activated personal assistant will assess your commute, alert you to train delays, and confirm that road traffic is light before calling you an Uber to get you to the office before your early-morning meeting — automatically, without consulting you, knowing it’s doing the right thing. As you approach the office, your go-to café tracks your location, so it can have your coffee waiting as you walk in. But anticipation will never be perfect, and the smartest algorithms will sometimes be wrong. Why? Because anticipation bases its predictions on routine. The moment life deviates from that norm, the algorithm is put to the test: Adjust in real time, or fail due to lack of contextual understanding. At Huge, we created a framework so designers can determine if, when, and how an anticipatory service can improve their business. The decision to employ it comes down to two factors: the cost of being wrong and the probability of being right. The first half of the framework is made up of questions approaching anticipatory services from the user’s perspective. What a designer knows about a user will determine the likelihood of being right. First of all, designers need to understand whether eliminating choices will actually make the user’s lives easier. Uber, for instance, eliminated three choices from ordering a car service: Instead of asking users when they’d want the car, they assumed that users want it now, and from where they are right now, not another location. Two choices immediately were eliminated by anticipating the most common use case. Similarly, they streamlined payment methods and eliminated the hassle around tipping. All these options that were removed were a burden to begin with, which is why people like using Uber so much — it’s a relief from the paradox of choice. Second, we need to make sure it’s the right user. Consider a mobile phone carrier that wants to send the new iPhone to iPhone users, anticipating they’d want it. The probability of wanting this phone is very high for this group, based on insights about brand loyalty, consuming patterns, and disposable income. It’s a potential win-win for the users and the phone carrier. The same rule, though, absolutely does not apply to Android users, who have an infinite choice of devices and very much appreciate making this choice anew every few years. While the idea of eliminating this choice works for one specific user group, it would harm the brand’s relationship with another. 
While we can take into account most things a user does in digital, we have to acknowledge and anticipate huge blind spots: An anticipatory service that would otherwise be accurate (a car arriving at a customer’s workplace at the same time every day) can’t take into account random occurrences (her decision to walk home because she ate too much pizza at work). In cases like these, safety nets are essential, so that both users and brands take a minimal loss if mistakes are made. Taking this idea further, we sometimes might have to ask: Will attempting to anticipate needs potentially piss off the user? Here, too, we can learn from brands that misstepped. Take Target’s pioneering marketing initiative, which generated coupons based on customers’ past shopping behavior. It worked pretty well until it predicted that a 16-year-old was pregnant and started sending her coupons for baby cribs and maternity clothes. It turned out that a “pregnancy prediction” algorithm announced the news to her parents before she did, much to her father’s dismay. The service definitely pissed off the user and invited skepticism of Target. While our framework can determine how accurate an anticipatory service will be, it also helps designers understand the precautions they will need to take to avoid stepping over the creep line of data privacy. After understanding the user appetite for an anticipatory service, we now need to consider a brand’s investment in anticipatory design. The first question is the most straightforward: How much will an anticipatory service cost? Like any new business initiative, there are two questions to consider: How much will the service cost to set up, and how much will it cost to maintain and operate? If a brand decides the price is worth it, there are more strategic questions to address. If part of your anticipatory service involves sending new products to your users, can they easily return what they don’t want or aren’t ready for? The mobile phone carrier that wants to send the new iPhone to iPhone users needs to ensure that customers can easily send the phone back to the carrier if they don’t want it yet. Similar questions should be asked about sending services to users — if a car service sends a driver to pick up the same person after work at the same time every day, can the car be easily “redeployed” if that rider decides to walk home instead? The danger of being wrong is relative to each business and its customers. For brands in certain industries, anticipatory design is a high-risk proposition. In finance and health, for example, the cost of being wrong is much higher than for, say, the entertainment industry. Users are much more forgiving of bad Netflix autoplay than a health service making assumptions about sensitive medical issues, in which one oversight could potentially harm someone’s well-being. Understanding what your users need and expect from you is key to a business determining what risks to take and how willing, and able, they are to forgive mistakes. And “danger” can mean something other than the risk to a customer’s health; it can mean the erosion of brand equity and the respect a company has built over time with its users. In 2014, Uber’s surge pricing, triggered by an algorithm taught to hike rates in response to increased demand, kicked in during a hostage crisis in Australia. Uber’s machine was just following its rules — it just didn’t understand the rules had suddenly changed. 
The reality is that no matter how great a designer’s approach, she won’t always make the right decision for users. That’s because humans do unexpected things. Plans will change and daily routines will never quite be the same, so for designers, figuring out how to be wrong is just as important as being right. As the natural evolution of automated and predictive services, anticipatory design has become so normal we’ve come to expect it — we expect services to “know” us and to learn our habits. The framework we’ve created is a starting point to determine how valuable an anticipatory service could be in making people’s lives easier. Because in the end, technology’s true promise is not to demand more of our attention, but to give us more of the thing that’s become most valuable to us, and that’s time. Originally published at Hugeinc.com in September 2016. Re-published at magenta.as in December 2017.
How to Get Anticipatory Design Right
10
how-to-get-anticipatory-design-right-11bf370c4e9d
2018-06-20
2018-06-20 01:02:09
https://medium.com/s/story/how-to-get-anticipatory-design-right-11bf370c4e9d
false
1,262
null
null
null
null
null
null
null
null
null
Anticipatory Design
anticipatory-design
Anticipatory Design
21
Sophie Kleber
ECD, Global Product & Innovation at Huge Inc.
72a7e066519e
Bibilassi
241
2
20,181,104
null
null
null
null
null
null
0
null
0
155bcdc5dbc9
2018-05-13
2018-05-13 04:34:20
2018-05-14
2018-05-14 05:22:47
2
true
en
2018-09-29
2018-09-29 05:35:24
11
11bf6b5b9d8
10.488994
55
7
0
Automating industrial labour is the key to unlocking Star Trek levels of prosperity. In a recent interview, Scott Phoenix, the co-founder…
5
Photo by Mirko Tobias Schäfer Tesla wants to use AI to create a “manufacturing revolution” Automating industrial labour is the key to unlocking Star Trek levels of prosperity. In a recent interview, Scott Phoenix, the co-founder of the AI/robotics startup Vicarious, paraphrased a client’s observation about industrial labour: There’s no such thing as material costs. There’s only labour costs. The material is just stuff in the ground that you need to pull out with labour, and move around with labour. When raw materials are abundant and accessible in the Earth’s crust, the cost of the materials is just the cost of mining them. Automating mining can, in theory, bring down the cost of raw materials. So too with manufacturing: the process of transforming raw materials into a finished product. The cost of manufacturing is the cost of labour: human labour and the “labour” of machines, and the indirect human labour required to support both. Automating manufacturing can, in theory, bring down the cost of finished goods. After manufacturing comes freight transport, warehousing, and delivery. Autonomous freight trucks, warehouse robots, and autonomous delivery vans, drones, and robots can automate these processes too. The whole sequence from raw materials in the ground to a finished product in the customer’s hand could, in theory, be fully automated. So, over the long term, the cost of finished goods depends largely on progress in AI and robotics. If we want to live in a world of universal material prosperity like Star Trek, AI and robotics is how we’re going to get there. Jobs will become obsolete, sure. But it was only 230 years ago that 90% of the U.S. workforce were farmers. Either new jobs will be created to replace the old, as has happened in the past, or they won’t. If we run out jobs permanently, then we’ll need a policy response. Perhaps a universal basic income based on the cost of a comfortable life, or based on a share of a country’s GDP. This would give each person the freedom to pursue whatever they want in life — whether that’s competitive gaming, a life of prayer or meditation, making music, or starting a company. I think this is a good outcome. That’s a society I want to live in. Another idea is to create massive government programs that employ people in jobs with a social good, but no short-term profitability. Hundreds of millions of people worldwide could be employed in science, philosophy, and art. Public service, education, medicine, and social work could draw on an expansive labour pool. The more labour is freed up from mining, manufacturing, freight transport, warehousing, and delivery, the more labour that we can consume in the form of service to our communities, in the form of creativity and research, and as therapy, care, and healing. So, here are two visions of a highly automated future without jobs: one with individual freedom and one with government paying workers to perform socially beneficial tasks. These two ideas can be mixed together, too. We could have a universal basic income and government jobs that provide extra income. There’s also a highly automated future with jobs: one where new needs and wants for human heads and hands rush in to replace the old ones once they’re satisfied by machines. This is a good future, too. It’s an organic, market-driven version of the government-driven ideas above. Rather than seeing automation as a threat, we should see it as a beautiful, exciting opportunity to increase the economic prosperity of human civilization. 
We should prepare a policy response in case jobs start disappearing and aren’t replaced, or if the newly unemployed need extra help making it from obsoleted jobs to new jobs. But we should understand that, with the right policy response, automation is a good thing. It can enable us to live in a world where we are free to pursue meaning, purpose, creativity, imagination, possibility, passion, spirit, and love — not just economic self-sustenance. Automation has been misportrayed as an emissary of economic strife. Really, automation is the cure for economic strife. Perhaps recent political paralysis and dysfunction in the U.S. has left Americans feeling hopeless about government’s ability to deploy an effective policy response to any crisis. And the U.S. has long had an allergy to major redistributions of wealth, such as universal healthcare. Maybe that’s why so many Americans despair about automation. But this is a problem with U.S. government, and U.S. government specifically (as opposed to say, Canadian government), not a problem inherent to labour automation. U.S. government vetocracy will create panic and despair anytime there is a crisis to deal with — an opioid crisis, a climate crisis, or a labour crisis. Americans, don’t lay your structural political problems at the feet of robots. I’m going to zoom in from this big picture, aerial view of automation down to one particular product: the Tesla Model Y. The Model Y is scheduled for production sometime in 2020, and Tesla CEO Elon Musk recently expressed his intention to make the Model Y production system a “manufacturing revolution”. Elon has been going back and forth on how much to simply copy the existing manufacturing process for the Model 3 versus trying something new, untested, and ambitious. For now at least, it sounds like Elon wants to do the latter. Details are supposed to be announced later this year when the Model Y is unveiled. From a business standpoint, there is a good argument to be made for either choice. Copying the Model 3 production system would presumably minimize delays, an unforeseen run-up in costs, and the risk that the new system just won’t work. It would, in theory, allow Tesla to quickly and assuredly launch its crossover SUV version of the super popular Model 3 sedan. Since crossovers are more popular than sedans, it stands to reason that the Model Y will be even more popular than the Model 3. On the other hand, innovation in manufacturing automation can reduce costs, increase production speed, and provide an avenue of sustainable competitive advantage for Tesla. By fusing its competence in car manufacturing and its competence in AI, Tesla can create a combination that is unique in the world: a car factory designed by a Silicon Valley AI company. This is so much more exciting to me than getting the Model Y to market sooner, more cheaply, and with less technology risk. It’s more exciting to me both as a Tesla investor, and as a human being living in the post-Industrial Revolution, pre-Star Trek era of our civilization. I don’t care if the Model Y takes two extra years to make. If Tesla can use innovations in AI and robotics to make the Model Y production system a “manufacturing revolution”, it’s worth it. This isn’t just important for the company, or for the auto industry. It’s important for humanity. Tesla has served as the proof of concept for electric cars, and in so doing it has catalyzed the whole auto industry to transition from gasoline to electric propulsion. 
A proof of concept for a new level of factory automation would probably have a similar catalyzing effect. Manufacturers across the world, throughout industries, would want to emulate Tesla. Why should we believe this dream is possible? That’s a fair question. It might not be. There is no guarantee it will work. But without risk, there is no innovation. Some people argue that it is foolish to even try for two reasons. The first reason is that a new level of factory automation was already tried by GM and it failed abjectly. The second reason is that, supposedly, folks in the auto industry do not think it is possible—perhaps for the first reason. The first argument is not credible, in my opinion. GM tried fully automating manual or semi-automated production processes in the 1980s. That’s ancient history in the timeline of AI and robotics. The technologies used back then are not the technologies used today. This case study is as irrelevant as the observation that you can’t go to space with a steam engine. The failure of steam engine-based space travel in the Victorian era would tell you nothing about the feasibility of the Apollo program. GM’s failure in the 1980s is not instructive as to Tesla’s chances of success in the 2020s. Deep learning only gained prominence in 2012, and only as recently as 2015 outperformed the human benchmark on the ImageNet Challenge for image classification. The advancements that embolden AI and robotics proponents today are all very recent. We should split AI into two eras, like we split history into B.C. and A.D.: there should be the pre-deep learning era prior to 2012, and the deep learning era of 2012 onward. This helps disambiguate all the various things people mean when they say “AI”. To illustrate the difference, see how much object detection has changed from the pre-deep learning era to the deep learning era: Image courtesy of ARK Invest’s report on deep learning This is black and white, night and day — B.C. and A.D. Deep learning is a new technological paradigm. This is not the 80s. It’s not even the 2000s. Predicting what is possible in the 2020s based on obsolete technologies from thirty years earlier just doesn’t make sense. The second argument — that auto industry execs and experts don’t believe a step change automation is possible — is more credible to me because I’m inclined to respect expertise, but it’s suspect for a few reasons. First of all, if the reason they believe this is because GM tried in the 80s and failed, then their reasoning doesn’t make sense. Second, incumbents in the auto industry have been wrong before about what’s possible. The industry was far too pessimistic about electric vehicles. Coincidentally, this is an area where GM also tried and failed. The EV1 was scrapped — literally — by GM. It too was based on an older technological paradigm: first lead-acid batteries, then nickel-metal hydride batteries. Modern electric cars use lithium-ion batteries. Third, auto industry incumbents aren’t well-placed to evaluate the feasibility of deep learning-powered factory automation. They understand factories, but not deep learning. Similarly, deep learning experts who have never stepped inside a car factory might not be well-positioned to assess whether factory tasks can be automated. To make an informed assessment of whether deep learning is capable of performing a task, you need to understand both the capabilities of deep learning and the complexity of the task. 
Tesla is uniquely well-positioned to make that assessment: it both manufactures cars and develops deep learning-based software products. It understands the complexity of making cars, and it’s already using deep learning for complex driving tasks, so it has evolving, first-hand knowledge of its capabilities and limitations. Other companies have deep learning expertise in-house, but it isn’t integrated with manufacturing expertise at the management level. For instance, unlike GM, which acquired its AI talent, Tesla grew its AI division organically. Unlike GM and other car companies, Tesla’s CEO comes from a software background, and has a keen personal interest in AI. Elon co-founded OpenAI, the non-profit from which Tesla recruited its Director of AI, Andrej Karpathy. Tesla is also physically based in Silicon Valley, and it’s one of the most popular companies for software engineers, along with its informal collaborator SpaceX. No other car company can claim to be an AI company, or even a software company, the way Tesla can. If you’re skeptical about the potential for breakthroughs in factory automation, consider this: do you think self-driving cars are feasible? If so, wouldn’t you agree that recent advances in AI have enabled self-driving cars to exist? Why not, then, apply the same advances in AI to factory robots? The tasks required of a factory worker in many cases have far more sensorimotor complexity than driving. However, Tesla’s goal is not to automate every task at once, but to eventually achieve full automation aftering iterating upon several versions of its factories. Elon predicts that Tesla will fully automate driving by the end of 2019, and recently said Model Y production is planned for 2020. Based on Elon’s previous remarks, I’ve surmised that the Model Y production is not intended to be fully automated. So regardless of whether Elon’s prediction is true, this means that in his mind fully automated driving will be achieved years before a fully automated factory. One advantage the factory environment has over the driving environment is that it’s clean, predictable, and controlled. Roads are messy, uncertain, and filled with agents outside an autonomous vehicle’s control. This allows factory robots to reliably cope with more sensorimotor complexity than autonomous cars. Inside a car factory, the lighting and weather conditions are always ideal. The work area is always clear of random people, animals, cars, and objects. A higher error rate is probably also acceptable because for a factory robot the consequences of a mistake aren’t deadly. So, full factory automation will: Possibly take years longer to achieve than fully autonomous driving Take advantage of a predictable, controlled, and optimized environment, unlike autonomous driving Probably have a higher tolerance for errors than autonomous driving If you think fully autonomous driving is on the horizon, these three considerations should help make full factory automation feel like a worthy pursuit. Moreover, even if full factory automation remains out of reach for a long time, every incremental advance in factory automation is helpful. Here’s a final argument in favour of the possibility of a “manufacturing revolution”. I recently stumbled upon a glimmer of a fundamental breakthrough in AI. The AI/robotics startup Vicarious — in which Elon Musk is an investor, along with Jeff Bezos and Mark Zuckerberg — developed a new neural network architecture it calls the Recursive Cortical Network (RCN). 
Vicarious published a paper in the journal Science showing the power of its RCN to solve CAPTCHAs. (It also described its findings in a blog post.) The RCN matched the performance of the conventional deep learning approach, but achieved an astounding 9000x improvement in training data efficiency. With just 260 training examples, the RCN was able to achieve the accuracy that a conventional deep neural network achieved after 2.3 million examples. What’s more, the RCN was also more generalizable. When the spaces between the letters in the CAPTCHAs were increased, the accuracy of the conventional deep neural net fell off. The accuracy of the RCN actually increased. Vicarious hasn’t published any research showing that its RCN can be adapted to computer vision tasks in a 3D environment like a car factory. However, Vicarious’ stated mission as a company is to develop AI for robots, especially factory robots. So, I have to assume that developing new neural net architectures (like the RCN) for that purpose is a top focus of Vicarious’ research and development. If this kind of breakthrough in data efficiency could be applied to 3D computer vision tasks, it could open up new possibilities in factory automation. A factory robot could learn to recognize an object, like a car part, based on seeing perhaps a few hundred examples, rather than a few million. A current weakness of deep learning is that it requires big data to develop pattern recognition abilities. Eliminating that need for big data would open up a whole new wave of applications, including physical world applications where big data can’t be obtained. That’s the kind of breakthrough we might see in AI and robotics. If not now, then maybe sometime in the early 2020s, as Tesla’s factory automation efforts get underway. Perhaps Vicarious has already confirmed internally that its RCN enables a new level of data efficiency in 3D computer vision for robotics. Or perhaps it has developed a variant of the RCN that does so. Since Elon is an investor, maybe Tesla will partner with Vicarious on the Model Y production system. This is just speculation. But interesting speculation, I think. Disclosure: I own shares of Tesla. Disclaimer: This is not investment advice.
Tesla wants to use AI to create a “manufacturing revolution”
444
tesla-wants-to-use-ai-to-create-a-manufacturing-revolution-11bf6b5b9d8
2018-09-29
2018-09-29 05:35:24
https://medium.com/s/story/tesla-wants-to-use-ai-to-create-a-manufacturing-revolution-11bf6b5b9d8
false
2,678
A protopia is a world (neither a utopia or a dystopia) that is getting better step by step. This blog is about the technologies that can allow us to live in protopia.
null
null
null
Protopia
null
protopiablog
TECHNOLOGY,FUTURE,FUTURISM,SELF DRIVING CARS,AUTONOMOUS CARS
null
Tesla
tesla
Tesla
6,257
Trent Eady
Just trying to figure it all out, man. Thinking about self-driving cars, the cosmic mystery, and the future of consciousness as biology and technology merge.
39e3969bd5b5
trenteady
728
155
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-02
2017-09-02 18:05:27
2017-09-02
2017-09-02 18:09:18
1
false
en
2017-09-02
2017-09-02 18:09:18
0
11c0399b1bae
1.203774
0
0
0
“Computational design is a different kind of design with a different kind of value approach to how things are made and the idea of…
4
Computational design “Computational design is a different kind of design with a different kind of value approach to how things are made and the idea of perfection doesn’t exist in the same way … you’re constantly adapting to peoples’ needs” — John Maeda, Automattic Today on the High Resolution podcast, I heard the awesome term “computational design” for the first time. In the interview, John Maeda differentiates between three kinds of design: Classical design — the kind of work that goes into making a table or a pair of glasses Design thinking — design applied to organisations with large teams to solve large problems Computational design — design involving data, continuously shifting and augmenting for the senses According to him, this is no wondrous new process but one which he has seen in practice since the ’90s and particularly leveraged in Silicon Valley. By “the idea of perfection doesn’t exist”, he does not mean that good work itself does not exist but that in computational design, there is never a finished artefact or product, because people’s needs are always changing and computational design takes this into serious account by leveraging the power of big data to glean key insights and act upon them. I found this idea particularly fascinating because the buzz phrase one hears everywhere at design meetups these days is that of “design thinking”. However, it seems there is clearly a new kid on the block, which I look forward to learning more about and seeing whether it works with design thinking or if it replaces it as a whole new process in itself.
Computational design
0
computational-design-11c0399b1bae
2017-12-05
2017-12-05 22:32:19
https://medium.com/s/story/computational-design-11c0399b1bae
false
266
null
null
null
null
null
null
null
null
null
Design
design
Design
186,228
Avi Mair
At the intersection of UX and business
a3afca5af9e0
avimair
26
42
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-18
2018-07-18 13:08:34
2018-07-18
2018-07-18 13:11:58
1
false
en
2018-07-18
2018-07-18 13:11:58
1
11c09a604bd
1.169811
0
0
0
According to research by Gartner, investment in Artificial Intelligence in 2017 will be more than 300% higher than in the previous year. With…
4
How Artificial Intelligence used in Mobile App Development? According to research by Gartner, investment in Artificial Intelligence in 2017 will be more than 300% higher than in the previous year. With advanced analytics and machine learning techniques, AI is providing powerful insights as never before. Artificial intelligence (AI) is intelligence exhibited by machines. In computer science, an ideal “intelligent” machine is a flexible rational agent that perceives its environment and takes actions that maximize its chance of success at some goal. Whether you are late to the game or have had apps in the app stores for a while, you may not realize how powerful adding artificial intelligence to your mobile apps can be. There are 2 main components that drive massive user adoption. The first is providing something amazingly useful, and that usually means it’s smart. So that requires machine learning algorithms. The second is viral share. If you don’t make it easy to share, they simply won’t do it. Nearly every app nowadays somehow uses artificial intelligence. Here are some of the most popular mobile apps which use Artificial Intelligence technology. Facebook (face recognition for tagging) Flipkart (use photos to search items) Google Image search Open CV Siri (voice assistance) Google Allo There is more artificial intelligence applied than we think. The accuracy of Facebook’s face recognition is about the same as a human’s. DeepFace can look at two photos, and irrespective of lighting or angle, can say with 97.25% accuracy whether the photos contain the same face. Humans can perform the same task with 97.53% accuracy.
How Artificial Intelligence used in Mobile App Development?
0
how-artificial-intelligence-used-in-mobile-app-development-11c09a604bd
2018-07-18
2018-07-18 13:11:58
https://medium.com/s/story/how-artificial-intelligence-used-in-mobile-app-development-11c09a604bd
false
257
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Lakshmi Narayanasamy
Digital Marketing Specialist
7947c6a7bd0f
LakshmiNryn
44
971
20,181,104
null
null
null
null
null
null
0
import torch
import torch.nn as nn
import torch.nn.functional as F
# V and VV below are the Variable helpers from the old fastai (0.7) library
# that this lesson is based on; run this inside that course environment.


def conv(in_f, out_f, kernel_size=3, stride=1, actn=True, pad=None, bn=True):
    # Conv-ReLU-BN block; the default padding keeps the spatial size unchanged.
    if pad is None: pad = kernel_size // 2
    layers = [nn.Conv2d(in_f, out_f, kernel_size, stride, pad, bias=not bn)]
    if actn: layers.append(nn.ReLU(inplace=True))
    if bn: layers.append(nn.BatchNorm2d(out_f))
    return nn.Sequential(*layers)


class ResSequentialCenter(nn.Module):
    # Residual connection that centre-crops the input to match the unpadded conv output.
    def __init__(self, layers):
        super().__init__()
        self.m = nn.Sequential(*layers)

    def forward(self, x): return x[:, :, 2:-2, 2:-2] + self.m(x)


def res_block(num_f):
    return ResSequentialCenter([conv(num_f, num_f, pad=0), conv(num_f, num_f, pad=0)])


def upsample(in_f, out_f):
    return nn.Sequential(nn.Upsample(scale_factor=2), conv(in_f, out_f))


class StyleResnet(nn.Module):
    def __init__(self):
        super().__init__()
        layers = [nn.ReflectionPad2d(40),
                  nn.Conv2d(3, 32, 9),
                  nn.Conv2d(32, 64, 3, stride=2),
                  nn.Conv2d(64, 128, 3, stride=2)]
        for i in range(5): layers.append(res_block(128))
        layers += [upsample(128, 64), upsample(64, 32), conv(32, 3, 9, actn=False)]
        self.features = nn.Sequential(*layers)

    def forward(self, x): return self.features(x)


class SaveFeatures():
    # Forward hook that stores a layer's output for use in the loss.
    features = None
    def __init__(self, m): self.hook = m.register_forward_hook(self.hook_fn)
    def hook_fn(self, module, input, output): self.features = output
    def close(self): self.hook.remove()


def ct_loss(input, targ): return F.mse_loss(input, targ)


def gram(x):
    b, c, h, w = x.size()
    x = x.view(b, c, -1)
    return torch.bmm(x, x.transpose(1, 2)) / (c * h * w) * 1e6


def gram_loss(input, targ):
    return F.mse_loss(gram(input), gram(targ[:input.size(0)]))


class CombinedLoss(nn.Module):
    def __init__(self, m, layer_ids, style_im, ct_weight, style_weights):
        super().__init__()
        self.m, self.ct_weight, self.style_weights = m, ct_weight, style_weights
        # One hook per style layer.
        self.sfs = [SaveFeatures(self.m[i]) for i in layer_ids]
        m(VV(style_im))
        self.style_feats = [V(sf.features.data.clone()) for sf in self.sfs]

    def forward(self, input, targ):
        self.m(VV(targ))
        targ_feat = self.sfs[2].features.data.clone()
        self.m(input)
        inp_feats = [sf.features for sf in self.sfs]
        content_loss = [ct_loss(inp_feats[2], V(targ_feat)) * self.ct_weight]
        style_loss = [gram_loss(inp, sty) * weight
                      for inp, sty, weight in zip(inp_feats, self.style_feats, self.style_weights)]
        return sum(content_loss + style_loss)
14
null
2018-08-11
2018-08-11 15:13:42
2018-08-16
2018-08-16 08:49:14
0
false
en
2018-08-16
2018-08-16 08:49:14
0
11c1720e15c4
2.588679
0
0
0
Create Conv-ReLU-BN block which pads input tesnor such that output tensor of the block will have the same shape (height/width) as the input…
2
Fast.ai Style Transfer Net Model. The conv function creates a Conv-ReLU-BN block which pads the input tensor so that the output tensor of the block has the same shape (height/width) as the input tensor. The ResSequentialCenter module takes an array of layers (conv-relu-bn blocks) and wraps them in a Sequential block; in its forward function it returns the centre-cropped input tensor (without padding) plus the output of the sequential block applied to the input tensor. The residual block creates a ResSequentialCenter module with 2 conv-relu-bn blocks. The upsample block creates a Sequential module of an Upsample layer followed by a conv-relu-bn block. The StyleResnet module initializes a Sequential block that starts with a reflection padding layer of 40, a 32-channel Conv layer with kernel size 9 and 2 Conv layers of kernel size 3, both with stride 2 (each halving the spatial size of the tensor). 5 128-channel residual blocks are then added, ending with 2 upsample blocks and a conv block of kernel size 9. The forward function returns the output of the Sequential block for the input tensor. A SaveFeatures object saves the output of a given layer to be used in the loss function. Content loss returns the Mean Squared Error loss between the input and the target. The gram function returns the Gram matrix of the input (matrix multiplication of the flattened tensor with itself, followed by division by the number of elements to find the average value). Gram loss returns the Mean Squared Error loss between the Gram matrix of the input tensor and the Gram matrix of the target. CombinedLoss parameters: m (model used to compute style/content features), layer_ids (ids of style layers), style_im (target image to obtain the style from), ct_weight (weight of the content loss), style_weights (weights of the loss from each layer). The style features of the target style image are stored to be used in the style loss. In the forward function, features are obtained from the content image (features from the 2nd hooked layer of the model) and from the input tensor (features from each SaveFeatures object of the model). Content loss is calculated using the ct_loss function with the 2nd input feature and the content image features as inputs, multiplied by a weight. Style loss is calculated by summing the gram loss of each input feature and style feature, multiplied by a weight. SUMMARY The model is forced to produce an image that yields content features similar to the training image and style features similar to the style target image, resulting in a model that can take an image from the training set and produce an image with similar content to the training image but in the style of the target style image.
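To make the Gram matrix step above concrete, here is a minimal, self-contained sketch in plain PyTorch (no fastai wrappers); the random tensor simply stands in for real feature-map activations, and the shape check mirrors what the gram function in the code block is doing.

import torch

# For a batch of feature maps of shape (b, c, h, w), the Gram matrix is the
# (c x c) matrix of channel-to-channel dot products, averaged over positions.
def gram(x):
    b, c, h, w = x.size()
    x = x.view(b, c, -1)                      # flatten the spatial dimensions
    return torch.bmm(x, x.transpose(1, 2)) / (c * h * w)

feats = torch.randn(2, 64, 32, 32)            # stand-in for VGG-style activations
print(gram(feats).shape)                      # torch.Size([2, 64, 64])

Because the Gram matrix discards spatial positions and keeps only channel co-activations, matching it between two images matches texture rather than layout, which is exactly why it is used for the style term.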
Fast.ai Style Transfer Net Model
0
fast-ai-style-transfer-net-model-11c1720e15c4
2018-08-16
2018-08-16 08:49:14
https://medium.com/s/story/fast-ai-style-transfer-net-model-11c1720e15c4
false
686
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Ryan Aidan
null
22d6e2e4e0f4
aidanaden
5
2
20,181,104
null
null
null
null
null
null
0
git clone https://github.com/IbrahimTareq/tensorflow-for-poets-2.git
cd tensorflow-for-poets-2

:: Windows: configure the network, then retrain on your images
set IMAGE_SIZE=224
set ARCHITECTURE=mobilenet_0.50_%IMAGE_SIZE%
python -m scripts.retrain --bottleneck_dir=tf_files/bottlenecks --model_dir=tf_files/models/"%ARCHITECTURE%" --summaries_dir=tf_files/training_summaries/"%ARCHITECTURE%" --output_graph=tf_files/retrained_graph.pb --output_labels=tf_files/retrained_labels.txt --architecture="%ARCHITECTURE%" --image_dir=tf_files/xyz

# MacOS: same configuration and retraining step
IMAGE_SIZE=224
ARCHITECTURE="mobilenet_0.50_${IMAGE_SIZE}"
python -m scripts.retrain \
  --bottleneck_dir=tf_files/bottlenecks \
  --model_dir=tf_files/models/"${ARCHITECTURE}" \
  --summaries_dir=tf_files/training_summaries/"${ARCHITECTURE}" \
  --output_graph=tf_files/retrained_graph.pb \
  --output_labels=tf_files/retrained_labels.txt \
  --architecture="${ARCHITECTURE}" \
  --image_dir=tf_files/xyz

:: Windows: classify a new image with the retrained graph
python -m scripts.label_image --graph=tf_files/retrained_graph.pb --image=PATH_TO_IMAGE

# MacOS: classify a new image with the retrained graph
python -m scripts.label_image \
  --graph=tf_files/retrained_graph.pb \
  --image=PATH_TO_IMAGE
7
null
2018-05-13
2018-05-13 11:48:29
2018-05-14
2018-05-14 04:30:27
7
false
en
2018-05-14
2018-05-14 04:44:21
8
11c1bede3747
4.125472
2
0
0
You may have stumbled across a video of the Google assistant placing a call to a hair salon and having an unnaturally natural conversation…
5
Train an image classifier using TensorFlow You may have stumbled across a video of the Google assistant placing a call to a hair salon and having an unnaturally natural conversation with the receptionist. If you haven’t, take a look: Notice how it dropped in a super casual “mmhmmm” early in the conversation. Sounds eerie, doesn’t it? The technology that enabled this jaw-dropping new capability is called Google Duplex. It really feels like next-level AI stuff, so let’s take a sneak peek at what’s happening behind the scenes. Google Duplex employs a machine learning platform called TensorFlow Extended. What the heck is TensorFlow? TensorFlow is an open source framework that was released by Google. It makes it easier for developers to design, build, and train deep learning models. Simply put, deep learning is an approach to machine learning that facilitates the machine’s learning process. Deep learning imitates the function of the brain; to be specific, it mimics the neural system. Sounds super cool, doesn’t it? Well, over the weekend, I built myself an image classifier using the same technology that Google Duplex uses and I’d like to show you guys how it’s done so we can all have a better understanding of how machine learning actually works and, of course, to feel cool. Before we jump right in, let’s understand what we’re actually going to create. We’re going to train a simple classifier to classify an image of any breed of cat (yes, any) with insanely high probabilities, but for the sake of making this tutorial simple, we’ll be using selected breeds. To keep it short and sweet, we’ll be using a model that is already trained on the ImageNet Large Visual Recognition Challenge dataset and some pre-written scripts courtesy of the awesome developers at Google. Step 1 — Install Python For Windows, choose either Python 3.5.x 64-bit or Python 3.6.x 64-bit For MacOS, choose either Python 2.7.x or Python 3.6.x Step 2 — Install TensorFlow Fire up the command line and run the command: For Windows — pip3 install --upgrade tensorflow For MacOS — pip install tensorflow If that didn’t work, reference the installation page here. Step 3 — Clone the repository All the code used is contained in this git repository. Clone the repository and cd into it. This is where we will be working. Step 5 — Add the training data Before you start any training, you’ll need a set of images to teach the model about the new classes you want to recognize. I’ve created an archive of photos of different breeds of cats to use. You can download it from here. Once downloaded, extract it and copy-paste the folder inside the tf_files folder. You can create and add your own collection, just make sure the classes are separated by folders. Your images (training data) will be inside each folder Step 6 — Run the training First, we’re going to add some configuration changes to the network and then we’re going to run the training script. Fire up the command line from the root directory of your project and run the commands: For Windows: And then: Replace the xyz with the name of the training images data folder. For MacOS: And then: Replace the xyz with the name of the training images data folder. Bear in mind that this step will take a while. The script will download the pre-trained model, add a new final layer, and train that layer on the photos you’ve added. Step 7 — Using the re-trained model Once the training is complete, we can test it out to see how well it performs. Download a photo of a cat and save it on your desktop. 
Run the following command from the root directory of your project to find out what breed the cat belongs to: For Windows: For MacOS: Replace PATH_TO_IMAGE with the name of the image you downloaded. For me, this was C:\Users\IbrahimTareq\Desktop\cat.jpg Step 8 — Results These are some of the pictures I used along with the results spat out by the image classifier. Test Run 1 — Bengal Cat Test Run 2 — Persian Cat Wrapping Up Well done — you’ve managed to build an image classifier, and all it took was some reading and the installation of a few dependencies. You can take a look at the code over here. Play around with it and perhaps, you can try and build something awesome as a side-project. Thanks for reading! I hope you enjoyed reading this and learnt something new. If you liked it, hit the clap below so other people will see this here on Medium.
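If you prefer to run the retrained graph from your own Python code rather than through the label_image script, a rough sketch along the following lines should work with TensorFlow 1.x; note that the tensor names 'input' and 'final_result' and the 224x224 MobileNet preprocessing are assumptions based on what the retrain script typically produces for this architecture, so adjust them if your graph differs.

import numpy as np
import tensorflow as tf           # TensorFlow 1.x, as used by the tutorial scripts
from PIL import Image

# Load the frozen graph produced in Step 6.
with tf.gfile.GFile('tf_files/retrained_graph.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name='')

# Labels written by the retrain script, one class per line.
labels = [line.strip() for line in open('tf_files/retrained_labels.txt')]

# Resize and scale the image the way the MobileNet input expects (assumed).
img = Image.open('cat.jpg').convert('RGB').resize((224, 224))
arr = (np.asarray(img, dtype=np.float32) - 128.0) / 128.0
arr = np.expand_dims(arr, 0)

with tf.Session(graph=graph) as sess:
    probs = sess.run(graph.get_tensor_by_name('final_result:0'),
                     {graph.get_tensor_by_name('input:0'): arr})[0]

# Print the top three breeds with their probabilities.
for i in np.argsort(probs)[::-1][:3]:
    print(labels[i], probs[i])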
Train an image classifier using TensorFlow
5
train-an-image-classifier-using-tensorflow-11c1bede3747
2018-05-21
2018-05-21 05:26:19
https://medium.com/s/story/train-an-image-classifier-using-tensorflow-11c1bede3747
false
815
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Ibrahim Tareq
Developer Evangelist
69cbd8c3eb16
ibrahimtareq
4
2
20,181,104
null
null
null
null
null
null
0
null
0
505aa8302590
2018-04-02
2018-04-02 14:19:58
2018-04-04
2018-04-04 14:11:02
3
false
en
2018-04-04
2018-04-04 14:11:02
14
11c3cf813df2
5.116038
31
2
0
Having experienced first-hand the wonders of modern medicine that are keeping us alive for longer, my mum is taking the time to explore the…
5
My Mum is Becoming a Data Scientist at 54 Having experienced first-hand the wonders of modern medicine that are keeping us alive for longer, my mum is taking the time to explore the opportunities her life has to offer. Still smiling — Juan, Nick, my Mum and a cuddly sheep At the age of 54 my mum is training to become a Data Scientist. Voted sexiest job of the 21st century, it’s a career option that didn’t even exist when she first entered the job market. The term itself was first coined in 2008 by DJ Patil and Jeff Hammerbacher, then the respective leads of data and analytics efforts at LinkedIn and Facebook. Tenacious and driven, my mum enjoys embracing new challenges. Between raising two boys and gaining a PhD, she has spent the past 25 years as a maths teacher. Last year she ventured into the world of entrepreneurship by launching her own sports massage product, Wellbrix. She describes this experience as opening her eyes to jobs she never knew existed. Careers, like Data Science, that could provide ways for her to apply her mathematical ability to new challenges whilst earning more than she could as a teacher. But prior to undertaking the necessary steps in making this career change, she encountered her own serious setback. Just a few days before the application deadline for a Data Science course at General Assembly, she was hit by a car when riding her bike. I received the news of my mum’s accident in Edinburgh airport, waiting to fly back to London. I arrived at the hospital the next day to find my fearless mother mummified in bandages, her head supported by a neck brace. She had suffered breaks to both arms and a fracture to the neck, millimetres away from the spinal cord. And yet, through the pain, she was able to muster a smile and greet me as I arrived. Unbelievably, she was also determined to submit her application to the Data Science course - the deadline was just one day away. Despite every effort from myself and Nick, my mum’s partner, to convince her to rest, she remained determined. Mum and Nick working on the General Assembly Data Science Application Even as she was wheeled away for an operation on her broken arm, Nick and I were tasked with preparing her application for submission. Depending on the amount of morphine administered, she would try and inspect our work upon return. Now three months on, Mum is making a steady recovery, working towards the General Assembly Data Science course that starts in June. Metal plates are supporting a number of her limbs, but with yoga and daily physiotherapy, her strength and flexibility are returning. Psychologically she has suffered too, but with pain has come a renewed focus to make this change in her life. As a walking cyborg she may now be even better suited to a future-forward career in Data Science. Nearly 10 years away from the UK retirement age, it may be hard to grasp the reasons behind a career move of this magnitude this ‘late in the game’. But the ‘game’ is no longer what it once was, as the current three-stage life of education, career and retirement is being replaced by a multistage life. A life which rewards flexibility, demands continuous learning and remains agnostic to your age. Age is no longer stage. Image Credit: The 100-Year Life, Living and Working in an Age of Longevity With the wonders of modern medicine, the length of life, and with it time to explore opportunities, is ever increasing. 
Born in 1964, my mum can expect to live for 91 years (according to the Human Mortality Database, Berkeley) so long as she avoids dangerous and elderly drivers. A staggering 50% of babies born in the UK in 2007 can expect to still be alive when they reach their 104th birthdays. As the world continues to move at such a fast pace, the need to adapt is becoming all the more critical. As Lynda Gratton and Andrew Scott write in The 100 Year Life, “If you are in your forties, fifties or sixties then you need to reconsider your future and think about how you will reinvest in the second half of your life. Failure to innovate in response to a longer life will mean stresses and strains in your life as existing models are stretched uncomfortably over 100 years.” Increasingly, people of all ages are embracing the opportunities of a multifaceted career and are taking steps to learn new skills and move in different directions, regardless of what stage of life they’re at. Having spent 31 years at the Financial Times as a columnist and associate editor, Lucy Kellaway made the decision last year to leave the newspaper to become a maths teacher in a London secondary school. Kellaway has co-founded Now Teach, a scheme designed to help older professionals move into teaching. In its first year the scheme received over 1000 applications, with 45 places being awarded to an eclectic mix of professionals, from marines to diplomats, film-makers to athletes. Their ages range from the youngest at 42 to the oldest, 67. Some applicants were disaffected with the corporate world, but not all. One applicant, who had previously spent 30 years working in the City, said, “I always loved my work. But I thought, how much time do I have left on the planet? Do I want to go on and on doing the same thing?” There is an appetite for change and the professional role models of the future are beginning to facilitate this. Currently, it may feel like a luxury to change career, a choice reserved for the fortunate few who have saved enough or lack the responsibilities of a young family, for instance. But as the job market changes radically over the next few years, many jobs will disappear as new jobs, like Data Scientist, come into existence. Some may fear that we lack the necessary role models to help guide us forwards in this new multistage life. The career decisions that worked for previous generations won’t work for us. But new role models are emerging; my mum is one, Lucy Kellaway another. It is possible to change career later in life, and soon it may become a necessity. Companies need to be prepared for this change by providing their employees with skills for the future, whilst finding new ways to define loyalty — not only restricting the definition to length of service. Attitudes will need to shift somewhat in order to ensure that ageism in the workplace does not prevent career changes later in life. Companies and employees alike must embrace the opportunity to hire older people into more junior positions. Diversity in the workforce is of greater importance than ever before, and this includes diversity of age and experience. My mum shows that with willpower and determination, you can make changes to your career, regardless of your age. And now that we’ll be living much longer lives — well, it’s worth taking the time to find the thing you like and will thrive at into the future. Even if that job doesn’t exist yet. 
Rory is a Junior Consultant at Fluxx, a company that uses experiments to understand customers, helping clients to build better products. We work with organisations such as News UK, Royal Society of Arts and the Parliamentary Digital Service.
My Mum is Becoming a Data Scientist at 54
258
my-mum-is-becoming-a-data-scientist-at-54-11c3cf813df2
2018-04-25
2018-04-25 04:09:38
https://medium.com/s/story/my-mum-is-becoming-a-data-scientist-at-54-11c3cf813df2
false
1,210
Inspiring stories about designing businesses and services that work.
null
FluxxStudios
null
Fluxx Studio Notes
fluxx-studio-notes
INNOVATION,PRODUCT DEVELOPMENT,SERVICE DESIGN,DIGITAL TRANSFORMATION,BUSINESS
fluxXstudios
Data Science
data-science
Data Science
33,617
Rory Keddie
null
a4b2a811840c
RoryGosKeddie
68
29
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-16
2018-08-16 03:31:53
2018-08-16
2018-08-16 04:05:47
0
false
en
2018-08-16
2018-08-16 04:05:47
3
11c431f6b928
2.490566
0
0
0
Training a simple neural network on the QSAR Biodegradation Dataset
4
Baby Steps Training a simple neural network on the QSAR Biodegradation Dataset This short article traces the path I took to write, train, and test a neural network on data closely linked to chemical engineering. It is by no means a comprehensive tutorial, but rather a personal snapshot, a flag to mark what I believe is an important step in my machine learning journey. It started, as it always does, with the data. I specifically wanted something related at least in part to chemical engineering. As a chemical engineering major and machine learning enthusiast, I am deeply interested in how the latter discipline can influence and better the former. As I did not have the means to collect my own data, I turned to my good friend the internet for help. After quite a while spent searching the UCI Machine Learning Repository, I settled on the QSAR biodegradation dataset as an ideal candidate for experimentation. The basic structure of the dataset is as follows: 1055 rows of unique chemicals, each with 41 chemical attributes. These attributes help characterize the chemical by providing critical information about the presence and frequency of particular elements. The job of my machine learning algorithm would be to identify relationships between the chemical structure of a molecule (as outlined through the various attributes) and its natural degradability. I decided to write a Multilayer Perceptron (3-layer feedforward binary classifier) using Keras with the Tensorflow backend. My decision was based more on personal interest than on any sort of detailed performance analysis; neural networks simply fascinate me more than other learning algorithms. To start out, I imported all the necessary Python modules. These included the standard pandas and numpy libraries, as well as several Keras functions to build the network. The next course of action was to split the downloaded .csv data into two tensors, namely raw_data_x and raw_data_y. The former matrix would contain all 41 structural attributes for each chemical, while the latter would classify the chemical as either readily biodegradable (1) or not (0). The structured nature of the dataset — all non-biodegradable chemicals were after the biodegradable ones —meant that I had to shuffle the order of the chemicals in order to ensure that the training and/or testing data would not have only one of the classes. Now that the data was properly distributed, I split it into training and testing data. As this was a very simple program with no hyper-parameter tuning, there was no need for a validation dataset. I was finally ready to write the perceptron itself. The ridiculously easy-to-use nature of the Keras library meant that I was able to create a functional 3-layer model in no time. Most (but not all) of the parameters in my code were the same as those used in the MLP for binary classification example on the Keras website. After ensuring that the network compiled correctly, I trained it for several hundred epochs and then evaluated it on the testing data. The total process only took a couple of minutes. The exact model outlined above achieved an accuracy of 89.9% when tested on the dataset. Overall, I am satisfied with the performance, especially considering the perceptron’s elementary nature. With some careful hyper-parameter selection and perhaps the addition of hidden layers, my guess is that the accuracy could be increased by 5% or more. 
In the future, I would like to present the results of these improvements and also compare the neural network’s performance to that of other learning algorithms (particularly support vector machines). I had a lot of fun in creating the model and composing this article. While this was not my first neural network, it was one of the first times that I had used a dataset related to my field of study. As I become more educated in both chemical engineering and machine learning, I can’t wait to explore more creative ways to link the two.
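For readers who want a starting point, here is a minimal sketch of the kind of 3-layer Keras classifier described above; the file name, separator and column layout are assumptions about the UCI download (41 attribute columns followed by an RB/NRB class label), so check them against your copy of the data before running it.

import numpy as np
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense, Dropout

# Assumed file name and layout for the UCI QSAR biodegradation data:
# ';'-separated, 41 attribute columns, final column 'RB' / 'NRB'.
data = pd.read_csv('biodeg.csv', sep=';', header=None)
raw_x = data.iloc[:, :41].values.astype('float32')
raw_y = (data.iloc[:, 41] == 'RB').astype('float32').values  # 1 = readily biodegradable

# Shuffle so the train/test split contains both classes.
idx = np.random.permutation(len(raw_x))
raw_x, raw_y = raw_x[idx], raw_y[idx]
split = int(0.8 * len(raw_x))
x_train, x_test = raw_x[:split], raw_x[split:]
y_train, y_test = raw_y[:split], raw_y[split:]

# 3-layer feedforward binary classifier, close to the Keras MLP example.
model = Sequential([
    Dense(64, activation='relu', input_dim=41),
    Dropout(0.5),
    Dense(64, activation='relu'),
    Dropout(0.5),
    Dense(1, activation='sigmoid'),
])
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=200, batch_size=128, verbose=0)
print(model.evaluate(x_test, y_test))   # [loss, accuracy] on the held-out set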
Baby Steps
0
baby-steps-11c431f6b928
2018-08-16
2018-08-16 04:05:48
https://medium.com/s/story/baby-steps-11c431f6b928
false
660
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Pratik Kelkar
A chemical engineering student at UT Austin with a major passion for exploring how ML and data science can revolutionize various chemical industries.
55b68a66a344
pratikkelkar11
0
1
20,181,104
null
null
null
null
null
null
0
null
0
5e5bef33608a
2018-09-18
2018-09-18 16:32:02
2018-09-18
2018-09-18 16:51:30
3
true
en
2018-09-18
2018-09-18 16:51:30
0
11c48995bafb
2.459434
0
0
0
In computer sciences, we usually use human as a role model to develop machine learning algorithms and concepts. Interestingly, our efforts…
5
AI, a Model for Self Understanding In computer science, we usually use humans as a role model to develop machine learning algorithms and concepts. Interestingly, our efforts to develop machine learning have led to better self-understanding. Illustration by Dustin Yellin But what if we use machine learning concepts to explain some of our social behaviors? In life, you have a limited number of observations. The observations could be scientific, social or any other observations. For the sake of this article, let’s focus on social observations. You see that someone is a hard-working person. Someone is committing a crime. Those are a few examples of social observations. Your social observations are training data for your social model. Your future observations are your test data. What about validation data? Apparently, as humans, we don’t have access to this type of data, and basically we use test data as our validation data too. Your experience in life, or simply your age, could be interpreted as your training epochs. Time goes by, you collect data, and your brain starts fitting a model on those observations. Your initializer function is probably your family, friends, and environment. When you were born, your genes were your only initializer. But, as soon as you are born, your environment, family, and friends start shaping your mind and beliefs. Your education, knowledge, and judgment are your optimizer. If you know more about the social sciences, you can find better models. If you are kind and passionate about humans, you try to fit a better model. If you have a bad temper, you probably try to fit a cruel model to your social observations. Time goes by and you see society and people. You start fitting simple models to your observations. If you have limited social interactions, your models remain simple since they explain your limited observations well enough. At this stage, you form some stereotypes in your mind that might explain some behaviors well enough. If you get obsessed with your new social model and only look for more data to confirm it, you can always find those data. Simply, your brain starts ignoring observations that are not aligned with your initial social model. If you stay in the same social environment for a long time, basically your training and test data are coming from the same dataset, and your model becomes a more local model than a global model. People who travel and go outside of their society of origin usually find more contradicting observations (new test data from a new dataset) and start to develop better global models. In other words, staying in the same social environment tends to make your model over-fitted. In the absence of different test data, your model tends to become such an over-fitted model that it cannot be updated via new test data or even good optimizers. Here, I have tried to simply explain our social models using some machine learning concepts. The best way to avoid developing over-fitted social models in our minds is to try to interact with social environments outside of our comfort zone.
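The over-fitting analogy can be made concrete with a toy sketch: fit a very flexible model to a handful of "observations" from one environment and it explains them perfectly while failing on fresh data from elsewhere; the sine curve and polynomial degrees below are, of course, just illustrative choices.

import numpy as np

# Few "observations" from one environment (training) versus many new ones (test).
rng = np.random.RandomState(0)
x_train = rng.uniform(-1, 1, 8)
y_train = np.sin(3 * x_train) + rng.normal(0, 0.1, 8)
x_test = rng.uniform(-1, 1, 100)
y_test = np.sin(3 * x_test) + rng.normal(0, 0.1, 100)

# A modest model versus a very flexible one: the flexible fit memorises the
# training points (near-zero training error) but generalises worse.
for degree in (2, 7):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(degree, round(train_err, 4), round(test_err, 4))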
AI, a Model for Self Understanding
0
ai-a-model-for-self-understanding-11c48995bafb
2018-10-04
2018-10-04 00:37:19
https://medium.com/s/story/ai-a-model-for-self-understanding-11c48995bafb
false
506
Latest News, Info and Tutorials on Artificial Intelligence, Machine Learning, Deep Learning, Big Data and what it means for Humanity.
becominghuman.ai
BecomingHumanAI
null
Becoming Human: Artificial Intelligence Magazine
becoming-human
ARTIFICIAL INTELLIGENCE,DEEP LEARNING,MACHINE LEARNING,AI,DATA SCIENCE
BecomingHumanAI
Machine Learning
machine-learning
Machine Learning
51,320
Naser Tamimi
Data Scientist and Data Lover
f94e8b97c7aa
tamimi.naser
3
2
20,181,104
null
null
null
null
null
null
0
null
0
b9c462debe9b
2017-11-09
2017-11-09 15:26:41
2017-11-15
2017-11-15 15:42:30
2
false
en
2017-11-17
2017-11-17 23:22:42
3
11c72affc62f
2.288994
11
5
0
Welcome to the new online home for public discussion of the work of the Knight Commission on Trust, Media and Democracy. As the 26-member…
5
Let’s talk about trust, media, and democracy Ben Boyd, danah boyd, Ari Fleischer, and Claire Wardle at the first official meeting of the Knight Foundation Commission on Trust, Media and Democracy on October 12, 2017 in New York City. Welcome to the new online home for public discussion of the work of the Knight Commission on Trust, Media and Democracy. As the 26-member commission meets over the next year in cities across the country, members will hear from experts, take comments, and deliberate on how to think big and broadly about misinformation in public life. Meanwhile, recipients of the Knight Foundation’s new funding for related projects to increase trust in the media are already at work, and we’ll report on their progress here. We hope this space will become a place to learn about what the commissioners are hearing, what they are thinking, and what is being discussed in the public sphere on this weighty subject central to our collective future. Most importantly, we want you to be part of the conversation. We’ll pose questions based on what we’re seeing and hearing, and ask for your thoughts. We’ll report these back to the commission. To start, here are five questions growing out of the discussions at the first meeting of the commission, which took place at the New York Public Library on Thursday, October 12th, 2017. Here is a full recap of public discussion and presentations at the meeting. What would make you trust more of what you see and read in the media? This chart shows how trust in the media is at an all-time low not just in the U.S., but all over the globe. If you don’t trust the media, why? What would make you trust the media more? Source: www.edelman.com/trust2017 2. What are the potential trade-offs for subjecting social media platforms to more scrutiny and regulation? Traditional newspapers printed on paper are an endangered species. Meanwhile, Facebook, Google, Twitter and other social media platforms drive traffic to real news stories. They allow everyone to be their own publisher, but have also been conduits for false stories. Some say these platforms should be subject to more scrutiny and regulation. What do you think? 3. What do we need to teach our children — and learn ourselves — when consuming media? People often share things they already agree with, and ignore things they don’t. What questions should we all be asking when we see a news story posted online? What tools would help us become more discerning consumers of news and information? How do we avoid increasing ideological self-segregation on platforms and media? 4. How do we pay for good journalism? As local newspapers and journalists lose jobs, there is less reporting on the state houses and legislatures, the water commissions and state agencies. Assuming accountability is important but profitable local media models are lacking, how should society pay for this utility? 5. How can we make the robots work for us? We now know that misinformation can spread rapidly in the era of Artificial Intelligence, or AI. But how can we harness AI to work for the common good? Post your answers, comments, disagreements below, and we’ll begin the conversation.
Let’s talk about trust, media, and democracy
101
lets-talk-about-trust-media-and-democracy-11c72affc62f
2018-05-24
2018-05-24 22:41:45
https://medium.com/s/story/lets-talk-about-trust-media-and-democracy-11c72affc62f
false
505
Our democracy is suffering: misinformation is rampant, the news ecosystem is changing rapidly, and mistrust in the press is rising. We want your ideas on what we can do, as a society, to increase trust.
null
knightfdn
null
Trust, Media and Democracy
trust-media-and-democracy
DEMOCRACY,MEDIA,MISINFORMATION,TECH,JOURNALISM
knightfdn
Journalism
journalism
Journalism
39,588
Nancy Watzman
Nancy Watzman is editor of Trust, Media & Democracy on Medium & director of strategic initiatives for Dot Connector Studio.
d1aa5655ca5a
nwatzman
203
122
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-24
2018-05-24 06:10:57
2018-05-24
2018-05-24 06:17:49
1
false
en
2018-05-24
2018-05-24 06:17:49
4
11ca039b3bca
1.792453
2
0
0
Robotic Process Automation is a new business process automation technology which is dependent on the Artificial Intelligence Workers. As…
3
The Evolution of Robotic Process Automation Robotic Process Automation Robotic Process Automation is a new business process automation technology which depends on Artificial Intelligence workers. As companies become familiar with this powerful, emerging technology, people will quickly understand the benefits it offers over outsourcing and other methods of business and IT processing. Perhaps crowdsourcing and impact sourcing will play a large role as Robotic Process Automation technology becomes more sophisticated, and new jobs will potentially be created for human workers with the advanced skills needed to maintain and improve this technology. In other words, as we move toward the future, humans will work hand-in-hand with robots to transform the way we do business, resulting in lower costs, increased efficiency, and improved customer service and employee fulfillment. Some experts say that Robotic Process Automation is in 2015 what the internet was back in 1994. What is RPA? In easy words, RPA is the use of software with Artificial Intelligence (AI) and Machine Learning (ML) capabilities that can handle a high volume of repetitive tasks that were previously performed by humans. These tasks can range from calculation to handling queries and even maintaining records and transactions. Why is it relevant today? · Companies without RPA would soon be at a disadvantage, as having RPA does bring costs down by huge numbers. RPA would cease to be a novelty and turn into a necessity for every business. · RPA is far more beneficial than outsourcing for any business, as the benefits derived from outsourcing have been reaped already. In Deloitte’s Global Outsourcing Survey, 75% of organizations profiled reported that they had already realized cost-saving targets by leveraging labour arbitrage. If that number is even remotely indicative of the larger population, outsourcing simply isn’t the same game-changer it once was. What are the advantages of implementing RPA in business? · Reduced Wage Costs: This technology would wipe out the jobs of those who work in automatable roles and do not possess the skills to execute a job which is not automatable. · Low Risk: RPA projects are low-risk tasks which can be executed without disturbing existing systems. · Customer Satisfaction: As employees move to more customer-facing roles, and as automation makes processes more efficient, customers become more satisfied with the overall experience. · Improvement of Data Analytics: Each task the robot executes produces data that, when gathered, allows for analysis. This drives better decision making in the areas of the processes being automated.
The Evolution of Robotic Process Automation
12
the-evolution-of-robotic-process-automation-11ca039b3bca
2018-05-24
2018-05-24 07:42:12
https://medium.com/s/story/the-evolution-of-robotic-process-automation-11ca039b3bca
false
422
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Raahil Aggarwal
Raahil Aggarwal a passionate blogger in Technology and Education Field. He has almost 2 years of experience of writing blogs for this domain.
3c758c641898
komcctest
1
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-11
2017-10-11 04:50:56
2018-01-01
2018-01-01 20:42:53
3
false
en
2018-01-01
2018-01-01 20:42:53
1
11ca25e37cde
1.580189
1
0
0
Continuing on the last blog (If you haven’t read it, do check it out: https://goo.gl/at19kK) I would like to expand my thoughts even…
5
Encrypting The Future | Life 2.0 Continuing on from the last blog (if you haven’t read it, do check it out: https://goo.gl/at19kK), I would like to expand my thoughts even further on how AI will be the solution to the cyborg life we’ll be living soon. To set some standards, let’s see what AI encryption looks like today. According to Wikipedia, neural cryptography is a branch of cryptography dedicated to analyzing the application of stochastic algorithms, especially artificial neural network algorithms, for use in encryption and cryptanalysis. We can use AI’s self-learning ability to dynamically change the encryption algorithm. Let’s break down that sentence with an example. We’ve given all three AIs a base encryption and decryption algorithm to work with. There are two AIs (one Kaki’s and the other mine, i.e., Vishrant’s). Both of them are encrypting the data they send out for communication with an updated encryption algorithm (which they modified by analyzing different attacks and algorithms over the internet). Now, as Kaki’s AI and my AI start communicating banking details over this channel, the third AI (i.e., Alex) is an attacker (hacker) AI and wants to intercept the data being sent over this channel. Alex’s AI knows the base algorithm for encryption and decryption. In a normal scenario, Alex’s AI would brute force all the possible known algorithms on this channel. But there is only a very rare chance that it might be able to guess the algorithm correctly. Due to our algorithm’s dynamic nature, it becomes an endless loop of encryption-decryption for Alex’s AI. That’s all for now! What do you guys think? Drop your thoughts in the comments down below.
Encrypting The Future | Life 2.0
50
encrypting-the-future-life-2-0-11ca25e37cde
2018-04-17
2018-04-17 22:49:49
https://medium.com/s/story/encrypting-the-future-life-2-0-11ca25e37cde
false
273
null
null
null
null
null
null
null
null
null
Cybersecurity
cybersecurity
Cybersecurity
24,500
Vishrant Khanna
I can’t explain my life in 160 Characters. Cyber Security Enthusiast. Computer Science Engineer. Editor-in-Chief @gadgetsbloggind. Self-Taught Digital Marketer.
b23f144b696c
vishrantkhanna
79
6
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-02
2018-09-02 03:06:56
2018-09-02
2018-09-02 03:07:08
0
false
en
2018-09-02
2018-09-02 03:07:08
1
11cc7022c56a
3.064151
0
0
0
[Download] [PDF] Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python Download EBOOK EPUB KINDLE By Thomas…
1
READ PDF Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python By Thomas W. Miller Jr. DOWNLOAD EBOOK PDF KINDLE #EPUB [Download] [PDF] Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python Download EBOOK EPUB KINDLE By Thomas W. Miller Jr. Read Online : https://bestreadkindle.icu/?q=Marketing+Data+Science%3A+Modeling+Techniques+in+Predictive+Analytics+with+R+and+Python Now , a leader of Northwestern University’s prestigious analytics program presents a fully-integrated treatment of both the business and academic elements of marketing applications in predictive analytics. Writing for both managers and students, Thomas W. Miller explains essential concepts, principles, and theory in the context of real-world applications. Building on Miller’s pioneering program, Marketing Data Science thoroughly addresses segmentation, target marketing, brand and product positioning, new product development, choice modeling, recommender systems, pricing research, retail site selection, demand estimation, sales forecasting, customer retention, and lifetime value analysis. Starting where Miller’s widely-praised Modeling Techniques in Predictive Analytics left off, he integrates crucial information and insights that were previously segregated in texts on web analytics, network science, information technology, and programming. Coverage includes:The role of analytics in . . . . . . . . . . . . . Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python PDF Online, Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python Books Online, Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python Ebook , Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python Book , Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python Full Popular PDF, PDF Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python Read Book PDF Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python, Read online PDF Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python, PDF Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python Popular, PDF Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python , PDF Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python Ebook, Best Book Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python, PDF Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python Collection, PDF Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python Full Online, epub Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python, ebook Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python, ebook Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python, epub Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python, full book Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python, online Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python, online Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python, online pdf Marketing Data Science: Modeling Techniques in 
Predictive Analytics with R and Python, pdf Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python, Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python Book, Online Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python Book, PDF Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python, PDF Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python Online, pdf Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python, read online Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python, Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python Thomas W. Miller Jr. pdf, by Thomas W. Miller Jr. Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python, book pdf Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python, by Thomas W. Miller Jr. pdf Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python, Thomas W. Miller Jr. epub Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python, pdf Thomas W. Miller Jr. Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python, the book Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python, Thomas W. Miller Jr. ebook Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python, Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python E-Books, Online Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python Book, pdf Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python, Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python E-Books, Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python Online , Read Best Book Online Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python #Ebooks #epubs #pdffree #PdfReader #MobiOnline
READ PDF Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python By…
0
read-pdf-marketing-data-science-modeling-techniques-in-predictive-analytics-with-r-and-python-by-11cc7022c56a
2018-09-02
2018-09-02 03:07:08
https://medium.com/s/story/read-pdf-marketing-data-science-modeling-techniques-in-predictive-analytics-with-r-and-python-by-11cc7022c56a
false
812
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Antoinette Naylor
null
5c4644e21a1e
antoinettenaylor
0
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-24
2017-10-24 22:23:22
2017-10-26
2017-10-26 15:48:05
4
false
en
2017-10-27
2017-10-27 19:38:54
1
11ce0da45a86
0.888679
1
0
0
The following study is a compilation of figures that analyze ice mass levels from Greenland and Antarctica registered over the period…
5
Climate Change Data Visualizations The following study is a compilation of figures that analyze ice mass levels from Greenland and Antarctica registered over the period 2002–2014 by NASA’s GRACE satellites. D. W. (2017, July 11). Land ice | NASA Global Climate Change. Retrieved October 27, 2017, from http://climate.nasa.gov/vital-signs/land-ice/ Fig. 1. Polar Ice Mass Correlation (2002–2014). Values are anomalies (n=140) gathered by NASA’s GRACE satellites relative to the time-series mean. The x-axis represents the amount of ice mass registered on Greenland’s ice sheet, and the y-axis represents the amount of ice mass registered on Antarctica’s ice sheet, for equivalent timeframes. Although we are not analyzing the time series in this figure, it shows evidence of a positive correlation between the variables (r=0.95). Fig. 2. Polar Ice Mass Comparison Trend (2002–2014). This graph shows the comparison between Greenland’s (blue) and Antarctica’s (red) ice mass during the period 2002–2014 as recorded by NASA’s GRACE satellites. The dashed green line intercepts the x-axis (years) at 2009, from which point we can see an acceleration in the rate of ice mass loss. Fig. 3. Polar Ice Mass Comparison Trend (2002–2014) + Linear Regressions (Values Prior to 2009). This graph shows what the linear regressions for Greenland (cyan) and Antarctica (yellow) were prior to 2010. These lines give us an idea of what would have been predicted at that time, and the difference from the actual values by the end of 2014.
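For anyone who wants to reproduce figures like these, the core computations are a correlation coefficient and a linear fit; the sketch below assumes the GRACE anomalies have been exported to a CSV with hypothetical 'year', 'greenland' and 'antarctica' columns (decimal year and mass anomalies in Gt), so the file and column names are placeholders rather than NASA's actual field names.

import numpy as np
import pandas as pd

# Hypothetical export of the GRACE anomaly series, one row per measurement date.
df = pd.read_csv('grace_ice_mass.csv')

# Correlation between the two ice-mass series (Fig. 1 reports r = 0.95).
r = np.corrcoef(df['greenland'], df['antarctica'])[0, 1]
print('correlation r =', round(r, 2))

# Linear trend fitted only to pre-2009 values, as in Fig. 3, then extrapolated.
early = df[df['year'] < 2009]
slope, intercept = np.polyfit(early['year'], early['greenland'], 1)
predicted_2014 = slope * 2014 + intercept
print('pre-2009 trend predicts', round(predicted_2014, 1), 'for 2014')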
Climate Change Data Visualizations
2
climate-change-data-visualizations-11ce0da45a86
2017-10-31
2017-10-31 13:10:32
https://medium.com/s/story/climate-change-data-visualizations-11ce0da45a86
false
50
null
null
null
null
null
null
null
null
null
Climate Change
climate-change
Climate Change
39,654
Mawxder
Aspiring life scientist.
3824981ca186
mawxder
16
16
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-11
2017-12-11 07:43:31
2017-12-11
2017-12-11 09:43:00
1
false
en
2017-12-11
2017-12-11 09:46:50
7
11d140a68d30
1.649057
3
0
0
Most promising start-up of 2017 goes to Search|hub by CXP Commerce Experts GmbH for their work on developing and commercializing a system…
4
SearchHub.io wins Award for Most promising Start-Up 2017 SEARCH Solutions 2017 Most promising start-up of 2017 goes to Search|hub by CXP Commerce Experts GmbH for their work on developing and commercializing a system that infuses human understanding into existing search applications by automatically correcting human input and building context around each and every query. On November 29th, 2017, in London, we were awarded for our work on making search smarter. We are thrilled — that’s definitely a really cool prize, and even more special coming from people who clearly understand the challenges in this field. Thanks to the Information Retrieval Specialist Group of the BCS and the judges Charlie Hull, Rene Kriegler and Ilona Roth. “we had a number of very strong submissions for this category so I hope you share my view that this makes your award all the more special” - Tony Russell-Rose on behalf of the BCS IRSG committee About the IRSG Information Retrieval (IR) is concerned with enabling people to locate useful information in large, relatively unstructured, computer-accessible archives. In this respect, anyone who has ever used a web search engine will have had some practical experience of IR, as the web represents perhaps the largest and most diverse of all computer-accessible archives. Much of the technical challenge in IR is in finding ways to represent the information needs of users and to match these with the contents of an archive. In many cases, those information needs will be best met by locating suitable text documents, but in other cases it may require retrieval of other media, such as video, audio, and images. In addition, IR is also concerned with many of the wider goals of information/knowledge management, in the sense that finding suitable content may only be part of the solution — we may also need to consider issues associated with visualisation of the contents of an archive, navigation to related content, summarisation of content, extraction of tacit knowledge from the archive, etc. The IRSG is a Specialist Group of the BCS. Its aims include supporting communication between researchers and practitioners, promoting the use of IR methods in industry and raising public awareness. There is a newsletter called The Informer, an annual European conference, ECIR, and continual organisation and sponsorship of conferences, workshops and seminars.
SearchHub.io wins Award for Most promising Start-Up 2017
44
searchhub-io-wins-award-for-most-promising-start-up-2017-11d140a68d30
2018-03-22
2018-03-22 17:30:23
https://medium.com/s/story/searchhub-io-wins-award-for-most-promising-start-up-2017-11d140a68d30
false
384
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
searchhub
search|hub is a search platform independent, AI-powered search query intelligence API — helping search engines understand humans
de55c8251d7d
searchhub.io
37
2
20,181,104
null
null
null
null
null
null
0
null
0
6c3663684612
2018-04-28
2018-04-28 05:38:51
2018-05-15
2018-05-15 13:17:58
1
false
en
2018-05-15
2018-05-15 13:23:54
11
11d16a541d54
1.864151
2
0
0
AI is a rapidly evolving field where every week or even every day there’s something new. Staying up to date can help your business make…
5
Resources to stay up to date with AI with focus on business. AI news for businesses AI is a rapidly evolving field where every week or even every day there’s something new. Staying up to date can help your business make sense of the AI space and its future, and adopt cutting-edge tech to gain an advantage for your company. We have compiled some of the resources that can help you stay up to date in the field of AI. AI Business (aibusiness.com) — as the name implies, the website reports on AI in business. MIT Technology Review Intelligent Machines (www.technologyreview.com) — technological advancements and their impact, plus their videos. McKinsey on AI (www.mckinsey.com) — covers the profound implications of AI for business and society. Singularity Hub Blog (singularityhub.com) — AI and everything around it that leads to the singularity. AI Trends (aitrends.com) — AI in business and enterprise; AI education. ReadWrite category AI (readwrite.com) — news, AI use cases, tech breakthroughs, speculations about the future. The Talking Machines (www.thetalkingmachines.com) — a combination of podcast, articles and technical white-paper reviews, about AI technology and how it can help your business. Wired topic AI (www.wired.co.uk) — news, how AI is used in the world today and what it can lead to, i.e. the future. Futurism AI (futurism.com) — covers the AI space, its use cases of today and the future. SoftRobot’s social media and blog (www.softrobot.io) — and lastly, a shameless plug for ourselves: our social media and blog, where we try to cover quality content for your business.
Resources to stay up to date with AI with focus on business.
57
resources-to-stay-up-to-date-with-ai-with-focus-on-business-11d16a541d54
2018-05-16
2018-05-16 09:04:35
https://medium.com/s/story/resources-to-stay-up-to-date-with-ai-with-focus-on-business-11d16a541d54
false
441
A Swedish AI company focusing on creating better workflow habits at companies
null
SoftRobotHQ
null
SoftRobot
null
softrobot
AI,MACHINE LEARNING,MACHINE INTELLIGENCE,DEEP LEARNING,ARTIFICIAL INTELLIGENCE
SoftRobotHQ
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Axo Sal
Social impact is fundamental to me. Everything else builds on top of it. Chief Positive-Impact-Maker :) @ ZALAB.co - Projects for a better world.
4856128e5846
AxoSal
230
301
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-13
2018-06-13 13:55:39
2018-06-13
2018-06-13 21:10:02
1
false
en
2018-06-13
2018-06-13 21:10:02
1
11d245abb930
2.54717
8
0
0
One Way is By Helping Brands to Prevent Bot Spoofing
5
Amazon Is Motivated to Build Trust in Artificial Intelligence Apps. One Way is By Helping Brands Prevent Bot Spoofing We predict the next wave of cyber-attacks will be bots impersonating brands on conversational apps. Let’s call it bot spoofing. Amazon aims to fight this using the new .BOT domain name registry to enable the discovery of bots for business-to-business and business-to-consumer audiences. During the 2016 election, we heard about software bots impersonating normal everyday people on social networks. In the aftermath, both Google and Facebook pledged to implement new monitoring tools to help rebuild consumer trust in the source of news and ads published on their platforms. But they are behind the curve, as consumers are moving away from social networks. Increasingly, consumers are interacting with their favorite brands via conversational apps like WhatsApp and Snapchat. Facebook already has over 100,000 bots on its Messenger app. It will no longer be enough that a brand has a web presence on social networks. They will need to have a conversational bot as well. Here is how it works. Consumers already have a contact list of their family and friends on their favorite messaging app. Their favorite brands can also be added as contacts. But instead of a human, these contacts can be AI bots that engage the consumer for online ordering or customer service. Some major brands are already doing this, including Sephora, Macy’s, American Eagle, Delta Airlines, Panera, Domino’s, Uber and Starbucks. Valid bots can sometimes be hard to find. Facebook’s Messenger has introduced some discovery tools to help its users find bona fide brands among the 100,000 bots on the Messenger network. But Messenger is not the only messaging app that consumers are using. There’s also WhatsApp, Snapchat, WeChat, Google’s Hangouts, Instagram, Telegram, Kik, Line, Twilio, Twitter, Skype, Slack, SMS, Viber, and many others. It is likely that some of these apps already have fake brand bots fooling consumers and collecting personal data. Clearly, a more universal approach is needed to help consumers trust they are interacting with a brand’s bona fide bot, no matter what messaging app they are using. Enter Amazon. Amazon already dominates online shopping and cloud services. Amazon’s Echo Dot device, combining natural language and artificial intelligence, was the best seller during the December 2017 holiday season. Now Amazon is aiming to dominate the online discovery of bots for business-to-business and business-to-consumer audiences. Amazon’s vehicle for bot discovery is the .BOT top-level domain name registry (www.get.bot). .BOT is one of the new domain extensions authorized by ICANN, the internet regulator for domain names, to compete with .COM. To help build early adoption, Amazon is initially limiting .BOT domain names to entities that already have a published bot using one of six supported frameworks: Amazon Lex, Botkit Studio, Google Dialogflow, GupShup, Microsoft Bot Framework and Pandorabots. Linking the published bot to a .BOT domain name allows it to be added to a discovery platform that Amazon has under development. The bots can then be discovered no matter where or how they are accessed. So if bad actors, or even competitors, are using a bot to spoof a brand, Amazon’s .BOT registry will help steer the consumer to the brand’s legitimate bot. Brand owners and bot developers can defensively reserve their .BOT names now using one of the bot frameworks without actually building a full-blown bot. 
This will help block others from impersonating their brand. Essentially, Amazon's .BOT registry will help build consumer confidence in AI by helping brands prevent spoofing of their bots. But the odds are that there will be some bot spoofing headlines before most businesses act to protect themselves from this threat.
Amazon Really Wants Consumers to Trust Artificial Intelligence.
58
amazon-really-wants-consumers-to-trust-artificial-intelligence-11d245abb930
2018-06-18
2018-06-18 17:37:09
https://medium.com/s/story/amazon-really-wants-consumers-to-trust-artificial-intelligence-11d245abb930
false
622
null
null
null
null
null
null
null
null
null
Bots
bots
Bots
14,158
Thomas Barrett
President of EnCirca, Inc.
1fa42984309e
encirca
1
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-26
2017-11-26 11:51:50
2017-11-26
2017-11-26 11:57:19
1
true
en
2017-11-26
2017-11-26 11:57:35
0
11d4dfe3ebb
0.524528
0
0
0
​Here’s the thing. We all think we’re so important. We’re not. We all want to buy big houses, make big money, move big mountains, watch TV…
5
All we need to do is focus Here's the thing. We all think we're so important. We're not. We all want to buy big houses, make big money, move big mountains, watch TV. While actually, all we need to do is focus, find what works, work it, wait and wonder, for what's next. And that's it. And so it is. Life, as we live it and love, as we imagine it to be. Free. We're just human. We're just human, after all. And so it is.
All we need to do is focus
0
all-we-need-to-do-is-focus-11d4dfe3ebb
2017-11-26
2017-11-26 11:57:36
https://medium.com/s/story/all-we-need-to-do-is-focus-11d4dfe3ebb
false
86
null
null
null
null
null
null
null
null
null
Life
life
Life
283,638
Sofia Indy
A yogi, warrior + writer https://amzn.to/2vKdsyH
e0bff912a568
Fille
10,513
153
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-05
2018-08-05 20:30:27
2018-08-05
2018-08-05 20:53:41
0
false
en
2018-08-07
2018-08-07 14:37:35
3
11d509435be4
1.550943
5
0
0
It has been a while since I wrote a blog post. I am not inclined to blogging. I thought I will give a shot at another start.
5
Weekend experiment (tweet analysis) It has been a while since I wrote a blog post. I am not inclined to blogging. I thought I would give another start a shot. A few days back, FiveThirtyEight published an article through which they shared three million tweets related to the IRA-linked Twitter accounts trolling and shaping the political discourse, the effects of which are still being felt and new findings unearthed every day. It is no wonder that this data-set has been the subject of many researchers' analysis and investigation. Recently Twitter has been purging these accounts and, as the article states, many of the accounts related to the IRA were deleted. FiveThirtyEight managed to source the tweets and made them available via GitHub. The dataset provided me with an opportunity to run some experiments. There were a few goals: Understand the results and re-create the graphs Share code and create documentation to assist others exploring the data with tools and techniques Use cloud APIs for the analysis where applicable Get back to blogging and contributing to open data/source efforts A quick look at the data showed that there were opportunities to use the "Translation" APIs from various cloud platforms to convert non-English content to English. Further, the translated content could be classified using other cloud machine-learning APIs. I decided to limit my context to the major cloud players: Amazon's AWS, Google's GCP and Microsoft's Azure. So far it has been a good learning experience. It is a work-in-progress experiment that will flow into the next weekend. At this point I have explored and translated content using the translation APIs from the three vendors. I am currently working on preparing the data for the classification step. The code is available on GitHub. All samples use the Python programming language. At an earlier employer, I had been involved in efforts to build analytics around email communications, and we had used public datasets like the Enron and Hillary Clinton emails for experimentation. The IRA tweets are another rich public dataset that revolves around a political scandal. It will continue to be analyzed and explored. The effort also gave me a way to explain what data science is all about to folks in my social network (family, friends, etc.). It was an attempt to show how we can use publicly available datasets to dip our toes into understanding the tools and techniques that data scientists use in their work.
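Purely as an illustration of the translation step described above (this is not the code in the author's GitHub repo), here is a minimal Python sketch using AWS Translate via boto3; the region, file name and the "content"/"language" column names are assumptions about the dataset layout.

# Minimal sketch: translate non-English tweet text to English with AWS Translate.
# Assumptions: a boto3-configured AWS account, a CSV with "content" and "language" columns.
import boto3
import pandas as pd

translate = boto3.client("translate", region_name="us-east-1")  # assumed region

def to_english(text, source_lang):
    """Translate a single tweet to English; pass through if already English or empty."""
    if not text or source_lang == "English":
        return text
    resp = translate.translate_text(
        Text=text[:5000],              # conservative cap; the service limits request size
        SourceLanguageCode="auto",     # let the service detect the source language
        TargetLanguageCode="en",
    )
    return resp["TranslatedText"]

tweets = pd.read_csv("ira_tweets.csv")  # hypothetical file name
tweets["content_en"] = [
    to_english(t, lang) for t, lang in zip(tweets["content"], tweets["language"])
]
tweets.to_csv("ira_tweets_translated.csv", index=False)

The same loop could be swapped to the GCP or Azure translation clients; only the client call changes.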
Weekend experiment (tweet analysis)
5
weekend-experiment-11d509435be4
2018-08-07
2018-08-07 14:37:35
https://medium.com/s/story/weekend-experiment-11d509435be4
false
411
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Manoj Bharadwaj
null
256252ca8a3b
mbharadwaj
3
7
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-04
2018-06-04 03:51:12
2018-06-04
2018-06-04 04:22:31
11
false
en
2018-06-04
2018-06-04 04:28:34
5
11d520e6e96f
4.318868
0
0
0
Magic Mirror — also Oubliette, is a magic box that comes alive when you put objects inside and takes you back in time.
1
Magic Mirror Process Magic Mirror, also called Oubliette, is a magic box that comes alive when you put objects inside and takes you back in time. The name comes from the French word "oublier", which means "to forget". Here we picked a set of objects that go back decade by decade, dating back to as early as the '50s: iPod, Walkman, CD, 8-track and radio. As you put one of those objects into the box, it shows images related to that era's fashion, celebrities, games, music, culture, etc. The images are pulled online in real time as a reflection of the evolving landscape of the Internet. The complete case study video can be seen here. Ideation and Prototype We have a transparent screen sitting on the office window doing some graphics dance every day. After staring at it for a few months, I was very tempted to make something out of it by myself. The very first, rough idea was to add graphics to things in real life, which is a bit like augmented reality. I also wanted the computer to know what it's looking at so it could be smarter about what graphics it shows, so I tried to combine it with Yolo (real-time object detection), which I was playing with at that time. - Transparent LCD: it's blocking when it displays black and revealing when it shows white. - Yolo comes with a pre-trained model that recognizes as many as 9,000 objects, and it's blazing fast. As I was pondering what objects I wanted, I did a lot of tests on its capability. One of them was pointing the camera directly at an Amazon page to see the result. Unfortunately, a lot of the time it only shows a very abstract, broad word like "instrumentality" for objects it doesn't quite know. So after feeling frustrated for a few days, I decided to start my own training for custom objects. Concept The concept of this project sort of evolved around the technology of real-time object detection — we wanted a collection of objects that tells one whole story together. After several rounds of brainstorming and reviews, we settled on music objects (iPod, Walkman, CD, 8-track, radio) that go back in time and recall people's memories of their childhood. This project is in honour of the cutting-edge technologies of older times. Technology Details on the technology can be found in this github repo. For my own training process, I took 80–100 pictures of each object, ideally on different backgrounds and from different angles, so the camera recognizes the object regardless (in this specific case objects will only be placed on white backgrounds, so fewer pictures will also work; I haven't tested the limit though). Then you need to manually label the images to tell the program which part contains the object you want — convert the labels to a Yolo-specific format — modify the configuration file based on the number of objects — and then finally you can feed it to the GPU! The Yolo repo has a lot of details on when to stop training, because the error would actually go up if you train too much; it's a fun read. Nowadays things are moving really fast, and by the time I'm writing up this post Yolo v3 has come out and there are even painless options like CoreML and TuriCreate. You could scrape 40 images from Google and run them through a Python file, and the model is there waiting to be loaded.
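As a side note on the labelling step mentioned above, here is a small sketch of the Yolo-specific annotation format (one text file per image, one normalised box per line); the class id, file name and pixel coordinates below are made-up examples, not values from this project.

# Yolo (darknet) label format: <class_id> <x_center> <y_center> <width> <height>,
# with all four box values normalised by the image width/height.

def to_yolo_line(class_id, box, img_w, img_h):
    """box is (x_min, y_min, x_max, y_max) in pixels."""
    x_min, y_min, x_max, y_max = box
    x_center = (x_min + x_max) / 2.0 / img_w
    y_center = (y_min + y_max) / 2.0 / img_h
    width = (x_max - x_min) / float(img_w)
    height = (y_max - y_min) / float(img_h)
    return "%d %.6f %.6f %.6f %.6f" % (class_id, x_center, y_center, width, height)

# e.g. a hypothetical "walkman" (class 1) occupying pixels (120, 80) to (360, 300)
# in a 640x480 photo:
with open("walkman_001.txt", "w") as f:
    f.write(to_yolo_line(1, (120, 80, 360, 300), 640, 480) + "\n")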
I can't guarantee that a more painful training process and a deeper understanding of the neural network yield more flexibility and control; after all, it really depends on the needs of the project and how deep you want to dig into the neural network =) Fabrication Look and feel The fabricator took my webcam and ripped it apart :/ Thoughts, next steps The next step is also the very first thought for this project, that is, pick up any random object with your hand and the detection would still work. Of course this won't fit our current concept for now, and there's more to think about regarding what the box should do in reaction to a random object — but technology-wise it's fascinating. In terms of training, there are also bonus points if you can try to merge your own set of objects with a pre-trained model, by modifying the configuration file to fine-tune the neural network. In theory this would work, but I still haven't wrapped my head around it. So far I'm pretty happy with the beautiful polished wooden box sitting in our office as a time tunnel. It was a lot of coding exercise for me, not only on object training but also on the backend server, frontend JavaScript animations, and networking between different programs. Writing this post as a wrap-up for my project and moving on to the next one =)
Magic Mirror Process
0
magic-mirror-process-11d520e6e96f
2018-06-04
2018-06-04 04:28:35
https://medium.com/s/story/magic-mirror-process-11d520e6e96f
false
800
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Shan Jin
null
443a85a203dc
shanjin
27
29
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-06
2018-03-06 13:27:33
2018-03-06
2018-03-06 13:49:18
1
false
en
2018-03-06
2018-03-06 14:33:28
8
11d83ec85689
2.132075
1
0
0
By Liz Carolan
5
TRI data collection and filtering process A still from our video explaining how to get involved in TRI — see www.tref.ie/get-involved By Liz Carolan A few weeks ago we wrote about the challenges we faced in trying to build a database of social media political ads for the referendum on the 8th amendment. This database is now up and running — www.tref.ie/database We said Plan A was collaboration with the platforms. Unfortunately, they are not in a position to do this. So we are using Plan B: crowdsourcing. We have a partnership with Who Targets Me, a UK-based group of volunteers. They have built a tool that enables crowdsourcing of all the ads shown to Facebook users who install the tool as a Chrome or Firefox plugin. Their privacy policy is available here. WhoTargetsMe compiles all of the data gathered from individual users of the plugin into a single database. This data contains metadata on all ads — political and commercial (see here for a note on the difference) — such as a link to the ad, the page that placed it, the date it was placed, the number of likes, comments, etc. As we are only concerned with political ads, we filter out the commercial ones. This filtering is the tricky bit. Our approach is as follows: We download the data as a single CSV file of all ads with a country ID of "IE". We filter it using a set of keywords for the referendum, which you can see below. We look through the filtered list and remove any ads that we are certain are not political in nature. We leave in content that is borderline — this includes promoted posts by news outlets that relate to the 8th (see why). We add these to a publicly viewable Google Sheet, which feeds a display page on our website — www.tref.ie/database This Google Sheet also feeds a viewer on our website, where the original ads are pulled from Facebook, based on the URL in the database — www.tref.ie/viewer The "interest" column will only show why the particular viewers who have installed the plugin were shown the ads. Until we have full transparency by the platforms on targeting, this will only give a limited snapshot. See our note here. For this first iteration, the database is based on downloads by just 40 people. This yielded a little over 1,000 lines of data, from which we identified about 20 ads, 3 of which are sponsored posts by news outlets. The database is here. We filtered this data using the process described above, but we were also able to go through it manually to make sure we didn't miss anything. This took several days. As the number of users grows, this will be automated further. For now, here is the list of keywords we have used — please tell us if we are missing any — either in a comment below, or on Twitter or Facebook, where we are @Transparentref: 8th (this yields a lot of results! We will need to refine to "8th ref" and some variants) Vote yes Vote no Repeal Abortion Unborn Foetus Pro-choice Pro-life
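For readers who want to picture the filtering steps above in code, here is a minimal pandas sketch; it is not TRI's actual script, and the export file name and the "country"/"ad_text" column names are assumptions about the WhoTargetsMe data layout.

# Minimal sketch of the keyword filter described above (assumed column names).
import pandas as pd

KEYWORDS = ["8th", "vote yes", "vote no", "repeal", "abortion",
            "unborn", "foetus", "pro-choice", "pro-life"]

ads = pd.read_csv("whotargetsme_export.csv")   # hypothetical export file
ads = ads[ads["country"] == "IE"]              # step 1: keep Irish ads only

pattern = "|".join(KEYWORDS)                   # step 2: keyword filter
mask = ads["ad_text"].fillna("").str.contains(pattern, case=False, regex=True)
referendum_ads = ads[mask]

# step 3 (manual review of borderline and non-political ads) happens outside the code
referendum_ads.to_csv("tref_candidate_ads.csv", index=False)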
TRI data collection and filtering process
1
tri-data-collection-and-filtering-process-11d83ec85689
2018-03-16
2018-03-16 10:16:22
https://medium.com/s/story/tri-data-collection-and-filtering-process-11d83ec85689
false
512
null
null
null
null
null
null
null
null
null
Open Data
open-data
Open Data
5,306
Transparent Referendum Initiative
TRI aims to enable an open and honest #8thRef debate, through transparency and scrutiny of targeted, paid political ads on social media.
facc891c192c
TransparentRef
29
17
20,181,104
null
null
null
null
null
null
0
null
0
284538178f0a
2018-02-18
2018-02-18 22:38:42
2018-02-18
2018-02-18 22:40:25
2
false
en
2018-02-19
2018-02-19 17:11:27
0
11d991741947
3.039937
2
0
0
Relocating in a short amount of time is a pretty common scenario for most families, business workers, graduate students, etc.…
3
Determining housing demands for frequent movers Relocating in a short amount of time is a pretty common scenario for most families, business workers, graduate students, etc. However, finding housing that is optimal for each end user seems to be a bit more difficult. Users of common housing apps like Zillow or Airbnb are always provided information regarding the surrounding areas and what sort of atmosphere one area is likely to provide over another. The only problem with this sort of information is that it needs to be tailored to the end user. Finding relevant data that would explain why a user demands one location over another is not so trivial a task, given that the needs and requirements of each user vary widely. So how can we extract ample amounts of information regarding the common housing demands of users and what attracts them to certain locations? First off, we can use current demographic charts from the US Census to determine where the hot-spot locations are in terms of population. Given that centers with large conglomerates of people tend to generate the need for areas directed towards recreation, such areas are likely to create a demand for housing in the vicinity. Also, since demographics create demand in the housing market, the fluctuations in housing prices are directly related to the demand for housing in that area. Thus, by examining these fluctuations one can get an idea not only of the demand by users for a specific location, but also of the relative value a piece of housing has based on location. But what is causing these fluctuations? Knowing this data, we are able to generate research for extracting the housing demands of frequent movers. The origin of our research begins with obtaining population density maps and superimposing those onto normal maps; this in itself separates the areas of sparse population from those of extreme density. At this point we need to get an idea of what a user who was in the scenario of having to relocate to one of these categories would want to be asked in order to sift through the available options. Such questioning would be attained by conducting at least five separate interviews with people who fit the demographics of our end users. Also, these interviews need to span varying levels of population density, so that all are accounted for. At this point we would have an idea of what characteristics of the area are integral in determining the optimal housing location for users. The next part of the problem to tackle is the specifications of the housing available to the user in a selected property, and which of them take precedence. Data that targets this can be found by analysing the advertising patterns in utilities offered by housing companies, which is covered in housing articles on the utilities of housing companies with the most return on investment. At this point we have a solid foothold in knowing a large portion of the data that creates demand for users to choose certain housing. While such data does exist that could aid in determining the housing demands of users, it is understood that bias does exist: while the sampled users and I may be able to provide ample relevant data, this will not be able to cover the whole population's demands. Given that our sample set of interviewees and data is not the population set, such skews will always exist.
However, to better account for this skew, we would individualize each set of data so that minimal bias exists between interviews; more specifically, when the end user is prompted on housing demand options, they will have a much wider field of data to choose from. As for the bias in our preconceived ideas about correlating different sets of data, such as population density with elevated surrounding housing costs, these correlations would be refuted or supported by each conducted interview to keep bias at bay. This provides our group with a sound means of extracting data on the housing demands of users.
Determining housing demands for frequent movers
11
determining-housing-demands-for-frequent-movers-11d991741947
2018-02-22
2018-02-22 20:45:31
https://medium.com/s/story/determining-housing-demands-for-frequent-movers-11d991741947
false
704
Course by Jennifer Sukis, Design Principal for AI and Machine Learning at IBM. Written and edited by students of the Advanced Design for AI course at the University of Texas's Center for Integrated Design, Austin. Visit online at www.integrateddesign.utexas.edu
null
utsdct
null
Advanced Design for Artificial Intelligence
advanced-design-for-ai
ARTIFICIAL INTELLIGENCE,COGNITIVE,DESIGN,PRODUCT DESIGN,UNIVERSITY OF TEXAS
utsdct
Urban Planning
urban-planning
Urban Planning
5,567
Shawn Victor
null
a2a2f7add785
shawnvictor
27
27
20,181,104
null
null
null
null
null
null
0
class Transformer:
    '''initialization code'''
    def __init__(self, param):
        # initialise parameters here
        pass

    '''in-house functions - functions users should not call'''
    def _masked_multihead_attention(self, args):
        # code for masked multihead attention here
        pass

    def _multihead_attention(self, args):
        # code for multihead attention here
        pass

    def _encoder(self, args):
        # code to build the entire encoder here
        pass

    def _decoder(self, args):
        # code to build decoder
        pass

    '''user callable functions - users can call if they want, but don't necessarily need to'''
    def make_transformer(self):
        # take the output of encoder and merge with decoder inputs
        pass

    def make_loss(self):
        # write the loss function and apply gradients functions
        pass

    def initialize_network(self):
        # initialize the session and variables
        pass

    '''operation functions - functions to properly run the model'''
    def run(self, args):
        # code to run a single input sequence and generate outputs
        # can use this function to both train and infer only
        pass

    def save_model(self):
        # save the model according to requirements
        pass
15
32881626c9c9
2018-08-26
2018-08-26 15:06:36
2018-09-21
2018-09-21 17:56:50
2
false
en
2018-09-26
2018-09-26 15:09:48
1
11d9a29219c4
9.417296
5
0
0
Code shall be written!
5
Let’s build ‘Attention is all you need’ — 2/2 Code shall be written! The biggest issue I find when learning AI is the lack of proper, well-documented and, most importantly, simple code. Often the people who write this code are professionals who spend most of their time writing industry-grade software, but for people like me who don't have a lot of experience or practice with writing code, and even less with implementing research papers, this is a problem. Even the code that is available online doesn't properly explain the reason behind selecting a particular strategy, why one tensorflow function was used over another, or which tensorflow function should be used where. Another problem is the lack of computing power or of the data sets used by researchers, and even if somehow we are able to get those, how to actually use them! Well, no more worries, I am here trying to solve this. In a previous blog post I wrote about the gist of this model. And here we finally tackle how to build this thing. The first thing we need to know is the way tensorflow works: since everything happens in the background as a computation graph and most plain Python constructs don't apply, we need to figure out a way to use tensorflow functions to reach our goal. You can find the actual code by clicking the link below: yashbonde/transformer_network_tensorflow transformer_network_tensorflow - Tensorflow implementation of transformer network github.com I recommend you open the code in one tab and read the reasons here; apparently you can't display blocks of the same code on Medium using GitHub gists. Here I will discuss specific code choices and explain some tricky parts. There is one major thing you need to know about, even if you have ignored it till now: using scopes, and using them well. The main reason behind having to use a large number of scopes is the stacks of identical units used in the transformer network. We start our process by writing down the outline of our transformer class. If you look at all the major functions used, they will be in this order. The way to write code for any model is in the template above. Divide the class into four major parts: initialisation, internal functions, user callable, operation. The initialisation part has def __init__() in it. Internal functions are the functions which are necessary to build the model. This part has the bulk of the code, since this is where all the operations are. In our code we have two major blocks, masked-multihead-attention and multihead-attention, and two main units, encoder and decoder. So we write functions for building those. The tensorflow backend operates as a computation graph, thus this is where we write all the nodes to it. We denote these functions by placing an underscore before their names. User callable functions are the functions that the user can call if they want, but don't necessarily need to, to run the model. Most of these functions will be called during __init__ only. In our case we have make_transformer(), which combines the input from _encoder and _decoder to make the final computation graph that is able to perform inference. To make the model trainable we need to add more blocks, i.e. loss and training functions. We place these in make_loss(), which gives the loss and train_step. Operation functions are the functions that the user calls as needed. For our code we have the run() function, which runs a single iteration for an input sequence. You can also add code to save the model in any format that you want.
The rule of thumb is that if the model is going to be improved in the future, save it as a checkpoint file. If it's going to be deployed, use protobuf to store the model as a frozen graph. Almost every codebase can be broken down into these four blocks and then built according to this template. After countless days of writing code, both on projects and professionally, I made my own template and have been fine-tuning it. You should make one according to your own needs! Initialisation Functions So we start from the top: the first thing we define is LOWEST_VAL, the smallest value that can be used in the system. The lowest value that can be represented in a 32-bit float is ~1.175e-38, so we select a value close to it; we chose 1e-36, which is sufficiently small. We basically need to apply negative infinity, according to the paper, but since infinity values are not processed properly, we replace it with a very small number. Next we import our dependencies: utils has a function to determine the positional encoding, numpy is our most important linear algebra library, and tensorflow is used to make and process our model. Next we define our class and write the initialisation values. Most of the values are pre-filled as per the paper; the only parameter we need is VOCAB_SIZE, which is the number of words in the target language. We define some parameters like DIM_KEY and DIM_VALUE according to the formula given in the paper. Coming to line-52, I have defined the masking matrix as a placeholder, so it can be changed or fed at run time. The main idea behind masking has already been explained in the previous blog post. The challenge is to implement it at run time, when each input sentence might be of a different length. At line-61 I define an embedding matrix, in case the user does not want to compute word embeddings beforehand. Next we define four more placeholders: input_placeholder for the input sequence, output_placeholder for the output sequence, labels_placeholder for the target label which is to be predicted, and position, which keeps track of the position up to which the sentence has been generated. This last one is used with the masking_matrix as an argument to tf.nn.embedding_lookup(). The next functions are make_transformer(), build_loss() and initialize_network(). The use of these three will be explained later as we reach those functions. [Masked] Multihead Attention Shifting to the model architecture: if you think about it, both multihead attention blocks are very complex and entities of their own, so we can convert them into functions, where they simply return the output tensor. This is what we have done by creating separate functions, though both can be converted into a single function; I just find having them as two different entities more convenient. At line-75 we start the multihead_attention(), which takes four arguments, three of which are as explained previously, plus a reuse argument. It is very important to use tensorflow functionality such as scopes properly! This is the hidden trick that allows us to make these complex graphs possible. Scopes are nothing but the names given to the variables that are created, and in the transformer we use stacks, i.e. the same variables are used multiple times. To do that we use the function tf.variable_scope(), to which we pass two arguments: the name of the scope and whether to reuse it or not.
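To make the scope mechanics concrete, here is a condensed TF 1.x-style sketch of a single scoped attention head; it is a simplified illustration with assumed function name and argument shapes, not the exact code from the linked repo.

# Sketch: one attention head built inside a reusable variable scope (TF 1.x style).
# query/key/value are assumed to be [seq_len, dim_model] tensors.
import tensorflow as tf

def attention_head(query, key, value, dim_model, dim_k, dim_v, name, reuse=False):
    with tf.variable_scope(name, reuse=reuse):
        # tf.get_variable returns the same weights when the scope is reused
        w_q = tf.get_variable("weight_q", [dim_model, dim_k],
                              initializer=tf.truncated_normal_initializer(stddev=0.02))
        w_k = tf.get_variable("weight_k", [dim_model, dim_k],
                              initializer=tf.truncated_normal_initializer(stddev=0.02))
        w_v = tf.get_variable("weight_v", [dim_model, dim_v],
                              initializer=tf.truncated_normal_initializer(stddev=0.02))
        q = tf.matmul(query, w_q)                       # [seq_len, d_k]
        k = tf.matmul(key, w_k)                         # [seq_len, d_k]
        v = tf.matmul(value, w_v)                       # [seq_len, d_v]
        scores = tf.matmul(q, k, transpose_b=True) / (dim_k ** 0.5)
        return tf.matmul(tf.nn.softmax(scores), v)      # scaled dot-product attention

Calling it a second time with reuse=True (e.g. for the next stack) reuses the same weight_q/weight_k/weight_v variables under <name>/..., which is the whole point of the scoping scheme described above.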
The name of the scope can be anything; it just means that all the variables created under it will carry the name <outer_scope>/<inner_scope>/<variable_name>. Also note that the first time around we cannot pass the reuse argument as true, since we have to define the variables first. To store all the tensors that are created we make a list called head_tensors; as we define more and more tensors we keep adding them to this list. We iterate over the number of heads that we have and, just like the outer scope, if reuse is not true it means we are creating these layers for the first time, so we define reuse_linear as false. For each linear layer we define another variable scope. The best practice for generating variables under a scope is to use the tf.get_variable() function. The first argument is the name and the second argument is the shape. Once we get the variable, we still need to initialise it; for now we use tf.truncated_normal_initializer, which fills it with numbers drawn from a truncated normal distribution. This way we create the three weights, weight_q, weight_k and weight_v, and perform further calculations to get the head. Once the head is calculated, we add it to the list containing the different heads. After the linear layer iterations are finished, we need to concatenate the outputs of the different linear layers. We do that, perform the final linear operation, and return the value from multihead attention called mha_out. In the case of masked multihead attention we simply multiply in another value (the masking value) as shown in the diagram. To obtain that value we calculate it from the masking_matrix that we defined earlier. Now there is no way to just pass an index and obtain the values in tensorflow, and implementing it by running a session and calculating those values is very slow and inefficient. But there is a workaround: it involves using another tensorflow function called tf.nn.embedding_lookup(). It is logically similar to what we want: we give it a matrix and a list of the indices that we want. This gives us the mask value, which we later multiply in the linear layers. Encoder Once done with the multihead attention, the rest of the code is quite easy to build. Here too we use variable scopes to keep track of the variables that are being generated. We also make an empty list called stack_op_tensors that we will populate with the outputs of each stack. Next we iterate over the number of stacks that we have; if we are at the first stack we need to feed the model input as the input, otherwise we have to use the output of the previous stack. Then we again open a variable scope and reuse it when needed. We get the output from multihead attention and perform further operations on it. For now I am using l2-normalisation, but feel free to use any one that you think fits properly. For each stack we have two sub-layers: the first one has the multihead attention as described above and the second one has the feed-forward and normalisation part. After the calculation of each stack we add its output to stack_op_tensors and, when finished with all the stacks, return the list containing those tensors. Decoder The decoder module is structurally similar to the encoder module except that each stack has three sublayers and the inputs from the encoder go into the middle sublayer. We again open up variable scopes and reuse them when needed. After all the stacks are finished we get the final output and pass it through the dense layer to predict the final pre-softmax output.
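Here is a tiny self-contained sketch of that tf.nn.embedding_lookup trick, with assumed shapes rather than the repo's real ones: row i of a lower-triangular masking matrix is fetched by index exactly like an embedding row.

# Sketch (TF 1.x): fetch one row of the masking matrix at run time by position index.
import numpy as np
import tensorflow as tf

SEQLEN = 8
masking_matrix = tf.placeholder(tf.float32, [SEQLEN, SEQLEN])
position = tf.placeholder(tf.int32, [])              # how far generation has progressed

mask_row = tf.nn.embedding_lookup(masking_matrix, position)   # shape [SEQLEN]

with tf.Session() as sess:
    tri = np.tril(np.ones([SEQLEN, SEQLEN], dtype=np.float32))  # 1s up to each position
    print(sess.run(mask_row, feed_dict={masking_matrix: tri, position: 3}))
    # -> [1. 1. 1. 1. 0. 0. 0. 0.]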
User Callable Functions I have three different user-callable functions. make_transformer() defines the global scope, connects the output of the encoder to the input of the decoder, and creates a global variable called decoder_op which is used as the prediction of the model. Next we use build_loss(), which defines the global loss and train_step, which performs one iteration of gradient descent. I have made a change here: unlike the paper, I don't decrease the learning rate manually, but rather use AdamOptimizer to train the network. The last user-callable function is initialize_network(), which does nothing but start a session and initialize variables using global_variables_initializer(). Operation Functions When making such large models it is more important than ever to be able to debug the code. The first thing we should be able to do is see all the trainable variables, so we make a function called get_variables() which returns all trainable_ops. Another one I am using is save_model(), which saves the model in any format that you want. I am yet to finish writing that part, but there is a variety of formats that you can save your model in, each with its own unique advantages and disadvantages. Some popular formats are protobuf, frozen_graph, checkpoint files, servings, etc. Now coming to the last function, run(): it runs one iteration for an input, i.e. one iteration as long as the output sequence. The first major value that we make is the masking matrix; as described above, it is a triangular matrix. If the user does not provide embeddings then we need to make our own. Using the common embedding matrix we get the embedding for the text and then multiply it with the positional encoding we get from utils. We also make labels for that input sequence. If we are training, the argument is_training will be true, and then we also need to keep track of the loss. So we define a variable called seq_loss which stores the value of the loss incurred over the generation of the sequence. We iterate over the number of words we have to generate, i.e. seqlen, and run operations at each step. If we are training the model then we need to run two more operations: one which calculates the loss and another, train_step, to perform one iteration of training. We return the generated sequence and, if training, the loss for the generation of each word in the sequence. This model took me over three weeks to fully implement from scratch and was a great learning experience. I also trained the model on a small toy dataset and made a jupyter notebook on it. Once the model is properly trained I will link it here. For any clarification/query do reach out to [email protected] Stay tuned for the next musing!
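As a rough illustration of the loss wiring described above (assumed tensor names and shapes, not the repo's exact code), build_loss() could look roughly like this: softmax cross-entropy over the decoder logits plus a single AdamOptimizer train_step instead of the paper's hand-tuned schedule.

# Sketch (TF 1.x): global loss and one-step training op.
import tensorflow as tf

def build_loss(decoder_op, labels_placeholder, learning_rate=1e-4):
    # decoder_op: [batch, seq_len, vocab_size] pre-softmax logits
    # labels_placeholder: [batch, seq_len] integer word ids
    loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=labels_placeholder, logits=decoder_op))
    train_step = tf.train.AdamOptimizer(learning_rate).minimize(loss)
    return loss, train_step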
Let’s build ‘Attention is all you need’ — 2/2
27
lets-build-attention-is-all-you-need-2-2-11d9a29219c4
2018-09-26
2018-09-26 15:09:48
https://medium.com/s/story/lets-build-attention-is-all-you-need-2-2-11d9a29219c4
false
2,394
Data Driven Investor (DDI) brings you various news and op-ed pieces in the areas of technologies, finance, and society. We are dedicated to relentlessly covering tech topics, their anomalies and controversies, and reviewing all things fascinating and worth knowing.
null
datadriveninvestor
null
Data Driven Investor
datadriveninvestor
CRYPTOCURRENCY,ARTIFICIAL INTELLIGENCE,BLOCKCHAIN,FINANCE AND BANKING,TECHNOLOGY
dd_invest
Machine Learning
machine-learning
Machine Learning
51,320
Yash Bonde
Final Year Undergrad at NIT Raipur! Interest lie from Artificial Intelligence to Graphics Design. Seem interested in my musings, hit that clap! 🐾
3efe9aaeacff
yashbonde
85
87
20,181,104
null
null
null
null
null
null
0
null
0
32881626c9c9
2018-08-05
2018-08-05 05:40:34
2018-08-05
2018-08-05 05:16:42
1
false
en
2018-08-05
2018-08-05 05:41:37
4
11da1daf2fda
2.913208
3
0
1
Social phenomena involve webs of large numbers of elements that interact with each other in complicated, non-linear ways. These…
5
Dealing with Complexity through ‘Swarm AI’ Social phenomena involve webs of large numbers of elements that interact with each other in complicated, non-linear ways. These interactions create coherent structures and “emergent” properties. Seen in this way, social phenomena bear a relationship to the flow of a turbulent fluid or the behavior of a traffic jam, which requires a fundamental rethinking of scientific modeling. To give a specific example, a country’s economy is a complex system par excellence: millions of individual humans interacting with each other in a movable maze of possibilities and constraints ranging from the physical availability of resources to education levels and fiscal policies, all while the country itself is furiously exchanging goods, services and individuals (aka agents) with the rest of the world. Agent-based modeling is a form of computer simulation that programs groups of “agents” with certain behaviors, goals, and ideas and then measures or observes the effects of those particularities at the macro level. In the 1970s, economist Thomas Schelling developed a famous example of an agent-based model, or ABM (a version of the model can be downloaded here). In very simplified terms, each agent was given a limited goal that the modeler could adjust rather than being asked to form segregated or integrated groups, and then the patterns of separation emerged seemingly all by themselves. This tendency towards self-organization, or the spontaneous emergence of patterns, is the fundamental idea that drives research with ABMs, and it applies whether the models are being used to study economic relationships, rebellion and policing, or cellular formation and the formation of insect hives. This is why ABM is also sometimes called Swarm AI. The “intelligence” or adaptiveness of the simulation is not programmed in from the top down, but rather emerges from the aggregated effects of interacting parts. What are the uses of agent-based models? ABMs can: Test Hypotheses, comparing the outputs of the model against real-world quantitative data and ultimately supporting or challenging different theories about the underlying causes of different social phenomena; Support Theory Building, transforming an informal or qualitative idea into clear procedures and behaviors that can be programmed into a model, forcing researchers to clarify their assumptions into testable ideas. Scholars and researchers in the social sciences and the humanities often balk at the second approach, arguing that the texture of human life is far too complex and qualitative to ever be usefully reduced to a set of simulated procedures. But these objections often miss the point of modeling altogether. Joshua Epstein, a key foundational figure in ABM research, has a compelling response to these challenges. The first question that arises frequently (sometimes innocently and sometimes not) is simply, ‘Why model?’ Imagining a rhetorical inquisitor, my favorite retort is, “You are a modeler.” Anyone who ventures a projection, or imagines how a social dynamic (an epidemic, war, or migration) would unfold, is running some model. But typically, it is an implicit model in which the assumptions are hidden, their internal consistency is untested, their logical consequences are unknown, and their relation to data is unknown. But when you close your eyes and imagine an epidemic spreading, or any other social dynamic, you are running some model or other; it is just an implicit model that you haven’t written down.
The question to be asked, then, is not whether to build models, but whether to build explicit ones in which assumptions are laid out in detail in order to study exactly what they entail. ABMs and ‘Swarm AI’ models are not a silver bullet, and they are never intended to fully replicate the tapestry of human experience in all its gloriously messy complexity. Yet they are a powerful means for testing our assumptions and investigating how the orders and patterns of social life emerge from simple rules and simple goals. They are an invaluable tool for anyone pursuing an empirical and scientific approach to culture and society. Such an intersection is very complex, and even more difficult to predict. If we accept that there is no new normal and our lives will keep changing, probably at an accelerating pace, for the rest of the lives of everyone who is alive today, we may find ABMs all the more helpful. Originally published at www.datadriveninvestor.com on August 5, 2018.
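To make the idea of emergence from simple rules concrete, here is a toy, Schelling-style sketch in Python; it is a deliberately simplified illustration, not the downloadable model the article refers to, and all parameter values are arbitrary.

# Toy Schelling-style agent-based model: agents move when too few neighbours
# share their type, and segregated clusters emerge from that single local rule.
import random

SIZE, SIMILAR_WANTED, STEPS = 20, 0.4, 50
cells = [None] * 80 + [0] * 160 + [1] * 160          # 20x20 grid, 20% empty
random.shuffle(cells)
grid = [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def unhappy(r, c):
    """An agent is unhappy if under SIMILAR_WANTED of its neighbours match it."""
    me, same, total = grid[r][c], 0, 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            nr, nc = (r + dr) % SIZE, (c + dc) % SIZE
            if grid[nr][nc] is not None:
                total += 1
                same += grid[nr][nc] == me
    return total > 0 and same / total < SIMILAR_WANTED

for _ in range(STEPS):
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] is not None and unhappy(r, c)]
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] is None]
    random.shuffle(empties)
    for (r, c), (er, ec) in zip(movers, empties):     # relocate unhappy agents
        grid[er][ec], grid[r][c] = grid[r][c], None

print("\n".join("".join(".XO"[0 if v is None else v + 1] for v in row) for row in grid))

Nothing in the code asks for clusters, yet after a few dozen steps the printed grid shows large same-type blocks, which is the self-organization point made above.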
Dealing with Complexity through ‘Swarm AI’
11
dealing-with-complexity-through-swarm-ai-11da1daf2fda
2018-08-06
2018-08-06 23:18:14
https://medium.com/s/story/dealing-with-complexity-through-swarm-ai-11da1daf2fda
false
719
Data Driven Investor (DDI) brings you various news and op-ed pieces in the areas of technologies, finance, and society. We are dedicated to relentlessly covering tech topics, their anomalies and controversies, and reviewing all things fascinating and worth knowing.
null
datadriveninvestor
null
Data Driven Investor
datadriveninvestor
CRYPTOCURRENCY,ARTIFICIAL INTELLIGENCE,BLOCKCHAIN,FINANCE AND BANKING,TECHNOLOGY
dd_invest
Economics
economics
Economics
36,686
Daily Wisdom
null
ddd120ae7c2
dailywisdom
60
0
20,181,104
null
null
null
null
null
null
0
null
0
cc02b7244ed9
2018-04-30
2018-04-30 06:18:50
2018-04-30
2018-04-30 06:20:44
0
false
en
2018-04-30
2018-04-30 06:20:44
10
11da30a41dec
2.05283
0
0
0
PRODUCTS & SERVICES
5
Tech & Telecom news — Apr 30, 2018 PRODUCTS & SERVICES Video Recent 1Q18 results from Charter, one of the 2 leading cable operators in the US, confirm that cable players need to become broadband companies, and increase fears of an acceleration of PayTV declines. In 1Q18 Charter lost -122K customers yoy (vs. -40K expected, -100K in 1Q17), and shares fell -12% in just one day (Story) Regulation Bloomberg analysts comment on the draft European regulations governing the interaction of online platforms like Google, Amazon and Apple with the businesses that sell through them, and they conclude that the proposed rules do increase transparency, but are still not sufficient to create a competitive digital market (Story) HARDWARE ENABLERS Networks In its 1Q18 results call, Comcast Cable’s CEO didn’t show much concern about Verizon’s plans to deploy 5G-based fixed wireless services aiming to compete vs. cable. Comcast sees the evolution of cable as a guarantee to stay ahead in the Fixed BB market, and is also testing 5G technology to evaluate its own role with it (Story) American operators’ interest in accelerating 5G deployments, in a new technology “arms race”, combined with US government pressure on Chinese network vendors, is helping Ericsson and Nokia turn around the declining trends in revenue growth, as recent 1Q18 conference calls for both companies have shown (Story) Network regulation The German regulator is considering requiring cable TV operators to provide wholesale network access, a move that would have a significant impact on Fixed BB competition, and that is seen as a pre-emptive step before the potential merger of Vodafone and Liberty Global, to build a dominant cable operation in Germany (Story) SOFTWARE ENABLERS Artificial Intelligence Google’s co-founder Sergey Brin has just published his annual letter to investors, and in it he shows concern for the ethical implications coming alongside the current “tech renaissance”, driven by AI developments. He claims that the current era of “great inspiration” also needs “tremendous thoughtfulness and responsibility” (Story) FINANCE M&A It finally happened, and T-Mobile and Sprint announced this weekend their agreement for an all-stock merger, which is basically equivalent to an acquisition of Sprint ($26bn market cap) by T-Mobile ($55bn). The new company will be called T-Mobile and DT will own 42%, while SoftBank will own 27% (Story) The initial takeaway for almost everyone is that SoftBank has been punished for not accepting previous deals in which they would have retained a larger share of the combined company, as time has played against them, showing limited feasibility for Sprint to play alone and e.g. compete in the (expensive) 5G game (Story) Optimism at AT&T about the approval of the Time Warner acquisition, which would turn it into a global content production giant. After 5 weeks of testimony, the US Justice Dept. still has difficulty challenging the merger, as the two companies are not direct competitors. Closing arguments will be presented today (Story) Capital allocation Confirming theories that Apple’s equity story is pivoting to one based on superior shareholder returns, several Wall St analysts are predicting that the company will return an extra $100bn to its shareholders, driven by the redistribution of repatriated overseas profits. This would be announced at 1Q18 results this week (Story)
Tech & Telecom news — Apr 30, 2018
0
tech-telecom-news-apr-30-2018-11da30a41dec
2018-04-30
2018-04-30 06:20:47
https://medium.com/s/story/tech-telecom-news-apr-30-2018-11da30a41dec
false
544
The most interesting news in technology and telecoms, every day
null
null
null
Tech / Telecom News
tech-telecom-news
TECHNOLOGY,TELECOM,VIDEO,CLOUD,ARTIFICIAL INTELLIGENCE
winwood66
Regulation
regulation
Regulation
3,815
C Gavilanes
food, football and tech / [email protected]
a1bb7d576c0f
winwood66
605
92
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-13
2017-09-13 09:18:50
2017-09-13
2017-09-13 13:21:22
3
false
en
2017-09-13
2017-09-13 14:00:17
2
11da3f0a90d2
4.025472
2
0
0
By Santhosh Srinivasan
5
The human factor: explaining the error in the 2014 and 2015 Corruption Perceptions Index results By Santhosh Srinivasan The Corruption Perceptions Index map from 2016 The Corruption Perceptions Index (CPI) is one of Transparency International’s signature measurement tools and arguably the most quoted corruption index. Policy-makers, businesses, journalists, academics and activists alike use the CPI, but it is no stranger to controversy and criticism. Those of us who have worked on the CPI know its strengths and weaknesses, what it can and cannot do, and are always keen to think about how it could be better. In fact, the methodology that underpins the CPI has been carefully and cautiously refined over the years with the help of an expert group composed of some of the leading thinkers in this field. Despite all the care taken in refining the CPI method, we discovered errors in the manual data aggregation of the 2014 and 2015 editions of the CPI. How did this happen? The CPI is made up of a number (currently thirteen) of independent data sources. TI receives data directly from some of these data sources when the data is proprietary or not yet publicly released; or we access the data from the Internet where it is publicly available. The first task of a CPI researcher is to create a single meta database containing the relevant parts that pertain to corruption from these data-sets for all countries covered. In the case of micro-level data (surveys of individual business people) this involves aggregating the data into macro country-level results. Due to the different formats of the underlying data-sets, ranging from Excel files and Word documents to JPG files, there is a substantial amount of manual work involved. The errors occurred during this step of creating a single data set. In 2015, it happened as the researcher copied and pasted the results by hand from the Bertelsmann Sustainable Governance Indicators (SGI) into the data-set. This was done using country names and scores rather than using a more sophisticated formula to match the data, based on unequivocal country identifiers, such as ISO codes. In the meta data set the term “Korea, South” was used, but the original Bertelsmann SGI file used the term “South Korea”. The “copy and paste” approach did not detect this difference. This led to the Bertelsmann SGI scores for countries listed alphabetically between “Korea, South” and “South Korea” being pasted incorrectly. As a result, all scores between Korea (South) and Slovenia shifted by one country (see table below). This changed the results for 11 countries. In 2014, the copy-paste error happened when transferring the data from the Economist Intelligence Unit (EIU) for two countries. Since these countries were not scored by EIU in 2014, their 2013 scores were copied and pasted into the meta data file. A copy-paste error similar to the 2015 one occurred, wherein the values were pasted into the wrong cells. This changed the index scores for three countries. In the case of Saint Vincent and the Grenadines, the score had to be revised from 67 to 62. In the case of Samoa, the country must now be excluded because it lacks three data sources; but in the case of Saint Lucia, the country can now be included because it has three data sources. Given the extent of manual work involved in the CPI, we always conduct a “second pair of eyes” verification and check of the CPI results. Unfortunately, this check did not pick up on the 2014 and 2015 “copy and paste” errors.
In addition, the external verification and check of the results also did not spot the errors, as it did not go back to the original data sets to check the data transfers. It only re-created the results based on the meta tables. We are aware of the inadequacy of this verification procedure and have, starting with the CPI 2016, increased the quality control and verification systems around the production of the CPI. What we do now, and what we will do going forward First, we have revised the internal process to minimise occurrence of such errors. Two separate research staff members compute the CPI independently. Moving forward we will also improve the level of computerisation and automation in the handling of raw CPI data and compiling the meta tables. Second, we have also enhanced the external verification processes. External reviewers do not simply verify the CPI calculations made by the TI-S research team as in previous years, but now also calculate the CPI independently. The results are then compared to ensure there is an exact match in scores and ranks. As with all our tools and products, we at TI are committed to regularly reviewing and updating the research approaches and methodologies. When we did this review of the CPI, we also found that the CPI aggregation script could include further clarifications in how to interpret and apply it. Going forward we will include specific instructions in the script for how to deal with decimal points and when the researcher must round to a certain decimal place. We have invested significant time and efforts in refining and improving the CPI methodology in terms of its transparency, comparability and ease of use over the years. Going forward we will make every effort to safeguard the integrity of the CPI methodology and compilation and to adapt to the evolving state-of-the-art research methods. Please contact Santhosh Srinivasan ([email protected]) or Coralie Pring ([email protected]) if you have any further questions. Annex: revised CPI 2014 & 2015
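As an illustration of the kind of identifier-based matching the post argues for, here is a small pandas sketch that merges on ISO codes instead of copy-pasting by country name; the file and column names ("iso3", "score") are assumptions for illustration, not TI's actual files.

# Sketch: merge a source dataset into the meta table on ISO codes, never on names.
import pandas as pd

meta = pd.read_csv("cpi_meta_table.csv")     # hypothetical: one row per country, keyed on ISO3
sgi = pd.read_csv("bertelsmann_sgi.csv")     # hypothetical: source data with its own country names

merged = meta.merge(
    sgi[["iso3", "score"]].rename(columns={"score": "sgi_score"}),
    on="iso3", how="left", validate="one_to_one")   # fails loudly if keys are duplicated

# Flag any source rows whose ISO code did not match the meta table at all
unmatched = sgi[~sgi["iso3"].isin(meta["iso3"])]
if not unmatched.empty:
    print("Check these source countries by hand:", unmatched["iso3"].tolist())

Because the join key is the ISO code, "Korea, South" versus "South Korea" naming differences can no longer shift rows, and any genuinely missing country surfaces as an explicit flag instead of a silent mis-paste.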
The human factor: explaining the error in the 2014 and 2015 Corruption Perceptions Index results
4
the-human-factor-explaining-the-error-in-the-2014-and-2015-corruption-perceptions-index-results-11da3f0a90d2
2018-04-21
2018-04-21 08:53:48
https://medium.com/s/story/the-human-factor-explaining-the-error-in-the-2014-and-2015-corruption-perceptions-index-results-11da3f0a90d2
false
921
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Transparency Int’l
Transparency International is the global coalition fighting against corruption. Follow us @anticorruption
72934c52428c
anticorruption
9,395
360
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-10-23
2018-10-23 12:58:40
2018-08-09
2018-08-09 12:55:48
9
true
en
2018-10-23
2018-10-23 13:01:20
22
11da8150d331
10.124528
1
0
0
Zero UI: The End of Screen-based Interfaces and What It Means for Businesses
5
Zero UI: The End of Screen-based Interfaces and What It Means for Businesses The television introduced us to the world of screens for the first time. Today, not a minute goes by without interacting with a screen, whether it is a computer or a mobile phone. Soon, however, we will be entering the era of screen-less interaction, or Zero UI. A lot of devices and applications such as Google Home, Apple Siri, and Amazon Echo are already engaging with their end-users without the need for a touchscreen. What Is Zero UI? In simple terms, Zero UI means interfacing with a device or application without using a touchscreen. With the increasing proliferation of the Internet of Things (IoT), touchscreens will eventually become outdated. Zero UI technology will finally allow humans to communicate with devices using natural means of communication such as voice, movements, glances, or even thoughts. Several different devices such as smart speakers and IoT devices are already using Zero UI. As of 2018, 16% of Americans (around 39 million) own a smart speaker, with 11% having an Amazon Alexa device, while 4% possess a Google Home product. Zero UI is fast becoming a part of everything, from making a phone call to buying groceries. Components of Zero UI Tech companies around the globe are using a variety of technologies to build Zero UI-based devices. Almost all these technologies are related to IoT devices such as smart cars, smart home appliances, and smart devices at work. Haptic Feedback Haptic feedback provides you with motion- or vibration-based feedback. The light vibration felt when typing a message on your smartphone is nothing but haptic feedback. Most smartwatches and fitness devices use this technology to notify the end user. Personalisation Some applications also eliminate or minimise the need for a touchscreen with the help of personalisation. Domino’s “Zero-Click Ordering App” relies on the consumer’s personalised profile to place an order without requiring a single click. If you already have a Domino’s Pizza Profile and an Easy Order, the app will place your order automatically within ten seconds of opening it. You can also open this app using your smartphone’s voice assistant, such as Apple’s Siri. Voice Recognition Speaking of Siri, voice search and command is itself a component of Zero UI. Cortana, Google Voice, Amazon Echo, and Siri are a few examples of voice recognition-based Zero UI applications. This technology allows a device to identify, distinguish and authenticate the voice of an individual. That’s why it has found applications in biometric security systems. Glanceability Face recognition is also turning into one of the most popular Zero UI technologies. Most laptops and computers already use this technology to unlock screens. However, Apple’s new Face ID feature takes it to a whole new level. With this feature, you can unlock your iPhone with just a glance. There is no need to hold your phone to your face. It uses infrared and visible light scans to identify your face, and it can work in a variety of conditions. Gestures Gesture-based interfacing is available on a variety of smart devices. For example, Moto Actions, a gesture-based interface on Motorola smartphones, allows you to carry out tasks such as turning on the camera or flashlight without unlocking the phone. A more advanced example is Microsoft Kinect. You can add it to any existing Xbox 360 unit.
It uses an RGB-color VGA video camera, a depth sensor, and a multi-array microphone to detect your motion. Messaging Interfaces Messaging interfaces such as Google Assistant are also a part of Zero UI. If you already have a credit card or e-wallet added to your Google account, you can ask Google Assistant to order food with a few text messages instead of wading through different food ordering apps. Similarly, you can perform several other tasks such as calling, texting, and sending out emails using the Assistant. Brands That Are Doing It Several brands are trying to create a Zero UI experience for their customers. While some companies are building intuitive apps, others are making their services available on Zero UI devices such as Amazon Echo and Google Home. Uber The Uber Alexa skill has been available on Amazon Alexa for some time now. Once you have installed the Uber Alexa skill on your Amazon Echo and set the location of your Amazon Echo in the Uber app settings, just say “Alexa, ask Uber to book a ride” to call a ride to your doorstep. Starbucks Just like Uber, you can also order your favourite Starbucks drink using the Starbucks Reorder skill on Alexa. You can use the Google Assistant on your phone to order beverages and snacks from the Starbucks menu. You will need to link your Starbucks account and select a payment method before making the first purchase though. Apple’s HomeKit Apple’s HomeKit allows you to control smart home appliances using your iPhone. With a simple voice command, you can regulate the thermostat, or turn the lights on/off. CapitalOne The banking sector is also taking advantage of Zero UI. CapitalOne, one of the leading banks in the US, also provides banking services through Amazon Alexa. Registered users can check their bank balance or transfer money with voice commands. CBS CBS provides the latest news from all over the globe through Amazon Alexa. You can also access CBS TV shows, movies, and other programs using this feature. How Will Zero UI Affect Web Design? Although Zero UI may seem like the end of visual interfaces, that’s unlikely to happen. Humans are visual beings. We can retain visual information better and longer. So, Zero UI is not going to eliminate screens. However, it is going to change the present concept of web design forever. Contactless management and predictive thinking are going to be the pillars of Zero UI design. The present web design, being two dimensional, is mostly built on linear sequences. For example, voice search is often carried out using simple voice commands such as “Call my father” or “Call an Uber” or “Tell me the score of yesterday’s Knicks game.” However, putting them all together (“Call my father, then an Uber, and then tell me the Knicks game score last night”) will probably render your voice search query useless. Web design will have to evolve to handle the complex nature of human conversation. What This Means for Designers — Understanding Data and AI Zero UI brings the search and the purchase history (or behavioural data), two critical components of digital marketing, closer to each other. In other words, designers will need to design systems that are intelligent enough to contemplate and create the content you need. This, in turn, means that, as a designer, you will need a thorough understanding of data analytics and Artificial Intelligence (AI). You will need to design the UI for interaction with the device, not just the screen, as it will not be limited to smartphones or computers.
For example, let's say you have to build a thermostat that can sense gestures. However, there are dozens of ways a user might tell it to increase the temperature. That's where data analytics and AI come in. The more you know about your consumers' psychology and behavioural patterns, the easier it becomes to design a Zero UI device. What Zero UI Will Mean for Businesses In a world of Zero UI, your business will rely on providing the right recommendations at the right time. You can't afford to wait for the customer to initiate searches themselves. Your business needs proactive inspiration. Data You are probably already collecting tons of data for your business. You will need to continue gathering this information for Zero UI development as well. However, the new extended data structure will require various technologies including machine learning, AI, IoT, data analytics, and the Blockchain to work together. As a result, it will mostly be collected and controlled by tech giants such as Amazon and Google. So, small and medium businesses will have to form strategic alliances or register themselves with these brands to get this data. Context You will need to understand the meaning of what users are attempting to find. Most people start their search with a simple term, but they want more information about it. For example, if a user frequently orders Chinese takeout, his query for "restaurants near me" should list Chinese restaurants at the top. This is automatically inferred context. So, whenever someone uses a phrase related to your product or service, your system should automatically initiate the next steps to guide them to your brand. Design As mentioned earlier, your business will have to move away from the linear web design process. Your prospects can take any of the channels in the extended data structure to reach your brand. Your system must be ready to handle most of this incoming traffic. For example, a query for "nearest pizza shop" can come from a smart vehicle, a smartphone, or even a smart home. Can your web design handle it? Content The content creation process will also become more dynamic than ever. You will need to produce content in step with changes in consumer data. Local SEO will play a crucial role in Zero UI, as most people will be searching for places and services in a given area. For example, you can think of promoting location-based deals to attract more foot traffic to your store. With voice search taking over text, natural language search will also take precedence in your SEO. How to Optimise Your Site for Natural Language Search? When it comes to natural language search, voice search is the most prevalent way of communication. Nearly 70% of requests on Google Assistant are natural language searches, not the typical keywords people type in a web search. Optimising your website for this type of search is the first step towards creating a Zero UI experience for your prospects. Structure Your Content Usually, natural language queries are in the form of questions instead of phrases. If you are looking for a burger shop, you will ask "Where is the best burger shop in Chicago?" According to SEO Clarity, "how," "what," and "best" are the top three keywords used in these search terms. So, you need to optimise your website content for these keywords. Try to include them in your blogs, articles, and Q&A pages naturally. Use Long-Tail Keywords Conversational phrases also comprise long-tail keywords, as people often use more words when they are talking. 
So, you will also need to optimise your website content for longer and more conversational phrases and keywords. While you are at it, try to maintain natural language throughout your content marketing efforts. Add your Website/Store to Google My Business Adding your website to Google My Business (GMB) helps attract more foot traffic through mobile searches, especially "near me" searches. Add your phone number, physical address, business hours, user reviews, and store or office location to GMB. Zero UI and the Problem of Privacy In its attempt to provide users with a seamless experience, Zero UI may also lead to serious privacy and security issues. The ability to order an Uber or a pizza from your Amazon Alexa may seem like a magical experience, but it requires sharing your personal information, such as credit card details, with not only Amazon but also third-party sites. Meanwhile, IoT devices are providing cybercriminals with new ways to steal personal data. On October 12, 2016, Mirai, a self-propagating botnet virus, brought down internet service on the US East Coast. The virus infected poorly protected internet devices. Similar attacks are taking place as we speak. Smart home appliances can also lead to invasions of your privacy. Recently, Amazon Echo recorded a family's private conversation and sent it to a random individual in their contact list. The victim, a Portland, Oregon resident, felt so insecure that she unplugged all the devices in her house. Though rare, incidents such as this raise some serious privacy and security concerns. Future of Zero UI At the moment, the future of Zero UI looks bright despite the security and privacy concerns. In the coming years, it will take the smart home and smart city concept to the next level. For example, planning a trip to a shopping mall in 2030 will only require you to say the magic words "Plan a trip to the XYZ mall tomorrow at 7 PM" to Google Home or Amazon Alexa. The system will take care of everything from planning a travel route to paying for the car parking in advance. Coupled with AI and advanced data analytics, Zero UI devices will be able to form an empathetic, personalised relationship with your consumers. As human-machine interactions become more natural and human-like, they will open new opportunities for digital marketers. In Conclusion The Zero UI concept aims to make every community, marketplace, on-demand service, e-commerce site, and mobile application more interactive. In doing so, it is going to open new doors for marketers and designers alike. Hopefully, this comprehensive coverage of the topic proves helpful in understanding its magnitude, its consequences, and the opportunities it will create. If you still have doubts, let us know in the comment section. We will get back to you as soon as possible. Author Bio: I am the President and Founder of E2M Solutions Inc, a San Diego-based digital agency that specialises in white label services for website design and development and ecommerce SEO. With over ten years of experience in the technology and digital marketing industry, I am passionate about helping online businesses take their branding to the next level. If you wish to discuss how we can develop your brand or provide graphic design for your product or business, email us: [email protected] Inkbot Design is a Creative Branding Agency that is passionate about effective Graphic Design, Brand Identity, Logos and Web Design. 
T: @inkbotdesign F: /inkbotdesign Originally published at https://inkbotdesign.com/zero-ui/ on August 9, 2018.
Zero UI: The End of Screen-based Interfaces and What It Means for Businesses
1
zero-ui-the-end-of-screen-based-interfaces-and-what-it-means-for-businesses-11da8150d331
2018-10-23
2018-10-23 15:20:26
https://medium.com/s/story/zero-ui-the-end-of-screen-based-interfaces-and-what-it-means-for-businesses-11da8150d331
false
2,365
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Inkbot Design
Inkbot Design is a Creative Branding Agency and Belfast Graphic Design Company. https://inkbotdesign.com/
62adbba0f446
inkbotdesign
10,580
15,248
20,181,104
null
null
null
null
null
null
0
null
0
1a538b190a3b
2017-05-09
2017-05-09 08:10:06
2017-10-08
2017-10-08 19:40:04
4
false
en
2017-10-08
2017-10-08 19:40:04
10
11daa7b598f
1.560377
0
0
0
Unexpected consequences of using AI meeting schedulers — medium.com True AI is great. You can use it to predict system errors, drive cars…
1
🤔 Unexpected Consequences of Using Ai, Liberate Ai from “guys with Hoodies”, Physiognomy’s New Clothes… Unexpected consequences of using AI meeting schedulers — medium.com True AI is great. You can use it to predict system errors, drive cars across the country, and stay healthy, wealthy, and wise. But please keep it out of my inbox. AI assistants are all the rage and… Melinda Gates and Fei-Fei Li Want to Liberate AI from “Guys With Hoodies” — backchannel.com Melinda Gates and Fei-Fei Li discuss the promises of artificial intelligence, and how to diversify the field, in a Q&A with Backchannel’s Jessi Hempel. Accepted Papers, Demonstrations and TACL Articles for ACL 2017 chairs-blog.acl2017.org The below lists the accepted long and short papers as well as software demonstrations for ACL 2017, in no particular order. Every single Machine Learning course on the internet, ranked by your reviews — medium.freecodecamp.com A year and a half ago, I dropped out of one of the best computer science programs in Canada. I started creating my own data science master’s program using online resources. I realized that I could… Physiognomy’s New Clothes — Blaise Aguera y Arcas — medium.com Deep learning is a powerful tool for analyzing human judgments to unmask prejudice. When models are trained on biased data, then called “objective”, the result can be a new kind of scientific racism.
🤔 Unexpected Consequences of Using Ai, Liberate Ai from “guys with Hoodies”, Physiognomy’s New…
0
unexpected-consequences-of-using-ai-liberate-ai-from-guys-with-hoodies-physiognomys-new-11daa7b598f
2018-05-02
2018-05-02 04:31:25
https://medium.com/s/story/unexpected-consequences-of-using-ai-liberate-ai-from-guys-with-hoodies-physiognomys-new-11daa7b598f
false
228
We help designers, creatives and startups transform to an AI supported workflow.
null
autonomdes
null
Autonomous Design
autonomous-design
DESIGN,ARTIFICIAL INTELLIGENCE,CREATIVE,TOOLS,ALGORITHMS
autonomdes
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Olivier Heitz
Lead Product & UX Designer at Swisscom’s Data, Analytics & AI Group. https://olivierheitz.com
338bad3f9b43
snowfish2000
431
1,300
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-03
2018-03-03 00:45:49
2018-03-03
2018-03-03 17:13:40
1
false
en
2018-03-03
2018-03-03 17:14:17
0
11db4cf436ed
1.709434
2
0
0
The Role of Tolerance in User Satisfaction
4
Technology and Addictive Drugs The Role of Tolerance in User Satisfaction I happen to find them just OK. I have written recently that technology isn't about innovation anymore, if it ever was. I argued that it is impossible to innovate when current capabilities are based on misleading and deceptive claims about the abilities of machines, and massive exaggerations or even outright lies about the existence of computers that are intelligent or soon will be. I went on to say that the techno-elites have sold so many on their vision of AIs on every street corner and in every phone (there is no AI, it does not exist) and machines that learn (they cannot), that to admit to their deception now would be tantamount to suicide. Essentially, they have broken faith with the people of the world, deceived all of us, and I do think that most know it even if they will not admit it. And so they keep doing what it is that they do, "inventing" one ridiculous gadget after another, each only a fraction more "advanced" than the previous. With each "upgrade" the "improvements" become less and less useful, and the end user less and less impressed. Many have likened this decrease in user satisfaction with changes in technology over time to the tolerance that builds to many addictive drugs with continued use. By this view, just as the addictive drug must be taken in ever greater quantities to achieve the same "high", so it takes bigger leaps in technological powers to impress us as our gadgets continue to evolve with time. It is a persuasive argument, and the analogy a strong one; however, I partly disagree. To continue in the vein of the original analogy: if we are to view technology as the drug, then yes, I agree we seem to need "more" of it to reach the same level of satisfaction. However, unlike an addictive drug, which (depending on your dealer(s), I suppose) mostly remains at a similar average strength with time, technology is being diluted. It is growing weaker and weaker. We remain as addicted as we ever were, only slowly sobering up to the reality of our situation, as users who got used. Moreover, because technological improvements have slowed just as we have come to expect them to accelerate, the letdown is even greater and the low even lower.
Technology and Addictive Drugs
2
technology-and-addictive-drugs-11db4cf436ed
2018-08-09
2018-08-09 01:56:27
https://medium.com/s/story/technology-and-addictive-drugs-11db4cf436ed
false
400
null
null
null
null
null
null
null
null
null
Addiction
addiction
Addiction
13,831
Daniel DeMarco
Research scientist (Ph.D. micro/mol biology), Food safety/micro expert, Thought middle manager, Everyday junglist, Selecta, Boulderer, Cat lover, Fish hater
7db31d7ad975
dema300w
3,629
148
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-01-27
2018-01-27 22:49:33
2018-01-27
2018-01-27 22:52:17
1
false
en
2018-01-27
2018-01-27 22:52:17
0
11df11e345d0
3.45283
2
0
0
As artificial intelligence is evolving over the coming century, it permeates into more industries. Today I want to speak about the use of…
5
Implementation of artificial intelligence in Human resources. As artificial intelligence evolves over the coming century, it will permeate more and more industries. Today I want to speak about the use of artificial intelligence in HR. One of the upcoming trends in the industry is to outsource tedious, repetitive tasks to artificial intelligence software. Employees are a huge factor in driving the success of any business; therefore, the competition for the best talent is high. If you are trying to fill a position, it takes an average of 52 days to find the right candidate, according to statistics from the Talent Acquisition Factbook, 2016. Sourcing qualified candidates takes 75–80% of the recruiting process, and the long, drawn-out selection process might be the reason your perfect candidate has already accepted an offer from a rival company. Identifying the right candidates from a large applicant pool is the hardest part of talent acquisition. Screening resumes efficiently and in a timely manner remains the biggest challenge in the field. Speeding up the process of selecting a potentially perfect fit for the position is not only time-saving but also helps create a competitive advantage. Companies using artificial intelligence in the recruiting process have noticed that it helps prevent a homogeneous workforce and guarantees diversity. At Evry it led to more women being hired in the traditionally male-dominated high-tech sector. One of the most promising programs is Ideal. I was in touch with Dwayne Lutchman from that company to request a demo version. Unfortunately, one of the drawbacks of that program is that it requires an applicant tracking system. In addition, it currently doesn't understand any language other than English. The principal function of an applicant tracking system is to provide a central location and database for a company's recruitment efforts. The Ideal.com program can be easily integrated into an existing ATS, using augmented intelligence to automate parts of the recruiting workflow, such as screening resumes and shortlisting candidates to interview. The software learns about existing employees' experience, skills, and other qualities and applies this knowledge to new applicants in order to automatically screen, rank, and shortlist the strongest candidates (a simplified, illustrative sketch of this kind of ranking appears below). The software can also enrich candidates' resumes by using public data sources about their prior employers as well as their public social media profiles. There are still drawbacks to using this program. Ideal's AI for recruiting needs several hundred to several thousand resumes for a specific role to learn how to screen resumes as accurately as a human recruiter. It might still be beneficial if you operate at high volume, for example looking for workers for chain grocery shops. If you operate in a tight, specific market segment, the cost of implementation might not pay off. There are also HR chatbots that use augmented artificial intelligence. I recently found a video of one being used in an interview with a job seeker. From my point of view it still does not look very professional, and it is very clear to the candidate that he is talking to a bot. It might damage your recruiting process if your top candidate feels humiliated at being deprioritized. But that might change if the use of chatbots in recruiting becomes more socially accepted. Without a single doubt, it is an interesting technology development. 
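To make the screening-and-ranking idea above concrete, here is a toy sketch only, not Ideal's actual algorithm: it ranks hypothetical, made-up applicant resumes against profiles of existing employees using plain TF-IDF text similarity.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical profiles of current employees in the role (the signal the
# screening model learns from, per the description above).
employee_profiles = [
    "5 years Java backend development, Spring, REST APIs, SQL",
    "Java microservices, Kubernetes, CI/CD pipelines, SQL and NoSQL",
]

# Hypothetical incoming applicant resumes to be screened and ranked.
applicants = {
    "candidate_a": "Senior Java developer, Spring Boot, REST, PostgreSQL",
    "candidate_b": "Graphic designer, Photoshop, branding, illustration",
    "candidate_c": "Backend engineer, Go and Java, Docker, Kubernetes",
}

# Vectorise all documents with TF-IDF, then rank applicants by cosine
# similarity to the averaged employee-profile vector.
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(employee_profiles + list(applicants.values()))
profile_vec = np.asarray(tfidf[: len(employee_profiles)].mean(axis=0))
scores = cosine_similarity(profile_vec, tfidf[len(employee_profiles):])[0]

for name, score in sorted(zip(applicants, scores), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: similarity {score:.2f}")
```

A real system would need far richer features (structured work history, skills taxonomies) and explicit bias controls, which is exactly the caveat discussed next.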
As the hype about artificial intelligence continues, people might be overeager to implement new elements in their business without a clear estimate of the potential risks involved. Before implementing a new technology, you have to assess the impact of a failed implementation in case it all goes wrong. Unfortunately, even the most advanced machines still make mistakes; it is just that our tolerance for a machine's mistakes is far lower. Speaking of artificial intelligence, consider Microsoft's Twitter chatbot Tay, which was shut down after just a day. Pretty soon after Tay launched, people started tweeting the bot all sorts of misogynistic, racist, and sexist remarks. And Tay, being essentially a robot parrot with an internet connection, started repeating these sentiments back to users, transforming from an innocent IT tool into a mocker sending out harassing tweets. We can't forget that artificial intelligence is not free of errors. Not many people know how the algorithms work or what happens when they go wrong. Another essential point: AI for recruiting promises to reduce unconscious bias by ignoring information such as a candidate's age, gender, and race. However, AI is trained to find patterns in previous behavior. That means any human bias that may already be in your recruiting process, even if it's unconscious, can be learned by the AI. Therefore AI is changing the recruiting process, but it cannot be trusted blindly. Only a collaboration between humans and artificial intelligence will bring tremendous results. Recruiting AI won't replace recruiters but will transform the way they work. I expect that all the above-mentioned shortcomings of AI in recruiting will be addressed over time and that it will become a main tool for HR managers in the coming years.
Implementation of artificial intelligence in Human resources.
15
implementation-of-artificial-intelligence-in-human-resources-11df11e345d0
2018-03-27
2018-03-27 12:51:11
https://medium.com/s/story/implementation-of-artificial-intelligence-in-human-resources-11df11e345d0
false
862
null
null
null
null
null
null
null
null
null
Recruiting
recruiting
Recruiting
15,454
Anna Levina, MBA
Project manager, keynote speaker
7946ef52312e
eresida
2
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-06
2018-07-06 16:10:37
2018-07-06
2018-07-06 16:13:55
2
false
en
2018-07-06
2018-07-06 16:13:55
0
11dfee3e6bf1
2.956918
1
0
0
Would you consider a robot to be a human? For example, C-3PO from Star Wars, one of the first real AI powered bot shown in a movie. He…
5
Hey Cindy, Want to Chat With Me? Would you consider a robot to be a human? Take, for example, C-3PO from Star Wars, one of the first real AI-powered bots shown in a movie. He behaves and speaks like a human but is completely artificial. So should you even call C-3PO a he, and why not a she? The only answer to why C-3PO is labeled a he has everything to do with gender bias. In regard to males and females in the AI world, this can almost be labeled sexist behavior, as the two genders inherit separate chatbot characters to perform different tasks. Men are assigned the harder tasks and women end up with more meaningless ones. These include things like motivational phrases, daily quotes, horoscopes, or just plain casual chat. As the AI world continues to expand, developers design chatbots for either a company site or simply to be accessible online in order to attract a wide variety of users around the world. But as developers design these bots, they often assign them a gender. The bots that perform more intelligent and laborious tasks are, shockingly, almost all male. So when a bot has a female name, you can almost bet the AI developer has given it a cute demeanor and tasked it with meaningless chatter or casual chit-chat. Even a robot showing the slightest wit or intelligence is obviously assumed to be male. That's why even George Lucas could not refrain from putting C-3PO into the male camp of the robot universe. One would hope to find a bot named Harold or Robert that is used purely for humor, but these hopes are destroyed by the strong female presence across the board. Female-named chatbots are ruling the chit-chat world, with males falling behind and sticking to the male-expected duties, such as investing, building, or taxes. One example of a typical female-based bot goes by the name of Hey Jess. Hey Jess's duties include daily motivation, help setting goals, and positive insight; or in other words, nothing. This bot can motivate users when they feel down or not as focused on their daily life as they should be. A user is able to chat with her via Facebook Messenger when they feel down and need a little bot pick-me-up. Hey Jess represents everything women should not be viewed as: frilly, dumb, chatty, good for nothing but cute quotes, motivational phrases, and casual chat. Her icon picture especially, a young strawberry blonde with big blue eyes and a soft smile, screams an assumed Valley-Girl demeanor. She looks innocent and ready to help anyone who chats her way. Even her name, Hey Jess, shows her unprofessionalism, as no serious businessperson, male or female, would introduce themselves with "Hey" and then a nickname. The table below is from a personal survey of 40 different types of online chatbots, 20 male and 20 female. Evidence shows that almost all of the female bots were stuck with unimportant and meaningless tasks, as mentioned earlier, like horoscopes or even being an online girlfriend. All the male bots performed tasks such as teaching ecommerce or budgeting, riddles, legal advice, or even financial analysis. One aspect of these chatbots that can surprise a user is that the female bots were not only given female names, but names with an almost childlike ring. As if the names Mildred or Gertrude were not as effective for an online girlfriend as Julie or Cindy would be. AI developers should name their chatbots with a male or female name they like, regardless of the bot's function or a gender-specific persona. 
A male bot named Frederick could have the function of telling a user their daily horoscope and how their week will go according to the stars. A bot named Princess could be one of the go-to bots for MLB score updates and player statistics. Male and female chatbot tasks should not be categorized separately, but treated as equal, doing away with this artificial sexism.
Hey Cindy, Want to Chat With Me?
1
hey-cindy-want-to-chat-with-me-11dfee3e6bf1
2018-07-06
2018-07-06 16:13:55
https://medium.com/s/story/hey-cindy-want-to-chat-with-me-11dfee3e6bf1
false
682
null
null
null
null
null
null
null
null
null
Bots
bots
Bots
14,158
Terez Touhey
Intern turned chatbot boss @ Verne AI GmbH
7c6a85b65068
tm19touh
7
7
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-13
2018-07-13 18:14:56
2018-07-13
2018-07-13 18:30:57
0
false
en
2018-07-13
2018-07-13 18:30:57
3
11e0afea8b70
2.890566
2
0
0
Growth of AI is one of most important business trend for the decade. This is not restricted to any particular industry but finds myriad…
5
HOW AI IS CHANGING MARKETING The growth of AI is one of the most important business trends of the decade. It is not restricted to any particular industry but finds myriad applications. Most people think of AI as a tool to eliminate or automate mundane and menial tasks such as scheduling. But it is actually capable of impacting businesses in a much larger way and is on its way to becoming one of the most sought-after ways of working. Artificial intelligence will, on average, boost rates of profitability by 38% and provide an economic boost of $14 trillion in additional gross value by 2035, according to research by Accenture. We have seen the much-hyped use of AI and Machine Learning for self-driving cars, digital assistants and image-recognition software. But today we are asking: what does AI bring to the marketing industry? The quick answer is that it helps us make relevance possible, at scale, in a short time. AI has the ability to transform how people engage and interact with brands and how they consume information, thereby giving a new meaning to storytelling by customising it. Digital marketers today have a big wave of information coming their way in the form of advanced analytics from various tools including Google Analytics, Google Adwords, Hotjar, Ahrefs and several others. This is providing more insight into consumer behaviour than ever. Hence, the time is ripe for AI to make inroads into various aspects of marketing including advertising, email marketing, social media, interactive content, web experiences and even customer support. As a digital marketer at Tuple Tech, an AI-driven digital transformation company, I have had the opportunity to watch businesses grow and benefit tremendously from a strong data-driven approach. How does AI impact a business's overall marketing? Predicting Customer Behaviour With all the data points available in the form of analytics, it has become possible to gain true insight into consumer behaviour across channels and media. With valuable metrics ranging from ad impressions to clicks to the journey through the checkout process and even after-sales service, analytics describes what the user does. But AI takes this existing data and builds on it to predict what the consumer may do next. For marketers this means predicting the customers who are most likely to churn on the basis of actions they took, or most likely to purchase again, so they can be targeted appropriately (a toy sketch of this kind of churn scoring follows at the end of this section). Predicting customer lifetime value, understanding which acquisition channels work best to attract the right customers, and focusing on customers with a low CAC are all possible with AI. All in all, AI's ability to lead the marketing function by accurately predicting customer behaviour and outcomes can reduce costs significantly while also providing a delightful experience to the end consumer. 
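As a toy illustration of the churn-prediction use case above (the feature names and data are invented, and this is only a sketch, not a production pipeline), a data team might train a classifier on past customer behaviour and then score customers by their probability of churning:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Invented behavioural features per customer:
# [days_since_last_purchase, orders_last_90_days, support_tickets, avg_order_value]
n = 1000
X = np.column_stack([
    rng.integers(0, 120, n),
    rng.poisson(3, n),
    rng.poisson(1, n),
    rng.normal(50, 15, n),
])
# Synthetic churn label: long inactivity and many tickets make churn more likely.
churn_prob = 1 / (1 + np.exp(-(0.03 * X[:, 0] + 0.8 * X[:, 2] - 0.5 * X[:, 1] - 2)))
y = rng.random(n) < churn_prob

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Higher predicted probability = more likely to churn, so target those customers first.
print("test accuracy:", model.score(X_test, y_test))
print("churn probability of first 5 test customers:",
      model.predict_proba(X_test[:5])[:, 1].round(2))
```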
Data-Driven and Educated Marketing Most marketers today are data-driven in some way or another with respect to analytics, but educated guesses still rule the roost. This is true for bigger organisations too. The largest companies (those with at least 100,000 employees) are the most likely to have an AI strategy, but only half have one, according to MIT Sloan Management Review. With AI, marketers can save significant marketing dollars by optimising their ad spend across channels, experimenting with bids in a much faster, more agile manner than is possible with human effort alone. This applies especially to programmatic advertising. The same holds for A/B testing different ad creatives to see which one resonates better with the audience. ML will enable marketers not to stop at testing just two or three large audience groups, but to test creatives on much smaller, targeted audience segments to identify the best-performing ads. All this can be accomplished with minimal effort using Machine Learning. Personalised Delightful Experiences for Consumers Insight into consumer behaviour can power deep personalisation. This can be a key differentiator, since consumers increasingly expect more meaningful experiences. Personalised content and experiences have proved to be exponentially more effective than generic ones. Some ecommerce companies have been proactive in using machine learning algorithms to offer product recommendations, as well as chatbots to offer suggestions or customer support. Such use of Machine Learning and AI can prove to be significant leverage for marketers to achieve their desired goals. Adopting Artificial Intelligence is crucial for tomorrow's marketing teams. AI holds the power to completely change the landscape for martech. By taking up all the mundane, repetitive tasks, Machine Learning and AI will free humans to work creatively towards bigger problems and shaping the world.
HOW AI IS CHANGING MARKETING
3
how-ai-is-changing-marketing-11e0afea8b70
2018-07-13
2018-07-13 18:30:57
https://medium.com/s/story/how-ai-is-changing-marketing-11e0afea8b70
false
766
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Ishita Sharma
Digital Marketer | Entrepreneur | Learner
bb385d1998b9
ishi.shar
12
71
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-08
2018-05-08 20:18:57
2018-05-09
2018-05-09 02:31:01
2
true
en
2018-05-09
2018-05-09 02:38:56
2
11e231a41a0a
2.477673
21
1
0
I was pretty impressed with recent Facebook and Microsoft events, but Google’s event in 2018 has upped the AI game. Google’s Assistant is…
5
Google Assistant will Soon Make Business Development Professionals Obsolete MobileSyrup I was pretty impressed with recent Facebook and Microsoft events, but Google's 2018 event has upped the AI game. Google's Assistant is getting so smart it can place phone calls and humans think it's real. It's hard to believe AI can interact with people so naturally, but that's what we are up against; this is what is coming. This won't just replace customer service, it could easily replace sales professionals and business development jobs in the near future. I already thought Google Home had a nice voice, but this is another step up, and it arrived faster than many saw coming. Ain't Just No Creepy Convenience Creepy or convenient? Google Assistant can make human-sounding phone calls on your behalf, which is basically a large part of what many business development folk do as repetitive activities. We thought human-facing jobs would be safer from AI; what if it turns out this isn't the case? At the I/O conference in Mountain View, Calif., Google put the spotlight on the Assistant, bringing new voices, including one from singer John Legend, and more visuals. The phone-call demo, however, was kind of off the charts. It's not the consumer-facing stuff that's exciting here, it's the B2B implications. This could automate sales jobs, eventually, and soon-ish. Google's Assistant will soon be able to place phone calls to make reservations for you. It has a convincing voice and can save you time by reserving tables, scheduling hair appointments and more. Google says more work has to be done before it's ready. After Facebook Chatbots While Facebook Messenger has been a disappointment in recent years, Alexa and Google Assistant have not. We are starting to get a taste of what actual conversational commerce and micro Voice-AI interactions with personal assistants will look and feel like. Who has the most skin in the game? 1) Google 2) Amazon 3) Tencent 4) Huawei 5) Alibaba, basically in that order. The robo calls that Google Assistant can make basically fool humans into thinking there is a real person on the other end. If GA can make phone calls for us, it means we don't actually need customer service and sales in the same numbers, you get my drift? Google Assistant will be able to handle the automation of voice gracefully. Google has been working on this technology for many years, and Google Assistant seems to be pulling ahead of Amazon for certain tasks and certain potential for automation. Google Duplex Google Duplex will enable you to make actual phone calls with Google Assistant. If that's not AI augmenting humans, I don't know what is. Don't like making phone calls? Google will do it for you with Duplex. While Duplex is geared for B2C here, we'll easily be training this tech into B2B applications that can power the Google Cloud and, honestly, give it a competitive advantage. What Google calls an experiment is actually how we'll all be participating in training these AI systems, which, like Waymo, will get exponentially better with real-life experience. What do you imagine this means for the future of customer service or business development? There was no hint of a robotic voice, and no hint that the salon employee recognized they were talking to AI.
Google Assistant will Soon Make Business Development Professionals Obsolete
118
google-assistant-will-soon-make-business-development-professionals-obsolete-11e231a41a0a
2018-06-09
2018-06-09 07:55:34
https://medium.com/s/story/google-assistant-will-soon-make-business-development-professionals-obsolete-11e231a41a0a
false
555
null
null
null
null
null
null
null
null
null
Google
google
Google
35,754
Michael K. Spencer
Blockchain Mark Consultant, tech Futurist, prolific writer. WeChat: mikekevinspencer
e35242ed86ee
Michael_Spencer
19,107
17,653
20,181,104
null
null
null
null
null
null
0
null
0
e26b6c1a2407
2018-06-22
2018-06-22 17:42:53
2018-01-18
2018-01-18 08:00:00
0
false
en
2018-06-22
2018-06-22 17:47:12
3
11e6b6426a8
2.879245
0
0
0
Traditionally, ransomware security was based on matching viruses to a database of known malware. AI offers a more dynamic approach.
3
How Artificial Intelligence Can Help Save Troubled Lives Traditionally, ransomware security was based on matching viruses to a database of known malware. AI offers a more dynamic approach. As an investor in AI, I’m sensitive to the narrative that technology is subtracting humanity from our interactions. As the story goes, we’ve become so efficient at delivering exactly the content that appeals to predisposed biases, that everyone burrows more deeply into their digital worlds and connects less with each other in person. This results in developing deeper relationships with devices than fellow human beings, which can lead to increased isolation and loneliness. There’s some truth to this idea. But a new use of technology is now serving as a lifeline to catch people before they slip beneath the surface. AI is being developed to flag behavior that cries out for intervention, using natural language understanding, facial recognition, body signal monitoring, and mapping of patterns that indicate distress. This new frontier in mental health vaults media platforms and algorithms into critical societal roles. For those in the media and marketing industry (such as myself) who have been known to say “We aren’t saving lives here” — perhaps we should think again. The very same practices employed to better understand behavior, motivations and emotions for the sake of brand appeal, are now being applied to a much higher purpose. Steven Vannoy, an associate professor at UMass Boston, is dedicated to addressing mental health issues using technology. Currently, he says, the most common way of identifying people at risk of suicide is self-reporting — understandably ineffective. Hints of distress are more likely to be detected through behavior — similar to the way buying patterns are identified. Vannoy notes there’s a “new territory in the psychology field, mapping emotional experiences in real time to look for situations that have elevated risk.” One technology being tested by Vannoy and his team is Affectiva, the same facial recognition platform used by marketers to better understand emotional responses to advertising (which I wrote about in an earlier post.) The idea is to track the faces of at-risk individuals, so the technology is trained on expressions of extreme anxiety, sorrow, or pain. Vannoy and his team are in the early phases of applying facial coding technology as well as body and voice monitoring, using a smartphone app that checks in and asks questions such as, “How are you feeling about your day?” Recently, Facebook began employing AI to identify suicidal behavior, so fast action can be taken to intervene, as reported by TechCrunch. Facebook is obviously in a unique position to put people immediately in touch with loved ones if they are exhibiting what the AI algorithm identifies as risky or urgent behaviors. The algorithm is able to identify words, phrases and facial expressions that mirror those previously reported as suicidal. It can also prioritize the most urgent situations, so first responders can make “wellness checks” in person through local groups such as Save.org, the National Suicide Prevention Lifeline, and approximately 80 other partner organizations. The program generated one hundred interventions in November 2017, with some cases of first responders arriving before the person finished broadcasting on Facebook Live. This is a significant innovation in suicide prevention, as time is obviously of the essence in many cases. Minutes make a difference. 
According to Mental Health America, 43 million adults in the U.S. have a mental health condition, and 56% of that group lack access to care. Andrew Ng, a prominent figure in developing AI technology at Google and Baidu, has developed Woebot to bring therapy to millions who might otherwise go untreated for depression. Woebot is reported to have an impressive natural language capability, and a useful approach to conversational problem-solving that can actually succeed in developing a relationship with the user, according to the MIT Technology Review. Another AI company, Triggr, employs a buddy system meant to keep addicts from relapsing. Both the person in recovery and a friend or relative must agree to the baseline concept of notification if there are signs of relapse. The algorithm then keeps watch for erratic behavior, and the partner is notified when there is cause for concern. Sort of like an AI/human hybrid of an AA sponsor. These are just a few examples, and just the beginning. The combination of media-related skills and new, sophisticated tools is inspiring. In the mental health category, AI is able to help connect at the most personal and critical moments — and actually affect lives. by Sarah Fay, Managing Director Originally published at www.mediapost.com.
How Artificial Intelligence Can Help Save Troubled Lives
0
how-artificial-intelligence-can-help-save-troubled-lives-11e6b6426a8
2018-09-27
2018-09-27 20:52:37
https://medium.com/s/story/how-artificial-intelligence-can-help-save-troubled-lives-11e6b6426a8
false
763
Glasswing Ventures is an early stage VC firm investing in AI and frontier tech startups that enable the rise of the intelligent enterprise // www.Glasswing.vc // Where Ideas Take Flight
null
null
null
Glasswing Ventures
null
glasswingvc
GLASSWING,AI,VC,STARTUP,BOSTON
GlasswingVC
Mental Health
mental-health
Mental Health
75,731
Glasswing Ventures
Glasswing Ventures is an early stage VC firm investing in AI and frontier tech startups that enable the rise of the intelligent enterprise // www.Glasswing.vc
9593d2c91632
glasswingventures
15
4
20,181,104
null
null
null
null
null
null
0
null
0
f6f2c56e3bef
2018-03-02
2018-03-02 19:49:29
2018-03-02
2018-03-02 19:55:51
1
false
es
2018-03-02
2018-03-02 19:55:51
1
11e86a9adfdc
2.879245
1
0
0
El 20 de Febrero un grupo de investigadores de la industria y la academia se juntaron para discutir y proponer ideas para prepararse para…
2
AI and the Prevention of Malicious Use On February 20, a group of researchers from industry and academia met to discuss and propose ideas for preparing for malicious uses of Artificial Intelligence. Participating institutions included the University of Oxford, the University of Cambridge, OpenAI and Yale, among others. Why should this matter to us? According to research by Oxford Insights, eight countries already have a national Artificial Intelligence strategy: Canada, China, the United States, Singapore, France, the United Kingdom, the United Arab Emirates and Japan. In this context, Mexico is one of the two countries now beginning to develop an AI strategy, and it is expected to be among the first ten to have one. Map from https://www.oxfordinsights.com/insights/2018/1/23/aistrategies Developing an AI strategy means looking at areas such as ethics, public services, research and development, education and training, and digital infrastructure. One area with almost no prior research was the malicious use of AI. With these strategies under development, we want to talk a little about the topic and the conclusions of this group. As AI and Machine Learning education spreads and tools such as TensorFlow or Keras make it easier to implement different techniques, many new applications are emerging: from banking analysis and language translation to medical analysis and diagnosis. As this happens, the risk of malicious use of these tools also grows, and it is important to be aware of it. In the future, the following changes are expected: Expansion of current threats. With AI tools, attacks can be carried out against a far greater number of targets because of their scalable nature. Introduction of new threats. Attacks that use Artificial Intelligence will be able to achieve things that are impractical for humans. Changes in the types of attacks. Attacks using Artificial Intelligence have the potential to be much more effective and far better at exploiting vulnerabilities. What kinds of attacks might we encounter? Exploitation of human vulnerabilities, for example using Natural Language Processing to talk as if it were a person. Attacks on AI systems using adversarial examples. And it is not only the cyber world: there is also the possibility of AI-powered weapons (swarms of thousands of drones), attacks that cause autonomous cars to crash, scalable targeted propaganda, manipulative videos and much more. Today's Artificial Intelligence has the capacity to understand human behaviour, and that is a powerful tool that must be used with care. Some of the recommendations this group proposes are: Understand the dual nature of Artificial Intelligence. It can have a highly positive impact but also an extremely negative one. As a scientific community, we must start taking greater account of the potential uses of what we develop, and make this an essential consideration during development. We must also begin working with policymakers to raise awareness of the issue and prepare our society for the impact of these technologies. One example of this dual nature is the use of information filters: they can help you filter out fake news being spread, but they can also be used to manipulate public opinion. 
Learn about cybersecurity. Although there is already a great deal of cybersecurity research that can be applied to Artificial Intelligence, the developer community and the companies that use AI have not yet integrated their AI systems with these techniques. Broaden the discussion. At the moment only people in technology understand parts of Artificial Intelligence, yet tens of thousands of company directors, politicians and people from other fields make decisions based on a very superficial understanding of what AI really involves. Promote a culture of responsibility and transparency. It is important to have a culture in which research is open, allowing more concrete risk analysis, and in which expectations, norms and regulations are well known. Although the future of Artificial Intelligence is uncertain at this point, there are already many possibilities for its malicious use. As its applications grow across different social contexts, we as AI developers must be more aware of the possible impact of what we build and the purposes for which we build it.
AI y prevención de uso maligno
2
ai-y-prevención-de-uso-maligno-11e86a9adfdc
2018-04-14
2018-04-14 19:44:22
https://medium.com/s/story/ai-y-prevención-de-uso-maligno-11e86a9adfdc
false
710
Inteligencia Artificial para todos
null
AILearners
null
AI Learners
null
ai-learners
DEEP LEARNING,MACHINE LEARNING,INTELIGENCIA ARTIFICIAL,NEURAL NETWORKS,MACHINE INTELLIGENCE
AI_Learners
Inteligencia Artificial
inteligencia-artificial
Inteligencia Artificial
1,614
Omar Sanseviero
null
3b91f0907f57
osanseviero
201
105
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-12
2017-09-12 21:41:28
2017-09-12
2017-09-12 21:42:22
0
false
en
2017-09-12
2017-09-12 21:42:22
1
11ea006f7fe3
0.34717
0
0
0
That’s probably the most worthless and craven attempt at getting around AI; a fine thing for the american example of mindless playing…
3
The Shallowness And Craziness Of The National Insanity. That's probably the most worthless and craven attempt at getting around AI; a fine example of the American habit of mindless playing rather than serious thinking. It's very simple: the world will move forward based on a purely Darwinian system, not ridiculous platitudes. Most people around the world know that, but Americans are a peculiar breed, and it just indicates the shallowness and craziness of the national insanity right now. https://futurism.com/in-the-age-of-ai-we-shouldnt-measure-success-according-to-exponential-growth/
The Shallowness And Craziness Of The National Insanity.
0
the-shallowness-and-craziness-of-the-national-insanity-11ea006f7fe3
2018-06-09
2018-06-09 22:04:15
https://medium.com/s/story/the-shallowness-and-craziness-of-the-national-insanity-11ea006f7fe3
false
92
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Peter Marshall
I am extremely interested in AI, especially the not-so-good side of AI weapons and AI war, although the good parts are magnificent and wonderful too, naturally.
f6bab8ee3d29
ideasware
1,765
276
20,181,104
null
null
null
null
null
null
0
null
0
5e2e0cebdd81
2018-01-08
2018-01-08 13:53:51
2018-01-08
2018-01-08 18:36:52
0
false
en
2018-01-09
2018-01-09 18:40:51
28
11ea2a7bd30a
1.415094
4
0
0
Monday (Jan. 8th)
5
Phoenix Tech Events (Jan. 8th— Jan. 14th) Monday (Jan. 8th) JavaScript Accelerated Class (50% off with code “gfriend”) | 6pm @ Galvanize Galvanize IoT Hackers Show & Tell | 7pm @ UAT Internet of Things (IoT) — Phoenix, Arizona Tuesday (Jan. 9th) Instructor Hour and Live Code Mentoring | 5pm @ Galvanize Galvanize HackerNest Phoenix January Tech Social | 6pm @ Gangplank HackerNest Phoenix Tech Socials Intro to Game Programming with Python’s Pyglet | 6:30pm @ Galvanize DesertPy — Phoenix Python Meetup Group UX in 4D: A Holistic UX Approach | 6:30pm @ Neudesic UX in Arizona January: A Closer Look at TS 2.6 and 2.7 | 6:30pm @ DriveTime Phoenix TypeScript Blockchain ‘tnnl’: Privacy Layer | 6:30pm @ Classic Crust Pizza Blockchain Meetup Wednesday (Jan. 10th) 1 Million Cups: MindSpree | 9am @ Galvanize 1 Million Cups Phoenix Galvanize Web Development Discovery Session | 6pm @ Galvanize Galvanize Solving Common DBA Problems with Uncommon R| 6pm @ ICE Arizona SQL Server User Group Optimize Angular Performance | 6pm @ DriveTime ng-phx Phx Blockchain January Meetup | 6:30pm @ Galvanize Phx Blockchain Phoenix ReactJS Beginner & Experienced Talks | 6:30pm @ SRP Phoenix ReactJS Neuroscience of VR | 6:30pm @ Tempe History Museum Phoenix VR for Good Thursday (Jan. 11th) Galvanize Data Science Discovery Session | 6pm @ Galvanize Galvanize WordPress Meetup — Tempe | 6:30pm @ Endurance Arizona WordPress Group Geeks Who Drink Pub Trivia | 6:30pm @ Bonus Round Phoenix N.E.R.D.S. Friday (Jan. 12th) Snow Day! First ASU LAN of the Year | 5pm @ ASU(EDC 117) ASU eSports Association Hack Arizona 2018 Hackathon Kickoff | 7pm @ U of A University of Arizona Saturday (Jan. 13th) Machine Learning Essentials: Hands-on Clustering | 11am @ Galvanize Learn Data Science Phoenix Ace Comic Con Arizona 2018 ($55) | 11am @ Gila River Arena Ace Universe Sunday (Jan. 14th) Ace Comic Con Arizona 2018 ($55) | 11am @ Gila River Arena Ace Universe Save the Date for These Upcoming Events Jan. 16th | Women. Power. Technology 2018 Jan. 19th | LinkedIn Local Phoenix Jan. 20th | Street Fighter 5 Arcade Edition Launch Tourney Jan. 23rd | Amazon Alexa Skill Development Workshop Jan. 24th | Coplex Grand Opening Jan. 27th | IoT DevFest Arizona
Phoenix Tech Events (Jan. 8th— Jan. 14th)
43
phoenix-tech-events-jan-8th-jan-14th-11ea2a7bd30a
2018-01-10
2018-01-10 07:59:07
https://medium.com/s/story/phoenix-tech-events-jan-8th-jan-14th-11ea2a7bd30a
false
375
A comprehensive list of the awesome tech, entrepreneurship, and gaming events happening locally.
null
null
null
Phoenix Tech Events This Week
phoenix-tech-events-this-week
PHOENIX,TECH,WEB DEVELOPMENT,DATA SCIENCE,VIDEOGAMES
null
Startup
startup
Startup
331,914
Chris Huie
null
8af52aa51166
chrishuie
107
552
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-13
2018-04-13 11:47:40
2018-04-13
2018-04-13 11:49:23
2
false
en
2018-04-13
2018-04-13 11:49:23
2
11eaa5352302
2.100314
1
1
0
Hi Onioners!
5
Deep Intelligence Hi Onioners! Today I bring you an article with a proposal, an idea that could be a possible implementation for DeepOnion. I asked about this topic in the Q & A 4.0 thread and, after that, I have been writing this. What I asked @themonkii was: "What do you think about an artificial intelligence wallet implementation in the future, like the newest Fintech apps?" There will be people who don't know what I'm talking about, so let me explain some terms. What is Artificial Intelligence? Artificial Intelligence, or AI, is in a few words a specific area of computer science focused on developing systems capable of solving tasks in the most human-like way possible, responding to input information and performing tasks as people would, based on their intelligence. Professionals have been working on this for many years, but with the appearance of Big Data it has gained new momentum. For those who don't know, Big Data is the collection of huge amounts of data for analysis. (Google Images, source https://apttus.com) Hand in hand with Big Data we find a type of Artificial Intelligence named Machine Learning, which consists of creating systems that learn automatically and autonomously over time, without human intervention; for this you need a huge amount of information (this is where Big Data comes in). It usually works by identifying complex patterns and acting on them. Going deeper into Artificial Intelligence we find Deep Learning, or Neural Networks, the area that tries to simulate the workings of the human brain in learning and recognition. Thanks to these techniques you can develop apps or systems that make life easier for humans by automating their activities without spending time on them. If we apply this to the financial world, we get what many startups in the Fintech world are developing: smart banking apps that alert you to the movements you will have through the month, purchases you shouldn't make, and payments due in the next few days, for example. This could be implemented in DeepOnion, since we have our own wallet and also the Android app wallet (coming soon to iOS). It could help DeepOnion users with their daily movements. Nowadays it may sound a bit useless because there are few stores that accept Onions, but as that number increases, this implementation could become more and more useful. The only problem I see here is the development itself, since it is complicated and would need a proper system for an anonymous, private service. The revolution will be technological or it won't be; let's gather all the weapons!
Deep Intelligence
50
deep-intelligence-11eaa5352302
2018-04-14
2018-04-14 05:36:31
https://medium.com/s/story/deep-intelligence-11eaa5352302
false
455
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Pepe Sospechas
null
fa18730ca636
pepesospechas9
2
3
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-20
2018-06-20 15:14:47
2018-06-20
2018-06-20 15:16:01
1
false
en
2018-06-20
2018-06-20 15:16:01
2
11eb29f974c2
1.162264
0
0
0
Nearly three-quarters of life science professionals — 72 percent — agree the life science sector is falling behind other industries when it…
5
72% of life science pros say industry lags in AI development, survey finds Nearly three-quarters of life science professionals (72 percent) agree the life science sector is falling behind other industries when it comes to the development of artificial intelligence solutions, according to a survey by The Pistoia Alliance. For the survey, the research and development nonprofit asked 229 pharmaceutical and life science leaders about their attitudes toward AI. Here are three survey insights respondents shared: 1. The majority of respondents (69 percent) reported their organization uses AI, whether in the form of machine learning, deep learning or chatbots. Nineteen percent of respondents said they plan to use AI within the next year, while 12 percent said they have no plans to use AI. 2. Of those already using AI within their organizations, 21 percent indicated the AI projects were not yet providing meaningful outcomes. Another 21 percent said they "didn't know" whether these projects were delivering meaningful outcomes. 3. When asked which sectors they planned to collaborate with on AI development in the next 18 months, most respondents indicated either a technology or data provider (40 percent), followed by stakeholders in healthcare (22 percent) or academia (15 percent). "This survey shows interest in AI remains strong, but there is still a challenge with moving past the hype to a reality where AI is delivering insights with the power to truly augment researchers' work," Steve Arlington, PhD, president of The Pistoia Alliance, said in a June 7 statement. Source: beckershospitalreview.com
72% of life science pros say industry lags in AI development, survey finds
0
72-of-life-science-pros-say-industry-lags-in-ai-development-survey-finds-11eb29f974c2
2018-06-20
2018-06-20 15:16:03
https://medium.com/s/story/72-of-life-science-pros-say-industry-lags-in-ai-development-survey-finds-11eb29f974c2
false
255
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Doc Coin
Blockchain protocol for telehealth
713a79d298cf
doc.coin.ico
50
46
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-17
2018-04-17 07:19:06
2018-04-18
2018-04-18 09:17:55
12
false
th
2018-08-02
2018-08-02 16:52:52
54
11eb3dded0cc
6.021698
15
1
0
ʕっ•ᴥ•ʔっฉบับแปลไทย 10 นาทีจบ ตามมาโล้ด
5
Ten Machine Learning Algorithms You Should Know to Become a Data Scientist ʕっ•ᴥ•ʔっ A Thai-translated edition, ten minutes to read, follow along! Original version: You can read the original article from Towards Data Science and ParallelDots. This blog has permission to translate it into Thai. src: https://towardsdatascience.com/ten-machine-learning-algorithms-you-should-know-to-become-a-data-scientist-8dc93d8ca52e Every person has their own habits and personality, and people who do Machine Learning (ML) are no different. Some say, "I'm an expert in algorithm X, and X can be trained on any kind of data." Others say, "You have to pick the tool that fits the person and the job." Although their views differ, they share the same strategy: "Jack of all trades. Master of one." That is, have one area you know deeply and truly, while still being familiar with the other fields of ML. As a Data Scientist in training, we cannot deny that we need to know the basic ML algorithms to handle the problems we may face in the future, and this blog covers those algorithms plus resources for anyone interested. 1. Principal Component Analysis (PCA)/SVD Let's just call it PCA. It is an unsupervised method that helps us understand the properties of a dataset. For example, it analyses the covariance matrix of the data points so we can tell which dimensions or data points matter more, e.g. a dimension may have high variance but lower covariance than the others. Another way to think about it: finding the top PCs of a matrix is the same as finding the eigenvectors with the largest eigenvalues. SVD is another way to obtain ordered components, without computing the covariance matrix first. src: https://towardsdatascience.com/ten-machine-learning-algorithms-you-should-know-to-become-a-data-scientist-8dc93d8ca52e For data points with many dimensions, this algorithm can be used to reduce dimensionality. Libraries: https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.svd.html http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html Introductory Tutorial: https://arxiv.org/pdf/1404.1100.pdf 2a. Least Squares and Polynomial Fitting Remember the code you wrote back at university to analyse numerical data? We looked for an equation whose plotted line passed through as many of the data points as possible, i.e. had the smallest error. We can use that same approach to fit curves in ML for small, low-dimensional data, using plain ordinary least squares (OLS) without any complicated optimisation techniques. Note: if you apply it to large, high-dimensional data it can overfit, so don't force it. src: https://towardsdatascience.com/ten-machine-learning-algorithms-you-should-know-to-become-a-data-scientist-8dc93d8ca52e As you can see, this algorithm suits simple curves or simple regression problems. Libraries: https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.lstsq.html https://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.polyfit.html Introductory Tutorial: https://lagunita.stanford.edu/c4x/HumanitiesScience/StatLearning/asset/linear_regression.pdf 
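To make the two ideas above concrete, here is a small illustrative sketch (synthetic data, not from the original post) using the libraries linked: scikit-learn's PCA for dimensionality reduction and numpy.polyfit for ordinary least-squares polynomial fitting.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# --- PCA: project 5-dimensional points onto their top 2 principal components ---
X = rng.normal(size=(200, 5))
X[:, 0] = 3 * X[:, 1] + rng.normal(scale=0.1, size=200)  # make one dimension strongly correlated
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)
print("explained variance ratio:", pca.explained_variance_ratio_)

# --- Ordinary least squares / polynomial fitting with numpy.polyfit ---
x = np.linspace(0, 1, 50)
y = 2.0 * x**2 - x + rng.normal(scale=0.05, size=50)  # noisy quadratic
coeffs = np.polyfit(x, y, deg=2)  # fit a degree-2 polynomial by least squares
print("fitted coefficients:", coeffs)
```

The explained-variance ratio is the usual sanity check before throwing dimensions away: it tells you how much of the data's spread the top components keep.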
2b. Constrained Linear Regression When the data contains outliers (values that sit far away from the rest), strange records, or noise, plain least squares may no longer be the answer and can skew the model. We therefore add constraints to reduce the variance of the line we fit. The way to do this is to also consider the weights the model places on the data: the model can carry an L1 norm (LASSO), an L2 norm (Ridge regression), or both (elastic regression), and is then optimized against the mean squared loss. src: https://towardsdatascience.com/ten-machine-learning-algorithms-you-should-know-to-become-a-data-scientist-8dc93d8ca52e We use this algorithm to fit constrained regression lines, avoiding overfitting and masking noisy dimensions out of the model. Libraries: http://scikit-learn.org/stable/modules/linear_model.html Introductory Tutorial(s): https://www.youtube.com/watch?v=5asL5Eq2x0A https://www.youtube.com/watch?v=jbwSCwoT51M

3. K means Clustering This one is many people's favorite algorithm! Given a set of data (as vectors), we can cluster the points using the distances between them. The algorithm takes the number of clusters as input, i.e., how many groups we want to split the data into. It works iteratively: it keeps looping to find each cluster's center; when a center moves, that cluster's members change too, and the loop continues until the centers stop changing. src: https://towardsdatascience.com/ten-machine-learning-algorithms-you-should-know-to-become-a-data-scientist-8dc93d8ca52e Just as the name says, we use this algorithm to split data into K groups (clusters). Library: http://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html Introductory Tutorial(s): https://www.youtube.com/watch?v=hDmNF9JG3lo https://www.datascience.com/blog/k-means-clustering

4. Logistic Regression Simply put, it is Linear Regression that is not linear (a straight line): instead of a straight-line equation, Logistic Regression usually applies a sigmoid or tanh function. Values passed through the sigmoid, for instance, come out between 0 and 1, which also constrains the model's output. Roughly like this. src: https://towardsdatascience.com/ten-machine-learning-algorithms-you-should-know-to-become-a-data-scientist-8dc93d8ca52e Note for beginners: we use Logistic Regression for classification, and you can think of it as a Neural Network with a single layer. The model can be optimized with methods such as Gradient Descent or L-BFGS; this setup is often called a Maximum Entropy Classifier. Library: http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html Introductory Tutorial(s): https://www.youtube.com/watch?v=-la3q9d7AKQ
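As a quick illustration only (again, not from the original article; the toy data comes from scikit-learn's own generators), the sketch below runs Ridge/Lasso regularized regression, k-means clustering, and logistic regression side by side.

```python
# A minimal sketch (not from the original article): regularized regression,
# k-means clustering, and logistic regression with scikit-learn on toy data.
import numpy as np
from sklearn.linear_model import Ridge, Lasso, LogisticRegression
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs, make_classification

rng = np.random.default_rng(0)

# --- Constrained (regularized) linear regression: L2 (Ridge) and L1 (Lasso) ---
X = rng.normal(size=(200, 10))
w_true = np.array([5.0, -3.0] + [0.0] * 8)        # only 2 informative dimensions
y = X @ w_true + rng.normal(scale=0.5, size=200)
print(Ridge(alpha=1.0).fit(X, y).coef_)           # shrinks all weights a little
print(Lasso(alpha=0.1).fit(X, y).coef_)           # drives noisy weights toward 0

# --- K-means: split points into K clusters ---
X_blobs, _ = make_blobs(n_samples=300, centers=3, random_state=0)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_blobs)
print(km.cluster_centers_)

# --- Logistic regression: a linear model squashed through a sigmoid ---
X_clf, y_clf = make_classification(n_samples=300, n_features=5, random_state=0)
clf = LogisticRegression().fit(X_clf, y_clf)
print(clf.predict_proba(X_clf[:3]))               # probabilities between 0 and 1
```

Raising Lasso's alpha zeroes out the noisy dimensions more aggressively, which is the "masking" effect described above.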
5. SVM (Support Vector Machines) SVM is a linear model just like Linear/Logistic Regression; the difference is the margin-based loss function it uses (the term "support vector", by the way, comes from one beautiful piece of mathematics). We can use optimization methods such as L-BFGS or SGD to optimize the loss function. src: https://towardsdatascience.com/ten-machine-learning-algorithms-you-should-know-to-become-a-data-scientist-8dc93d8ca52e We can also choose the kernel used to train on the data; the default is an RBF kernel, but feel free to swap in another kernel that works better. Another strength is that SVMs can learn a classifier for just one class, and they can be trained as either classifiers or regressors. Library: http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html Introductory Tutorial(s): https://www.youtube.com/watch?v=eHsErlPJWUU Note: SGD-based training of both Logistic Regression and SVMs is found in SKLearn's http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDClassifier.html , which I often use as it lets me check both LR and SVM with a common interface. You can also train it on >RAM sized datasets using mini batches.

6. Feedforward Neural Networks These are essentially Logistic Regression classifiers stacked in multiple layers, with a non-linearity between layers (sigmoid, tanh, relu + softmax, or even selu). They go by the lovely name of multi-layered perceptrons. Multi-Layered perceptron src: https://towardsdatascience.com/ten-machine-learning-algorithms-you-should-know-to-become-a-data-scientist-8dc93d8ca52e FFNN as an autoencoder, src: https://towardsdatascience.com/ten-machine-learning-algorithms-you-should-know-to-become-a-data-scientist-8dc93d8ca52e We can use FFNNs for classification, or as autoencoders for unsupervised feature learning. Libraries: http://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html#sklearn.neural_network.MLPClassifier http://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPRegressor.html https://github.com/keras-team/keras/blob/master/examples/reuters_mlp_relu_vs_selu.py Introductory Tutorial(s): http://www.deeplearningbook.org/contents/mlp.html http://www.deeplearningbook.org/contents/autoencoders.html http://www.deeplearningbook.org/contents/representation.html

7. Convolutional Neural Networks (Convnets) Nearly all state-of-the-art vision work today uses CNNs: image classification, object detection, and even segmentation of images. Convnets were invented by Yann LeCun back in the late 80s and early 90s, and they are built as a hierarchy of layers (a hierarchical feature extractor). src: https://towardsdatascience.com/ten-machine-learning-algorithms-you-should-know-to-become-a-data-scientist-8dc93d8ca52e We use convnets for image or text classification (graphs work too), object detection, and image segmentation. Libraries: https://developer.nvidia.com/digits https://github.com/kuangliu/torchcv https://github.com/chainer/chainercv https://keras.io/applications/ Introductory Tutorial(s): http://cs231n.github.io/ https://adeshpande3.github.io/A-Beginner%27s-Guide-To-Understanding-Convolutional-Neural-Networks/
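Here is a small, optional sketch (not from the original article) of an SVM and a compact multi-layer perceptron on scikit-learn's bundled digits dataset; the hyperparameters are ordinary defaults, nothing tuned.

```python
# A minimal sketch (not from the original article): an SVM classifier and a small
# multi-layer perceptron on scikit-learn's built-in digits dataset.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# SVM with the default RBF kernel (swap kernel="linear", "poly", ... to experiment)
svm = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
print("SVM accuracy:", svm.score(X_test, y_test))

# Feedforward neural network: two hidden layers with ReLU non-linearities
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), activation="relu",
                    max_iter=500, random_state=0).fit(X_train, y_train)
print("MLP accuracy:", mlp.score(X_test, y_test))
```

For convnets you would switch to one of the deep learning frameworks linked above, since scikit-learn has no convolutional layers.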
8. Recurrent Neural Networks (RNNs): An RNN models a sequence by recursively applying the same aggregator to the state at each time t (each sequence has inputs arriving at 0..t..T, plus a hidden state that is the output of the previous step, t-1). Hardly anyone uses a pure RNN on its own any more; instead it is used roughly like this: src: https://towardsdatascience.com/ten-machine-learning-algorithms-you-should-know-to-become-a-data-scientist-8dc93d8ca52e where f is a densely connected unit plus a nonlinearity (usually f is an LSTM or a GRU), i.e., the RNN uses an LSTM unit in place of the plain dense layer of a pure RNN. src: https://towardsdatascience.com/ten-machine-learning-algorithms-you-should-know-to-become-a-data-scientist-8dc93d8ca52e Use RNNs for anything sequential, especially text classification, machine translation, and language modelling. Library: https://github.com/tensorflow/models (Many cool NLP research papers from Google are here) https://github.com/wabyking/TextClassificationBenchmark http://opennmt.net/ Introductory Tutorial(s): http://cs224d.stanford.edu/ http://www.wildml.com/category/neural-networks/recurrent-neural-networks/ http://colah.github.io/posts/2015-08-Understanding-LSTMs/

9. Conditional Random Fields (CRFs) You could say CRFs are the most frequently used model in the family of probabilistic graphical models (PGMs). We use them for sequence data just like RNNs, or even together with RNNs. Before Neural Machine Translation systems arrived, CRFs were the state of the art. They shine on sequence tagging tasks whose datasets are not very large, which is exactly where they out-learn RNNs, since RNNs need large datasets to train. CRFs can also be used for other structured prediction tasks such as image segmentation. A CRF models each element of a sequence (say, a sentence) so that a neighbor affects the label of a component in the sequence, rather than all labels being independent of one another. Use CRFs to tag sequences of text, images, time series, DNA, and so on. Library: https://sklearn-crfsuite.readthedocs.io/en/latest/ Introductory Tutorial(s): http://blog.echen.me/2012/01/03/introduction-to-conditional-random-fields/ 7 part lecture series by Hugo Larochelle on Youtube: https://www.youtube.com/watch?v=GF3iSJkgPbA
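For a taste of CRF-style sequence tagging, here is a tiny sketch using the sklearn-crfsuite library linked above; the two example sentences, the features, and the labels are invented purely for illustration.

```python
# A minimal sketch (not from the original article) of sequence tagging with
# sklearn-crfsuite; the toy sentences and labels below are made up for illustration.
import sklearn_crfsuite

def token_features(word):
    """Hand-crafted per-token features, expressed as a plain dict."""
    return {
        "word.lower()": word.lower(),
        "word.istitle()": word.istitle(),
        "word.isdigit()": word.isdigit(),
    }

sentences = [["London", "is", "rainy"], ["Alice", "visits", "Paris"]]
labels    = [["LOC", "O", "O"],        ["PER", "O", "LOC"]]

X_train = [[token_features(w) for w in sent] for sent in sentences]
y_train = labels

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X_train, y_train)
print(crf.predict(X_train))   # one label sequence per sentence
```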
10. Decision Trees Suppose we are given data about many kinds of fruit and have to say which ones are apples. We might ask, "Which fruits are round and red?", and split the fruit into a group that answers yes and a group that answers no. Now, the yes group may not all be apples, and apples will not always be round and red, so we ask the round-and-red group, "Which fruits have red-and-yellow mottling on the skin?", and ask the not-round-and-red group, "Are they round and green?" These questions eventually let us decide which ones are apples, and this cascade of questions is what we call a Decision Tree. The tree in this example came from our own intuition, which will not work for real data with many dimensions or any real complexity! That is where our hero steps in and finds the cascade of questions for us 😎. There used to be CART trees that worked roughly this way, but they only handled simple data; on bigger datasets the bias-variance tradeoff bites, so better algorithms are needed. The decision trees you see used most often today are Random Forests, which sample random subsets of attributes to build many different trees in parallel (all at once) and then combine their outputs, and Boosting Trees, which also build many trees but not in parallel: trees are built one at a time, each one influencing the next by looking at the residuals of the trees before it, so the prediction error keeps shrinking. Decision Trees can classify data points and can also do regression (a quick scikit-learn sketch of both ensembles appears at the end of this post). Libraries http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.GradientBoostingClassifier.html http://xgboost.readthedocs.io/en/latest/ https://catboost.yandex/ Introductory Tutorial: http://xgboost.readthedocs.io/en/latest/model.html https://arxiv.org/abs/1511.05741 https://arxiv.org/abs/1407.7502 http://education.parrotprediction.teachable.com/p/practical-xgboost-in-python

TD Algorithms (Good To Have) One more bonus: has anyone wondered how the algorithms we have read about at length here let DeepMind beat a world champion at Go? They do not plan Go strategy; what they do is pattern recognition. To win board games, or console games like Atari (so-called multi-step problems), you have to go in and play, and experience the wins and losses yourself (rewards/penalties). ML that learns this way is called Reinforcement Learning. Most of what we see today combines several kinds of learning, for example a convnet and an LSTM, into one set of algorithms; we call this approach Temporal Difference Learning, and Q-Learning and SARSA belong to the same family. Algorithms like these are mostly used for playing games 😆 but they also work for other things such as language generation or object detection. Libraries: https://github.com/keras-rl/keras-rl https://github.com/tensorflow/minigo Introductory Tutorial(s): Grab the free Sutton and Barto book: https://web2.qatar.cmu.edu/~gdicaro/15381/additional/SuttonBarto-RL-5Nov17.pdf Watch David Silver course: https://www.youtube.com/watch?v=2pWv7GOvuf0

🚩 That's a wrap! Quite a long read for the 10 machine learning algorithms you should know before becoming a data scientist, and many of them are fascinating; if you want to dig deeper into any of them, go right ahead. As the translator, there are several algorithms here whose theory I have studied, some I have played with, and some I have only heard of by name, especially Neural Networks, which I think still have a long way to go and plenty left to explore. Beyond this blog there are of course many more algorithms waiting to be discovered, so we must never stop learning 🏃‍🏃‍..~ Thanks to the author of the original blog for sharing this knowledge with everyone. If you would rather read it in English, follow the links below 👇 Ten Machine Learning Algorithms You Should Know to Become a Data Scientist Machine Learning Practitioners have different personalities. While some of them are “I am an expert in X and X can…towardsdatascience.com Ten Machine Learning Algorithms You Should Know to Become a Data Scientist Machine Learning Practitioners have different personalities. While some of them are "I am an expert in X and X can…blog.paralleldots.com One more thing! There is now an API for sentiment analysis, emotion analysis, and keyword generation that supports many languages, including Thai; check out the demo. That's it for today; see you in the next blog 😎
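And the bonus sketch promised above (not from the original article): a quick comparison of a Random Forest and gradient-boosted trees on scikit-learn's iris dataset, with arbitrary hyperparameters.

```python
# A minimal sketch (not from the original article): a Random Forest and a
# gradient-boosted tree ensemble on scikit-learn's iris dataset.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# Many trees grown independently ("in parallel") on random subsets of rows/features
rf = RandomForestClassifier(n_estimators=200, random_state=0)
print("Random Forest CV accuracy:", cross_val_score(rf, X, y, cv=5).mean())

# Trees grown one after another, each fitting the residual errors of the previous ones
gb = GradientBoostingClassifier(n_estimators=200, learning_rate=0.1, random_state=0)
print("Gradient Boosting CV accuracy:", cross_val_score(gb, X, y, cv=5).mean())
```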
Ten Machine Learning Algorithms You Should Know to Become a Data Scientist
166
ten-machine-learning-algorithms-you-should-know-to-become-a-data-scientist-11eb3dded0cc
2018-08-02
2018-08-02 16:52:52
https://medium.com/s/story/ten-machine-learning-algorithms-you-should-know-to-become-a-data-scientist-11eb3dded0cc
false
1,238
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Manusaporn Treerungroj
null
f88adf04b108
m.treerungroj
185
87
20,181,104
null
null
null
null
null
null
0
null
0
9a6e4e26a67c
2018-04-02
2018-04-02 12:43:59
2018-04-02
2018-04-02 12:56:42
7
false
ko
2018-05-18
2018-05-18 09:16:55
2
11ebc06cbd90
6.676
3
0
0
Hello, this is Elice. The second developer we met is Chaewon Yoon, a software engineer on the AI/NLU (Natural Language Understanding) team at Samsung Electronics.
5
Student Interview 2: AI/NLU Software Engineer Chaewon Yoon. Hello, this is Elice. The second developer we met is Chaewon Yoon, a software engineer on the AI/NLU (Natural Language Understanding) team at Samsung Electronics. Chaewon is a social science graduate who majored in political science and international relations, and remarkably, when she took the Elice machine learning camp she completed the course knowing little more than basic Python syntax. What is the story behind going from a non-CS major to a programmer?

A non-CS major, a software engineer. Elice: Hello! Please introduce yourself. Chaewon: Hello, I'm Chaewon Yoon, and I currently work as a software engineer on the AI/NLU team at Samsung Electronics. I graduated from Yonsei University with a degree in political science and international relations, and joined as a developer after completing the company's SCSA convergence-talent training program.

Elice: You majored in political science and international relations, so what got you started studying coding? Chaewon: I first encountered development while on exchange in Zurich. At the time Korea didn't have today's culture of studying coding, but the friends I met in Zurich all did at least a little development, such as C, even if they weren't computer science students. That left a deep impression on me, so when I returned to Korea I took a Python course, plus a C class as a general elective at school. I enjoyed it, so I figured development and I were a good match. (laughs)

Elice: Korea still doesn't seem to give humanities students much room to code actively, and even those who study rarely go on to be employed as developers. I'm curious how you became one! Chaewon: My one firm criterion was that I wanted to work in IT; I have always enjoyed trying out new services. I wanted to work as a developer, but I knew almost nothing and wasn't confident, so within IT I also applied for non-development roles such as digital marketing and technology planning. In the end I was accepted by a company I had applied to as a developer, and that is what I do now.

The Elice machine learning camp. Elice: What motivated you to take the Elice machine learning camp? Chaewon: Mostly the idea of trying everything I could that was related to development. I work in AI now, but at the time I didn't even know what machine learning was; I only knew a bit of basic Python syntax. A friend from my development club recommended the camp, so I enrolled.

Elice: Then the material must have been quite hard. Did the Elice teaching assistants help with your learning? Chaewon: Because I went in knowing nothing, I got an enormous amount of help from my TA, Su-in, who kept helping me with my coding in real time during every practice session. The exercises were difficult, but learning the theory then made concepts like supervised and unsupervised learning familiar to me.

Elice: Did the machine learning camp help you get a job? Chaewon: Coming from a different major, I didn't have much development experience to talk about, but I could build a story out of what I had done in the machine learning camp, and the certificate from the camp helped me demonstrate that achievement. When I applied, the AI voice-assistant market around Siri, NUGU, and Bixby was not yet very active, but the company was interested in it, and because I had studied machine learning at Elice I could express my interest in the field.

Elice: Does what you learned then help in your current work? Chaewon: I use all of that knowledge on the job now: K-means, dimensionality reduction, supervised learning, unsupervised learning, clustering, all of it, since it is an AI-related department after all. Though I don't remember everything I learned at Elice. (laughs)

Preparing for a developer job. Elice: What was the hardest part of preparing for employment? Chaewon: That most applications end in rejection. It makes you feel useless. In truth, though, you don't need to swing between joy and despair: the number of openings is far smaller than the number of applicants, so the odds of being rejected are simply high. Still, the hardest part is that repeated rejection wears down your self-esteem.

Elice: Did you have your own way of getting past it? Does getting an offer fix it? (laughs) Chaewon: Yes, an offer does fix it (laughs), but I also tried to keep myself busy. I joined the Elice machine learning camp and took MOOCs (Massive Open Online Courses), always trying something new. Even while the companies I applied to kept turning me down, I studied what I wanted, and learning new things like web development kept me energized. In the end it also helped me build up my programming skills.

Elice: Any job-hunting know-how you would pass on to job seekers? Chaewon: Everyone studies algorithms and data structures a lot, and you do need to be good at them. Deep learning is now applied in so many fields that anyone who wants a developer job should probably understand the concepts to some degree; the demand is too large and the trend too strong to ignore. The other thing is building your own strengths: rather than doing what everyone else does, build skills that match your aptitude and the future you see. If you prepare the way the majority does, you will inevitably be crowded out by similar candidates. When I was applying, development was a wasteland for humanities students, and everyone told me I was wasting my time. But I believed coding was becoming something everyone does; even my political science classmates were programming in R to process data for their theses. I kept studying coding because I was convinced it would help my career, and I think acting on that conviction is what led to a job. Prepare hard, but do it with your own judgment, and do it smartly.

Elice: What did the Samsung Electronics hiring process involve? Chaewon: A portfolio presentation and an algorithm test. For the portfolio presentation, I argued that an enterprise-specific AI speaker for use inside companies, along the lines of a personal AI speaker, would be highly useful, and I presented on that topic. I didn't know which department I would join at the time, but after being accepted I ended up doing work directly related to the topic I had presented, which felt uncanny.

Elice: Finally, what is Elice to you? Chaewon: A chance to encounter development outside my major. I took a lot of MOOCs, but what I really liked about Elice was having a TA look over my work in real time. It is an education platform where you can study interactively, which I think is what most sets it apart from other online learning. And for me it is a memory: when I think of Elice, I think of my student days and my job-hunting days, coding on a lousy laptop in my dorm room. (laughs) Seeing that I now do related work, it feels all the more meaningful. We were deeply impressed by how she pursued the work she wanted, with clear self-conviction, despite not being a CS major and despite the discouragement around her!
It also makes us excited to imagine how many different people are dreaming of becoming programmers and writing their own stories. Trying out a variety of studies to discover which new fields, languages, and aptitudes interest you sounds like fun, too. Study programming to your heart's content :) !
Student Interview 2: AI/NLU Software Engineer Chaewon Yoon.
16
수강생-인터뷰-2-삼성전자-윤채원님-11ebc06cbd90
2018-05-18
2018-05-18 09:16:57
https://medium.com/s/story/수강생-인터뷰-2-삼성전자-윤채원님-11ebc06cbd90
false
862
Interactive SW classrooms for everyone.
null
elice.io
null
Elice - https://elice.io
elice
EDUCATION,PROGRAMMING,MACHINE LEARNING,ONLINE EDUCATION,COMPUTER SCIENCE
null
인공지능
인공지능
인공지능
46
Jeong Woo Lee
Content manager @ elice
2fd9814de1d8
jw_92325
20
20
20,181,104
null
null
null
null
null
null
0
null
0
32881626c9c9
2018-09-07
2018-09-07 18:48:24
2018-09-26
2018-09-26 21:04:18
1
true
en
2018-09-30
2018-09-30 17:25:50
5
11ec1a7c6a09
3.483019
2
0
0
The role of the CIO is a paradox. CIOs are entrusted with managing technology but in many cases struggle to get a seat at the CEO’s table…
4
From Facebook CIO to CEO “man holding black smartphone while sitting” by rawpixel on Unsplash The role of the CIO is a paradox. CIOs are entrusted with managing technology but in many cases struggle to get a seat at the CEO’s table. In a world where technology and change are accelerating, it’s more critical than ever for CIOs to make a significant impact on business, products, and operations. Yet despite emerging opportunities, many CIOs feel powerless to drive meaningful digital transformation programs. This interview with Tim Campos, CIO of Facebook during the company’s growth from $1 billion to $30 billion in revenue, was conducted and condensed by Jedidiah Yueh, bestselling author of Disrupt or Die and Executive Chairman and Founder of Delphix. Q. You created a product group inside of IT as CIO at Facebook. What impact did your products have on the business? A. We built solutions like Facebook CRM, which contributed to our 15% annual productivity growth in sales. We also built technologies which served as prototypes and in some cases the actual implementations of products that were used externally by Facebook’s customers (e.g. Audience Insights). Our solutions had a transformative impact on Facebook both in terms of results and the expectation the company had of IT. We engineered systems from the ground up and built them in the same manner as Facebook builds all of its products. This helped solidify IT, which we called Enterprise Engineering, as both the champion of employee productivity and an innovative development center. Q. It’s hard to be transformational in IT, especially in a company that has been as transformational as Facebook has been for the world at large. Was it hard to get a product group approved? A. There was little support when we got started. I had to prove it would work. First, we had to recruit engineers to work on problems that were internally focused. Everyone thought Facebook engineers wouldn’t care about internal solutions. We proved that wasn’t true. Engineers care a lot about productivity, and they enjoy seeing the direct impact they have on their customers. Second, I had to prove that we could manage these teams exactly as they are managed within other product functions. That wasn’t hard, either. Finally, we had to prove it with results. This took a surprising amount of convincing, even when we had real numbers. In the end, however, one of our cultural truths proved out, because “data wins arguments” at Facebook. Q. How else did Facebook’s culture shape your ability to drive transformation? A. Facebook is an environment where you define your job. You are ultimately measured on impact on the company, not whether you fit within a predefined box. If the definition of your job is wrong or incomplete, you are expected to change it yourself. This cultural tenet is at the heart of what enabled me to drive transformation from inside IT at Facebook. Q. What were your biggest lessons learned related to transformation at Facebook? A. First, there is no substitute for quality people. We hired really strong engineers into IT. Not only did this help us build strong product groups, but it’s also simply good business. A strong engineer is worth more than 10 mediocre engineers in terms of productivity. And good people want to work with good people. This made it easier to hire strong people over time. It’s a lesson I’ve taken with me even in my new adventures. Second, perseverance. If you have a vision, stick with it.
Nothing worth doing is easy, and if you give up just because things are hard, you’ll never get anything (of worth) done. Third, make sure you have your champions. I wouldn’t have been able to drive change at Facebook without the support of our VP of Sales and our CFO. They believed in our vision and supported us when things got hard. Q. You mentioned new adventures. What transformation are you driving next? A. I’m excited by the opportunities created by today’s technology shifts, specifically in artificial intelligence. I see a big opportunity to re-think the enterprise landscape. Most enterprise software is based on CRUD-dy tools (create, read, update, delete). These systems are not only awkward to work with but also limited in their ability to capture and use information. This is a major source of lost productivity, particularly for knowledge workers. Meanwhile a lot of AI emphasis today is on anthropomorphic interfaces, bots and virtual assistants — AI trying to imitate people. These technologies demo well, but they are quite limited in what they can actually do on their own. Try asking Siri what the most important thing is for you to work on today. But AI doesn’t need to be like people. There’s a major opportunity to fuse AI and UI (user interface) to assist humans in their jobs. I think synergistic (versus imitative) uses of AI will have a much broader impact than attempts to fully take over jobs done by humans today. This is what we’re working on at Pulsra, my new startup, where I’m CEO. We’re injecting AI into a part of the business world where everyone is already involved, an awkward and frustrating place where AI can make a positive difference. That’s all I can say for now.
From Facebook CIO to CEO
15
from-facebook-cio-to-ceo-11ec1a7c6a09
2018-09-30
2018-09-30 17:25:50
https://medium.com/s/story/from-facebook-cio-to-ceo-11ec1a7c6a09
false
870
Data Driven Investor (DDI) brings you various news and op-ed pieces in the areas of technologies, finance, and society. We are dedicated to relentlessly covering tech topics, their anomalies and controversies, and reviewing all things fascinating and worth knowing.
null
datadriveninvestor
null
Data Driven Investor
datadriveninvestor
CRYPTOCURRENCY,ARTIFICIAL INTELLIGENCE,BLOCKCHAIN,FINANCE AND BANKING,TECHNOLOGY
dd_invest
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Jedidiah Yueh
Bestselling Author of Disrupt or Die, Delphix executive chairman and founder, Avamar founding CEO
40de6b1985b3
jedidiahyueh
1,284
186
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-16
2018-04-16 02:13:33
2018-04-16
2018-04-16 02:18:55
0
true
en
2018-04-16
2018-04-16 02:32:18
45
11ec6c4c12dd
18.037736
1
0
0
There has been discussion within the political arena as to whether certain campaign agendas fare better for African American candidates…
5
Data Collection, Classification, and Analysis: Deracialization and African American Mayors There has been discussion within the political arena as to whether certain campaign agendas fare better for African American candidates. Often times, African Americans have to think beyond normal strategies when running for office due to the fact many are in an era where they are part of second generation politicians within an institution not necessarily built for their success. A key strategy, which was suggested by Charles Hamilton, dealt with deracialization. Deracialization has been defined as “conducting a campaign in a stylistic fashion that defuses the polarizing effects of race by avoiding explicit reference to race-specific issues, while at the same time, emphasizing those issues that are perceived as racially transcendent, thus mobilizing a broad segment of the electorate for purposes of capturing or maintaining public office.” In this analysis stipulated by Dr. Huey Perry, an assessment has been made based upon twenty eight variables to determine the strength of the deracialization model for African American candidates. The study was conducted based on a random sample population of ten African American mayors. Within the ten selected mayors, consideration was broken down between five candidates who were successful at becoming elected and five who were not successful in their bid for office. The five successful African American candidates used as samples include Kevin Johnson of Sacramento, California, Cory Booker of Newark, New Jersey, John Daniels of New Haven, Connecticut, Maynard Jackson of Atlanta, Georgia and Bryon Brown of Buffalo, New York. The five candidates who were not successful at becoming elected included Carl Stokes of Cleveland, Ohio, Tom Bradley of Los Angeles, California, Kip Holden of Baton Rouge, Louisiana, Cory Booker of Newark, New Jersey and Larry Langford of Birmingham, Alabama. Five Successful Candidates The years for the five successfully elected candidates for this study ranged between 1973 to 2008. Maynard Jackson was elected in 1973, John Daniels in 1989, Bryon Brown in 2005, Cory Booker in 2006 and Kevin Johnson was elected in 2008. The winning percentage ranged from 57.40% to 72%. Maynard Jackson won with 59% of the total vote, John Daniels had 59%, Bryon Brown had 63.79%, Cory Booker had 72% and Kevin Johnson had 57.40%. The campaign issues used by these five candidates to win their elections to the office of mayor included education, crime, poverty, police brutality, economic development, balancing the city budget, human rights, jobs and hiring equity. Three top categories of campaign issues were used in this study. However, other campaign issues could have been added as part of all their campaign agendas. But, for the sake of this short analysis, the top three categories were researched and incorporated, if possible, into this study. The results showed how all five successful candidates campaigned against crime in their cities. Two candidates, Kevin Johnson and Cory Booker, campaigned for better education. And two others, Bryon Brown and John Daniels, campaigned against poverty levels within their cities. Although jobs and hiring practices could be categorized together, Kevin Johnson and Maynard Jackson successfully campaigned for those issues respectively. Police brutality, economic development, budget balancing, human rights and hiring equity were used by one candidate as a campaign issue. 
Of the five successfully elected African American candidates, Kevin Johnson, John Daniels and Bryon Brown showed campaigns which totally represented a deracialized model. 66% of Cory Booker’s campaign included deracialized categories. And, 33% deracialized issues were used by Maynard Jackson. This states how four of the five successfully elected candidates used an overall deracialized model during their election campaigns to office. Maynard Jackson was the only candidate to show a racialized model. Scholars such as Baodong Liu have stated how endorsement of newspapers is the greatest indicator of crossover votes to win elections. Three of the five successfully elected candidates, however, did not receive newspaper endorsements during their campaigns. They were Kevin Johnson, John Daniels and Maynard Jackson. On the other hand, Cory Booker and Bryn Brown did manage to receive endorsements from newspapers. As far as incumbency status, four of the five successfully elected candidates obtained success in this category. Kevin Johnson, Cory Booker, John Daniels and Bryon Brown all held incumbency status. Maynard Jackson was the only candidate who did not. Also, four of the five successfully elected candidates held previous office in government. These positions included City Councilmen, Alderman, State Representatives, Senators, City Representatives and Vice Mayor. Two of the five, John Daniels and Bryon Brown, held Senate seats prior to becoming elected mayor. Two of the five, Cory Booker and Bryon Brown, also held City Council positions before becoming elected as mayor. Kevin Johnson was the only successfully elected candidate to not hold any previous office in politics. Three of the five successfully elected candidates were elected in a white dominant context. These candidates were Kevin Johnson, John Daniels and Bryon Brown. Cory Booker and Maynard Jackson ran their campaigns in a black dominant context. Four of the five successfully elected candidates also ran against a white opponent. Kevin Johnson ran against Heather Fargo, John Daniels ran against John DeStefano, Maynard Jackson ran against Sam Massell and Bryon Brown ran his campaign against Kevin Helfer. Cory Booker was the only successfully elected candidate to run against another African American in Ronald Rice. Different variables pertaining to campaign issues of their opponents were harder to retrieve during this study. Of the five opponents who ran against successfully elected candidates, information was found with three. Information on Heather Fargo who ran against Kevin Johnson in the 2008 mayoral campaign in Sacramento, California showed how her issues included environmental concerns and gun control. Although 100% of her categories were deracialized, 66% was stipulated in the study to distinguish two of the three required categories. Both issues are considered deracialized strategies. A third category could not be found for the study. Ronald Rice, who ran against Cory Booker in the 2006 Newark, New Jersey mayoral campaign, showed an overall deracialized campaign strategy. Two of the three categories chosen for the study included crime and rehabilitation campaign issues. The third category was not clear, but information of rhetoric used showed there was some racialized context with campaign agenda. Specifying what the specific issue consisted of was not made clear, but was added to the study as a variable. Rice also showed an overall 66.66% deracialized campaign against Cory Booker. 
The last opponent for whom information could be gathered was Sam Massell, who ran against Maynard Jackson in the 1973 mayoral campaign in Atlanta, Georgia. The only campaign issue found dealt with enabling segregation methods while downplaying the role of other Civil Rights Leaders who contributed to Maynard Jackson’s efforts at the end of his campaign run. Massell was therefore categorized as running a 100% racialized campaign since no other categories could be found. There was no information found to categorize opponent agendas for John Daniels and Bryon Brown other than race. No categorical decision was made to incorporate variables for deracialized/racialized strategies with DeStefano or Helfer. Five Unsuccessfully Elected Candidates The years of the population sample for the five unsuccessfully elected African American candidates ranged between 1965 and 2002. Carl Stokes lost his bid for mayor in 1965, Tom Bradley in 1969, Larry Langford in 1979, Kip Holden in 1979 and Cory Booker lost in 2002. The losing percentage ranged from 33.9% to 47% of the total votes. It should also be noted that there was no data collected on the total votes during two campaigns, Carl Stokes in 1965 and Larry Langford in 1979. Kip Holden lost with 33.9% of the vote, Cory Booker lost with 46.74% of the vote and Tom Bradley lost with 47% of the vote. Kip Holden had run for mayor on other occasions, but the 1979 mayoral campaign was used for this study. Cory Booker, Tom Bradley, Larry Langford and Carl Stokes all ran for office once before eventually becoming successful as elected mayors of their cities. The issues used during their losing bids for mayor included education, public housing, city beautification, political corruption, better leadership, social programs, crime, traffic conditions, safety, economic development and jobs. Of the five mayors who lost, three categorical descriptions were repeated by other candidates who were not successful. Carl Stokes and Kip Holden both campaigned for education. Cory Booker and Larry Langford campaigned for jobs. And, Cory Booker along with Larry Langford campaigned for economic development of their cities. There was no other unification in the categories of campaign issues used between candidates. All five candidates who lost their bid for mayor had used deracialized models. At the same time, there was no presence of racialized categories to consider as variations amongst the five losing candidates. Three of the five losing candidates had newspaper endorsements. Carl Stokes, Tom Bradley and Cory Booker showed the support of newspapers. Larry Langford did not, and no information was found as to whether Kip Holden had a newspaper endorsement during his loss. Two candidates, Carl Stokes and Tom Bradley, had reached incumbency status prior to their campaigns for mayor. Kip Holden, Cory Booker and Larry Langford had not reached incumbency status during these elections. All five candidates previously held positions in government prior to running for mayor. Two of the candidates, Carl Stokes and Kip Holden, held seats as state representatives. Tom Bradley, Kip Holden, Cory Booker and Larry Langford held positions as city council members. The only one to not hold a position on city council boards was Carl Stokes. Three of the five candidates, including Carl Stokes, Tom Bradley and Larry Langford, campaigned in a white dominant context. Kip Holden and Cory Booker had campaigned in a black dominant context.
Three of the five candidates ran against white opponents. Carl Stokes ran against Ralph S. Locher, Tom Bradley ran against Sam Yorty and Kip Holden ran against Tom Ed McHugh. The other two candidates ran against black opponents. Cory Booker ran against Sharpe James and Larry Langford ran against Richard Arrington Jr. in Birmingham. Again, in accordance to this study, information on the issues opponents had used is limited to the amount of information found. Nevertheless, the opponents in campaigns against those African American candidates who lost their bids for mayor included population control, radical fear, crime, jobs and police brutality. Tom Ed McHugh, Sharpe James and Richard Arrington Jr. had all campaigned for crime against Kip Holden, Cory Booker and Larry Langford respectively. Two candidates, Sam Yorty and James Sharpe used the race card as tools of fear in their campaigns. Sam Yorty used radical fear against Tom Bradley to appeal to white voters. And, Sharpe James used a reverse attack to radical fear against a black candidate with Cory Booker in a black dominant context. Racial segregation was also identified as a campaign method used by Ralph S. Locher against Carl Stokes to win white votes. These inferences used by Sam Yorty, Sharpe James and Ralph Locher can be considered under the subject of ‘Racial Threat’ as a means to win votes during their campaigns. The campaign findings of Ralph Locher versus Carl Stokes only included a category inference to population control. Therefore, his campaign was considered 100% racialized because no categories of deracialization were found. Sam Yorty also showed one category of racialized campaigning in his strategy towards Tom Bradley was input as 100% racialized. Tom Ed McHugh was only found to campaign directly against crime. McHugh was then categorized as using a 100% deracialization model with no other deracialized categories to consider. Even though Sharpe James played the race card against Cory Booker, his overall campaign strategy has been categorized as 66.66% deracialized because the other two categories of campaign issues dealt with crime and jobs. Lastly, there were only two categories of deracialization to consider under Richard Arrington Jr.’s campaign against Larry Langford, crime and police brutality. Crime is considered a deracialized method while police brutality is usually associated with race and was therefore categorized as racialized. Arrington’s campaign agenda showed a 50/50 split on issues of deracialization. Findings and Conclusion for Deracialization For this study of African American candidates during the successful or unsuccessful campaigns for mayors of their respective cities, categories which showed up more than once by all ten sample population of mayors were used to establish what is important to their campaign agendas. Also, a one way analysis was done to include all the variables from issues of their campaigns to the classification of racial content by their opponents. The data was then cross-tabulated with those African American mayors who specifically ran deracialized campaigns against all the variables ranging from categories which showed up more than once, to all other variables up to the opponents’ campaign strategy. Agenda Comparison The four categories which showed up more than once included crime, education, poverty and economic development. Of the ten mayors chosen for this study, 70% of them considered crime as the most important issue to use for campaigns. 
Education was the next category to appear most with 40% of the sample population. Lastly, poverty and economic development showed up between 20% of the candidates. The data also showed 9 of the 10 sample population to have selected deracialized models as their overall campaign agenda. For the sake of this study on deracialization, these 9 candidates will be used as independent variables during cross-tabulation a little further down. Maynard Jackson will be excluded because he was the only candidate to run a racialized campaign within this sample population. 80% of the successfully elected mayors used a deracialized campaign while 100% of the unsuccessful candidates showed deracialized models. Newspaper Endorsement 50%, or 5 of the 10 candidates, had newspaper endorsements. 40% did not have newspaper endorsements and 10% were not applicable for this study. 40% of successfully elected candidates showed endorsements while 60% of unsuccessfully elected candidates were endorsed. This tells us there is a negative correlation between newspaper endorsements and the success of candidates being elected to the office of mayor. There is also a .20 statistical difference between successfully elected officials and unsuccessful candidates which tells us newspaper endorsements should not be used for this study and the null hypothesis (No) should be accepted as false. Also, due to the fact only 50% of the candidates had newspaper endorsements makes this variable irrelevant. Incumbency Status 60% of the candidates had incumbency status prior to campaigning for mayor. 80% of successfully elected candidates had incumbency status while 40% of unsuccessful candidates did not. There is a positive correlation between incumbency status and successfully elected candidates. And, there is a .40 statistical difference between successfully elected candidates and unsuccessful candidates which states we should reject the true No. And, 60% incumbency status is more acceptable as relevant data. Incumbency status should not be published because the significant variance is too great but should be considered for further study because of the large positive correlation between successful and unsuccessful candidates. Previous Office Holding 90% of the candidates held previous offices in government. 80% of successfully elected candidates held previous government positions while 100% of unsuccessful candidates held previous government office positions. There is a negative relation between previous office held and successfully elected candidates. There is also a .20 statistical difference between successful and unsuccessful candidates. We should therefore accept the No as false and the previous office holding should not be used for the study of deracialization. Previous office holdings, however, is relevant to the research of deracialization. Racial Dominant Context 60% of the candidates ran their campaigns in a white dominant context. 60% of successfully elected candidates ran campaigns in a white dominant context. The data also showed how 60% of unsuccessful candidates ran campaigns in a white dominant context. There is a .000 statistical difference between candidates who ran campaigns in a white dominant context which means racial dominant context is a significant variable. And, 60% of the candidates running campaigns in white dominant context makes racial context relevant to the study of deracialization. In this case you can also reject a true No and publish the data. Race of Opponent 70% of the opponents were white. 
80% of successfully elected candidates ran campaigns against white opponents. 60% of unsuccessful candidates also ran campaigns against white opponents. There is a positive correlation between the race of the opponent and the success of elected mayors. In other words, the data shows there has been success of black candidates who ran campaigns against white opponents. While the statistical difference between successful and unsuccessful candidates is .10, the fact 70% of the opponents being white makes this variable relevant. We can also reject a true No in this case. The race of the opponent should be published in this case. Categorical Similarities with Opponents on Deracialization 40% of the opponents in this study showed deracialized campaigns. 40% of opponents ran deracialized campaigns against successfully elected candidates. 40% of opponents also ran deracialized campaigns against unsuccessful candidates. At the same time, 30% of the opponents ran racialized campaigns. Of the opponents who ran racialized campaigns, 20% ran racialized campaigns against those in the category of successfully elected candidates while 40% of unsuccessful candidates faced opponents with racialized agendas. The categories also showed how 20% of the opponent campaigns were stipulated as neither deracialized nor racialized and was deemed not applicable (n/a). 40% of successful candidates were deemed n/a while 0% of unsuccessful candidates were not affected. Lastly, 10% of opponent campaigns were categorized as split (50/50 racialized/deracialized). This means 0% of successfully elected officials face opponents with split agendas and 20% of unsuccessful candidates faced opponents with split agendas. Cross-Tabulation First, candidates who ran deracialized models were compared with those categories which showed more than one candidate had selected them as part of their campaign issues. Second, the nine candidates who did use deracialized campaigns were cross tabulated against the rest of the variables on the grid. Deracialization versus Crime Of the 70% of candidates who chose crime as part of the campaign agenda, 6 out of 7 or 86% of those were deracialized campaigns. 80% of successfully elected candidates used crime as part of their deracialized campaigns. And, the two unsuccessful candidates who used crime as an issue also ran overall deracialized campaigns 100% of the time. There is a negative correlation between successfully elected candidates and crime. There is also a .20 statistical difference between successfully elected candidates and unsuccessful candidates which states the data is not significant. The comparison between crime and deracialization in this case should not be published although crime is considered relevant. Deracialization versus Education 40% or 4 of the 10 candidates who used education as campaign issues also ran overall deracialized campaigns 100% of the time. Both successful and unsuccessful candidates each had two candidates who used education as a campaign issue. There’s a positive correlation between the deracialization and education in campaign issues. Also, there is a .000 statistical difference between successful and unsuccessful candidates. While we can reject a true No and should publish the relation between candidate success and deracialization, the data may not be valid. More test need to be run between the two variables because only 40% of candidates used education as a campaign issue causing the data to be irrelevant. 
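To illustrate what such a cross-tabulation looks like mechanically, here is a minimal pandas sketch; the ten rows of candidate attributes below are hypothetical placeholders and do not reproduce the study's actual counts.

```python
# A minimal pandas sketch of a cross-tabulation like the one described above.
# The values below are hypothetical placeholders, NOT the study's actual data.
import pandas as pd

candidates = pd.DataFrame({
    "deracialized": [True, True, True, False, True, True, True, True, True, True],
    "elected":      [True, True, True, True,  True, False, False, False, False, False],
    "incumbent":    [True, True, True, False, True, True,  True,  False, False, False],
})

# Rows: campaign style; columns: election outcome; cells: candidate counts
print(pd.crosstab(candidates["deracialized"], candidates["elected"]))

# The same idea restricted to deracialized campaigns, tabulated against incumbency
derac = candidates[candidates["deracialized"]]
print(pd.crosstab(derac["incumbent"], derac["elected"]))
```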
Deracialization versus Poverty 20% or 2 of the 10 candidates used poverty as campaign issues. Both candidates were successfully elected to office. Therefore, 40% of successful candidates used poverty as a campaign issue. There is a positive correlation between poverty and deracialized campaigns. And, while there is a statistical difference of .4 between successful and unsuccessful candidates, no unsuccessful candidates chose poverty. While the data is insignificant and there is no relevance, the positive correlation of the data should still be considered for further study. Deracialization versus Economic Development 20% or 2 out of 10 candidates also used economic development as campaign issues in overall deracialized campaigns. Both successful and unsuccessful candidates used economic development, 1 each in this case. There is a statistical difference of .000 between successful and unsuccessful candidates. The data also shows that 20% used economic development which makes the study lack relevance. While there is no significant different, the data can be considered spurious (false) and irrelevant. Therefore, economic development versus deracialization should not be published. Deracialized Campaigns For the rest of the comparisons, 90% or 9 of the 10 candidates who ran deracialized campaigns as found in this study will be used as independent variables in comparison to the rest of the variables stipulate for African American candidates because deracialized campaigns were seen as relevant data for the sample population. Deracialization versus Newspaper Endorsement 55% or 5 out of 9 candidates who ran deracialized campaigns had newspaper endorsements. 50% or 2 out of 4 successfully elected candidates who ran deracialized campaigns were endorsed by newspapers. 60% or 3 out of 5 unsuccessful candidates who ran deracialized campaigns were endorsed by newspapers. There is a negative correlation between successful and unsuccessful candidates and there was a statistical difference of .38 between newspaper endorsement and candidates who ran successful campaigns. We should accept a false No and the data should not be published because there is too much variance. Deracialization versus Incumbency 67% or 6 out of 9 candidates who used deracialized campaigns were also incumbents prior to becoming elected as mayors. 100% or 4 out of 4 successful candidates who ran deracialized campaigns were incumbents before being elected. 40% or 2 out of 5 unsuccessful candidates who ran deracialized campaigns were incumbents prior to running for office. There is a positive correlation between successful and unsuccessful candidates. There is also a .23 statistical difference between deracialization which says there is still too much variance to consider the comparison between Incumbency and Deracialized campaigns. However, 100% of successful candidates who ran deracialized campaigns were incumbents. More studies are definitely needed for this data to be accepted as valid. Deracialization versus Previous Office Holding 89% or 8 out of 9 candidates who ran deracialized campaigns held previous government office positions. 75% or 3 out of 4 successfully elected candidates who used deracialized campaigns also held previous offices in government. 60% or 3 out of 5 unsuccessful candidates who ran deracialized campaigns held previous offices in government. There is a positive correlation between successful and unsuccessful candidates. 
There is also a .01 statistical difference between previous office holding and candidates who ran deracialized campaigns. We can reject the true No and we should publish this data as significant and relevant to the study of deracialization. Deracialization versus a White Dominant Context 67% or 6 out of 9 candidates who ran deracialized campaigns also ran against opponents in a white dominant context. 75% or 3 out of 4 successfully elected candidate who ran deracialized campaigns won in a white dominant context. 60% or 3 out of 5 unsuccessful candidates ran deracialized campaigns in a white dominant context. There is a positive correlation between successful and unsuccessful candidates who ran deracialized campaigns in a white dominant context. There is also a statistical difference of .23 between candidates in a white dominant context and those who ran deracialized campaigns. We would then reject the true No because there is no relation. However, while the data should not be published, more studies need to be done to establish a true relation between deracialization and candidates in a white dominant context because of the positive correlation between successful and unsuccessful candidates who ran deracialized campaigns in white dominant context. Deracialization and the Race of the Opponent 67% or 6 out of 9 of the candidates who ran deracialized campaigns, did so against a white opponent. 75% or 3 out 4 successfully elected candidates who ran deracialized campaigns also ran against white opponents. 60% or 3 out of 5 unsuccessful candidates who ran deracialized campaigns against a white opponent. There is a positive relation between successful and unsuccessful candidates who ran deracialized campaigns against white opponents. There is also a statistical difference of .23 between deracialization campaigns and white opponents. While there is too much variance between the two categories to publish the data, more studies need to be done due to the positive correlation between successful and unsuccessful candidates who ran deracialized campaigns against a white opponent. Deracialized Campaigns versus Opponent’s Choice of Deracialized/Racialized Campaigns 44% or 4 out of 9 candidates who ran deracialized campaigns also ran against opponents who ran deracialized campaigns. 50% or 2 out 4 successfully elected candidates who used deracialized campaigns also ran against opponents who used deracialized campaigns. 40% or 2 out of 5 unsuccessful candidates who ran deracialized campaigns also ran against opponents who used deracialized campaigns. There is a positive correlation between successful and unsuccessful candidates who ran deracialized campaigns against opponents who ran deracialized campaigns. But, there is a .46 statistical difference between candidates who ran deracialized campaigns and opponents who also ran deracialized campaigns. There is too much deviation from the level of significance to publish data. But, again, more studies need to be done because there was found to be a positive correlation between candidates who were successful and unsuccessful at running deracialized campaigns against opponents who also ran deracialized campaigns. Results/Conclusion of the study for Deracialization In this study, we should be able to use Deracialization in comparison to other variables because it was shown to be relevant for research on the sample population of ten African American candidates for mayor. 
Within these 10 candidates chosen for the study, 5 successfully elected candidates were compared with five unsuccessful candidates. Four categories of campaign issues which were repeated more than once by different candidates were used for further analysis. The categories were crime, education, poverty and economic development. Of these four categories education versus deracialized campaigns showed signs of significance and validity but more test need to run to make a valid conclusion. Poverty was not significant but showed relevance to deracialization. On the opposite end, economic development showed significance but lacked relevance for deracialization. Crime, although used the most by candidates, did not show significance or relevance to deracialized campaigns. Incumbency, previous office holding, white dominant context and race of the opponents seemed to hold the most weight in comparison to African American candidates who ran deracialized campaigns. Incumbency status was not significant but had positive correlations between successful and unsuccessful candidates for deracialization and encouraged more test. White dominant context was not significant but also encouraged more test due to the positive correlation between successful and unsuccessful candidates. The opponent race variable can be interpreted in the same exact manner as the study done with incumbency and white dominant context. Previous office holding showed the most promise of this study because it was found to be significant and relevant to deracialization. All other variables of the study are null and void for deracialization and the 10 African American candidates used for this study. It should be noted, there was no uniformity with categories of African American candidates. The best possible choices were assessed and used for this study. We can also infer many other campaign issues could have been used, but choice was left to the discretion of the person obtaining the data. The information in this study cannot be used as valid unless many more test are run with all African American candidates considered as categories to study deracialized effects. This study does, however, gives us a basis to interpret and ask valid questions in connection to African Americans and their use of deracialized campaigns and whether deracialization has been effective as a campaign agenda. Sources for Data Assignment Successful Candidates Bryon Brown versus Kevin Hefler http://www.city-buffalo.com/Home/Leadership/Mayor/Biography http://en.wikipedia.org/wiki/Byron_Brown http://www.buffalonews.com/apps/pbcs.dll/article?AID=/20130317/CITYANDREGION/1303192 58/1109 http://www.idcide.com/citydata/ny/buffalo.htm Cory Booker versus Ronald L. 
Rice http://en.wikipedia.org/wiki/Cory_Booker http://www.city-data.com/city/Newark-New-Jersey.html http://www.njleg.state.nj.us/members/BIO.asp?Leg=85 http://www.city-data.com/city/Newark-New-Jersey.html http://en.wikipedia.org/wiki/Ronald_Rice http://topics.nytimes.com/top/reference/timestopics/people/b/cory_booker/index.htm John Daniels versus John DeStefano http://books.google.com/books?id=p1ZHZKlKGCIC&pg=PA129&lpg=PA129&dq=John+Daniels+1 989+Mayoral+Campaign&source=bl&ots=36l7_T4A6E&sig=- MP4nqcFCl9717PdXo9SLQ0TlOQ&hl=en&sa=X&ei=NAtzUeyPDoGs2wW1y4GYBQ&sqi=2 &ved=0CGgQ6AEwCQ#v=onepage&q=John%20Daniels%201989%20Mayoral%20Campai gn&f=false http://www.epodunk.com/cgi-bin/popInfo.php?locIndex=9218 http://www.apnewsarchive.com/1989/Black-State-Senator-Wins-New-Haven-Democratic- Primary-With-AM-Primary-Rdp-Bjt/id-a721d86425e93f023c631aeba2735f4b Kevin Johnson versus Heather Fargo http://en.wikipedia.org/wiki/Kevin_Johnson http://sacramento.areaconnect.com/statistics.htm http://www.kevinjohnson.com/ http://en.wikipedia.org/wiki/Heather_Fargo http://sacramento.about.com/gi/o.htm?zi=1/XJ&zTi=1&sdn=sacramento&cdn=citiestowns&tm =109&gps=410_7_1366_651&f=00&su=p284.13.342.ip_p554.23.342.ip_&tt=2&bt=0&bt s=0&zu=http%3A//www.cityofsacramento.org/council/index.cfm%3Ffrpath%3Ddepart ments/home.cfm%3FMenuID%3D5003 Maynard Jackson versus Sam Massell http://www.georgiaencyclopedia.org/nge/Article.jsp?id=h-1385 https://webspace.utexas.edu/tsp228/.../Philpot%20and%20Walton.pdf http://en.wikipedia.org/wiki/Atlanta_mayoral_election,_1973 http://www.racematters.org/maynardjacksonjr.htm Unsuccessful Mayors Carl Stokes versus Ralph Locher http://en.wikipedia.org/wiki/Carl_Stokes http://ech.case.edu/cgi/article.pl?id=MAOCBS http://www.pbs.org/wgbh/amex/eyesontheprize/story/14_power.html http://books.google.com/books?id=TUKJogqXXl8C&pg=PA90&lpg=PA90&dq=Kenneth+G.+Wein berg,+Black+Victory:+Carl+Stokes+and+the+Winning+of+Cleveland+(1968),&source=bl& ots=ZgwtFSRoS5&sig=780q5W3P7OgmVxn48pOscKLCqRI&hl=en&sa=X&ei=bqN0Ue3MC 4rB2wXEpoGoBQ&ved=0CEYQ6AEwBDge#v=onepage&q=Kenneth%20G.%20Weinberg% 2C%20Black%20Victory%3A%20Carl%20Stokes%20and%20the%20Winning%20of%20Cl eveland%20(1968)%2C&f=false http://www.clevelandmemory.org/ebooks/stokes/ch7.html http://en.wikipedia.org/wiki/Ralph_S._Locher http://ech.case.edu/cgi/article.pl?id=AA http://digital.wustl.edu/e/eii/eiiweb/taf5427.0935.158sethtaft.html http://www.clevelandmemory.org/ebooks/stokes/ch7.html Cory Booker versus Sharp James http://en.wikipedia.org/wiki/Cory_Booker http://en.wikipedia.org/wiki/Street_Fight_(film) http://www.youtube.com/watch?v=rfm_1ZPDmyw http://www.lsureveille.com/article_11506591-1220-5076-bca3-8a2836e9c8f6.html?mode=jqm Kip Holden versus Tom Ed Mchugh http://en.wikipedia.org/wiki/Tom_Ed_McHugh http://en.m.wikipedia.org/wiki/Kip_Holden http://censusviewer.com/city/LA/Baton%20Rouge Larry Langford versus Richard Arrington Jr. 
http://www.youtube.com/watch?v=8Eu6aDwttA8 http://www.bhamwiki.com/wiki/index.php?title=Larry_Langford http://www.bhamwiki.com/w/Richard_Arrington%2C_Jr http://beck.library.emory.edu/southernchanges/article.php?id=sc02-3_007 http://www.bham.lib.al.us/resources/government/JeffersonCountyPopulation.aspx Tom Bradley versus Sam Yorty http://en.wikipedia.org/wiki/Sam_Yorty http://en.wikipedia.org/wiki/Los_Angeles_mayoral_election,_1969 http://www.city-data.com/forum/city-vs-city/918584-how-has-demographics-your-city-metro.html http://www.youtube.com/watch?v=tnDjQ1QbWho http://www.answers.com/topic/tom-bradley Definition of Deracialization in Introduction Liu, Baodong, "Deracialization and Urban Context", Urban Affairs Review, Vol. 38(4) 2003, pp. 572–591.
Data Collection, Classification, and Analysis: Deracialization and African American Mayors
10
data-collection-classification-and-analysis-deracialization-and-african-american-mayors-11ec6c4c12dd
2018-06-07
2018-06-07 00:07:10
https://medium.com/s/story/data-collection-classification-and-analysis-deracialization-and-african-american-mayors-11ec6c4c12dd
false
4,780
null
null
null
null
null
null
null
null
null
Politics
politics
Politics
260,013
Derrick Mills
null
dc49a0546cf2
derrickmills
9
11
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-12-15
2017-12-15 11:04:33
2017-12-15
2017-12-15 11:05:09
0
false
en
2017-12-15
2017-12-15 11:05:09
3
11ed7346294f
0.754717
0
0
0
Munich. MVTec Software GmbH has made deep learning functions available on embedded boards with NVIDIA Pascal architecture. The deep…
5
MVTec brings deep learning to NVIDIA Pascal architecture Munich. MVTec Software GmbH has made deep learning functions available on embedded boards with NVIDIA Pascal architecture. The deep learning inference in the new version 17.12 of the HALCON machine vision software was tested on NVIDIA Jetson TX2 boards based on 64-bit Arm processors. The deep learning inference, i.e., applying the trained convolutional neural network (CNN), almost reached the speed of a conventional laptop GPU (approx. 5 milliseconds). This is possible thanks to the availability of two pre-trained networks that MVTec ships with HALCON 17.12. One of them (the so called “compact” network) is optimized for speed and therefore ideally suited for use on embedded boards. A software version for this architecture is available on request. In addition to deep learning, the full functionality of the standard machine vision library HALCON is available on these embedded devices. Applications can be developed on a standard PC. The trained network, as well as the application, can then be transferred to the embedded device. Users can also utilize more powerful GPUs to train their CNN, and then execute the inference on the embedded system. eletter-12–15–2017 eletter-12–16–2017 Originally published at www.embedded-computing.com.
MVTec brings deep learning to NVIDIA Pascal architecture
0
mvtec-brings-deep-learning-to-nvidia-pascal-architecture-11ed7346294f
2017-12-15
2017-12-15 11:05:10
https://medium.com/s/story/mvtec-brings-deep-learning-to-nvidia-pascal-architecture-11ed7346294f
false
200
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Patrick Hopper
President | Publisher OpenSystems Media — Embedded, #IoT, Military, Industrial, Strategist, Social Web Innovator.
a778f19909c9
patrickhopper
77
952
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-05
2018-03-05 08:03:26
2018-03-05
2018-03-05 08:04:46
1
false
en
2018-03-05
2018-03-05 08:04:46
0
11eda4c0510f
2.233962
17
0
0
This article examines the perfect combination of automated decisioning with advanced analytics and the human interaction. Let’s start with…
5
Decision Automation: How To Augment Predictive Analytics with Human Intelligence This article examines how automated decisioning with advanced analytics can be combined with human interaction. Let's start with a simple brainteaser. Take a look at this picture: there is a set of four cards lying on a table. Your task is to verify the rule: "If there's a vowel written on one side of the card, then an even number is on the other side." Identify which card(s) you need to turn to check the validity of this rule. Most respondents give an answer straight away: it is enough to check the other side of the card "A". Another popular answer is: you need to turn both the "A" and "2" cards. Sure, we need to flip the "A" card, since this card has a vowel on it and we have no data on what's on the other side. But is it really necessary to turn the "2" card? Our rule says nothing about even numbers — therefore, we have no interest in checking this card. That doesn't mean checking the "A" card is sufficient, though. We must flip the "7" card as well to see if the other side has a vowel written on it; if it did, that would refute the rule. (A short code sketch at the end of this article spells out this logic.) This task is called the "Wason selection task" and was created by Peter Wason, a leading cognitive psychologist. According to his experiments, four out of five respondents fail to solve this puzzle correctly. Cognitive psychologists have found that people are wary of speculating on factors with a high level of ambiguity; they prefer to base their decisions only on known facts. In other words, they tend to lose sight of uncertain information. But sound and insightful decision making is impossible without the information that is "hidden" in the data. Paul Rogers and Jenny Davis-Peccoud at Bain & Company have compiled a list of 10 Decision Diseases That Plague Companies, and this ranking is topped by a lack of relevant insights or, as they call it, Blurred vision. Below we'd like to provide just a few examples of how predictive analytics and decision automation platforms can enhance decisioning in various areas. Embedding scorecards and advanced analytical models into loan origination systems allows lenders to acquire the most reliable and profitable accounts, render optimal decisions on loan pricing, and capture cross-sell opportunities. This way, forward-thinking lenders can reduce the cost of customer acquisition campaigns and improve their loan portfolio and overall profitability. Marketers can leverage decision automation platforms to design, test and implement customer lifecycle management activities and marketing campaigns. With sophisticated data analytics and behavioral scorecards, they can identify white spaces in the market and gain an information advantage over their competitors. Furthermore, marketing automation solutions enable them to monitor the performance of products and services across different target groups, and keep their focus on changing customer demands. Decision automation technologies streamline business flow and allow companies of all sizes to embed intelligence into their daily operations. Smart market players who can leverage predictive analytics to optimize their decisioning and risk management will step beyond accessing relevant insights: they will be able to automatically transform that information into profitable actions.
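For readers who like to see the card logic spelled out, here is a tiny sketch in plain Python (my own illustration, with the four faces from the puzzle hard-coded as an assumption): a card needs to be turned only if some hidden face could falsify the rule.

```python
# A tiny brute-force check of the Wason selection task described above.
# Visible faces: 'A' (vowel), 'K' (consonant), 2 (even), 7 (odd).
# Rule: "if one side shows a vowel, the other side shows an even number."

VOWELS = set("AEIOU")

def is_vowel(face):
    return isinstance(face, str) and face.upper() in VOWELS

def is_odd_number(face):
    return isinstance(face, int) and face % 2 == 1

def must_flip(visible):
    # A visible vowel is falsified by a hidden odd number;
    # a visible odd number is falsified by a hidden vowel.
    # Consonants and even numbers can never falsify the rule.
    return is_vowel(visible) or is_odd_number(visible)

if __name__ == "__main__":
    cards = ["A", "K", 2, 7]
    print([card for card in cards if must_flip(card)])  # -> ['A', 7]
```

Running it prints ['A', 7], matching the reasoning above: the "2" card can never break the rule, while the "7" card can.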
Decision Automation: How To Augment Predictive Analytics with Human Intelligence
687
decision-automation-how-to-augment-predictive-analytics-with-human-intelligence-11eda4c0510f
2018-04-27
2018-04-27 12:40:25
https://medium.com/s/story/decision-automation-how-to-augment-predictive-analytics-with-human-intelligence-11eda4c0510f
false
539
null
null
null
null
null
null
null
null
null
Big Data
big-data
Big Data
24,602
Turnkey Lender
Turnkey Lender is an intelligent SaaS for automatic borrowers’ evaluation, decision making and loan management automation. Learn more at www.turnkey-lender.com.
5c64f5fd80e0
turnkey.lender
36
14
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-03
2017-10-03 00:16:51
2017-10-03
2017-10-03 00:20:14
0
false
en
2017-10-03
2017-10-03 00:20:14
1
11edb0e602e
2.181132
8
0
0
It’s my first day at work, I come in at 9, have a nice chat and coffee with my CEO and then fire up Facebook, Twitter and LinkedIn. Since…
4
The Intern Life It’s my first day at work, I come in at 9, have a nice chat and coffee with my CEO and then fire up Facebook, Twitter and LinkedIn. Since then, that’s pretty much been my morning routine. This is my job and I love it ! I’m currently interning as a Marketing intern at Pulse Q&A. Pulse is an early stage start-up based in San Francisco. My main job is to get more registered users by posting the data we have and also sharing weekly reports to target the right audience primarily by way of Facebook, Twitter and LinkedIn. So far, my life as an intern here has been blissful. Now, what is Pulse Q&A? We’re a team of 4 and are working on building a platform that provides data driven decision making for professionals. At Pulse, we believe in a ‘give to get’ ideology. Our target audience at Pulse are CIOs, CTOs, and other IT Leaders for now. But as we grow, we want Pulse to be for anyone and everyone who has a question and needs an immediate answer — as google is to search, Pulse is to research. It is an interesting platform to work on. So, why is my job so cool? Simply because I’m learning so much more than what I envisioned and it also binds what I love and am passionate about. At Pulse, no two days are exactly the same and this is what I love about it. Today, I’m posting on Facebook, tomorrow I’ll be working on Google Analytics to go deep on user acquisition funnels. What intrigued me about Pulse to intern here? What intrigued me, was the idea itself. The questions or as we call it Pulses are all questions which you’d want answers to. But what sets it apart is instant results and its interactive nature. For example, if I have a specific industry related question, I can just go to that particular pulse, answer it and get instant breakdown of how people think. I can also filter it according to my needs. This, right here is what I find awesome! Interning at Pulse has been rewarding as it’s constantly pushing me to get out of comfort zone and do something I’ve never done. As cliched as it may sound, it is actually quite true. So what next? Pulse is now moving towards launching a web app. That’s gonna be super fun. Over the course of my internship, I came up with strategies to disseminate our content, tried to get more traction, worked on content generation, looked at analytics, and in the future I get to put on my product thinking hat along with using machine learning side of my brain. All this while having fun and creative liberty. I’m also incredibly excited to kick off the machine learning project now. I love all things data and really passionate about it and will soon be working on a project where I’ll use machine learning algorithms to build a model to enhance user experience and specifically cater to their needs. What more could I ask for? To sum up, I can just say, I love waking up in the morning and taking that train to get to work, in fact look forward to it. If you wish to enjoy your job or want to know more about us, check us out @ https://www.pulse.qa/home. We’ll be happy to answer any questions you have for us.
The Intern Life
18
the-intern-life-11edb0e602e
2018-03-07
2018-03-07 16:12:36
https://medium.com/s/story/the-intern-life-11edb0e602e
false
578
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Pulse Q&A
null
ce827c59798e
pulseqa
27
5
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-01
2018-09-01 11:35:42
2018-09-04
2018-09-04 12:01:18
1
false
en
2018-09-04
2018-09-04 12:01:18
12
11edcf4dc2ac
0.671698
0
0
0
ServCoins will be used to encourage ServAdvisor App downloads, activation and adoption.
5
ServCoin Token Economy ServCoins will be used to encourage ServAdvisor App downloads, activation and adoption. In the early stages, until ServAdvisor reaches critical mass in both our client and user base, ServAdvisor will use a reserve of ServCoins to incentivize ServAdvisor App users to rate retailers' services, upload reviews and complete other desirable actions (the ServAdvisor App Incentive Reserve). This is to ensure we create a consistently rewarding experience for the consumer from the outset, and to encourage usage while more brands and participating parties are onboarded. ServCoins can be easily stored in any compliant digital wallet and can be transferred and traded in a manner similar to any other digital currency. #cryptonews #cryptocurrency #blockchain #ICO #Crypto #TokenSale #earlybird #bitcoin #cryptokitties #altcoin #ServAdvisor #SRV
ServCoin Token Economy
0
servcoin-token-economy-11edcf4dc2ac
2018-09-04
2018-09-04 12:01:19
https://medium.com/s/story/servcoin-token-economy-11edcf4dc2ac
false
125
null
null
null
null
null
null
null
null
null
Bitcoin
bitcoin
Bitcoin
141,486
ServAdvisor
null
64017f48c363
ServAdvisor
32
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-17
2017-10-17 11:19:16
2017-10-17
2017-10-17 15:26:12
15
false
en
2017-10-17
2017-10-17 15:26:12
2
11ee7c30138
9.730189
25
2
0
In the previous article, we just only considered the straight forward network. Some common problems can be solve by such kinds of model…
4
Simple Introduction about Hourglass-like Model In the previous article, we only considered the straightforward feed-forward network. Some common problems can be solved by such models, including object classification and object recognition. Here I want to discuss another important problem, object segmentation, and then use this concept to revisit the object recognition problem. The legend of these models: the above image shows the legend that will be used in this article, and I will introduce some of the tricky ideas behind these blocks. FCN Object segmentation is another popular kind of problem. Unlike object recognition, we should recognize the object at pixel level. In the object recognition task, we can just use a bounding box to mark the region of interest; the point we care about is the location of the object, and we don't really care about the edge (or shape) of the object. In segmentation, however, we should check pixel by pixel whether it belongs to the object or not. Compared with object recognition, object segmentation is a harder problem. How do we deal with it? The first straightforward idea is: bring each pixel into the network and predict one by one! However, this is not a practical method, since the category of a pixel isn't determined by its intensity alone; the relation between neighboring pixels also influences the result. Another idea is: why don't we just revise the original CNN structure? As a result, FCN was born. The structure of the fully convolutional network. The original name of FCN is fully convolutional network [1]. In the original structure, the last layer of a CNN may be a softmax layer that predicts the probability of each category, but only one result can be produced. Long et al. therefore treated the model with another concept: the fully connected layer in the usual structure can be regarded as a convolution layer whose kernel size is the size of the whole feature map! Is it possible to predict the category of each pixel just by convolution? The answer is yes. In FCN, the image is processed through the network and a "coarse" feature response map is produced at the end. This feature response map roughly represents the category of the original image at pixel level, but its size has shrunk to 1/32 of the input. To reconstruct the original image size, the first idea is bi-linear interpolation, but it does not adapt well to realistic situations. Another idea is a learnable up-sampling method, which is more reasonable because it learns how to deal with the problem case by case. The concept of up-sampling in FCN. In the up-sampling process of FCN, the coarse feature map is operated on by a learnable up-sampling layer, and then element-wise addition with an earlier feature map is adopted. Thanks to that earlier feature map, the result recovers more of the location and detail information that was destroyed by the max pooling layers. The author proposed three kinds of results: FCN-32s, FCN-16s and FCN-8s, where the number indicates the shrinking factor. For example, the result of FCN-32s is 1/32 the size of the original image, while the result of FCN-8s goes through two learnable up-sampling layers and element-wise additions. The results of the different scale outputs of FCN. In the author's experiment, we can see that the performance of FCN-8s is the most brilliant: the detail of the margins of the human and the bicycle is clearer than in the two other results. Moreover, as the author mentioned, the performance of FCN-4s and FCN-2s is not better than FCN-8s, so it is not certain that more fusion will lead to higher accuracy.
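The fusion step just described (learnable up-sampling of the coarse score map, then element-wise addition with an earlier, higher-resolution map) is easy to sketch in code. The snippet below is a minimal illustration only, not the FCN authors' implementation: the use of PyTorch, the 21-class setting, the 1x1 scoring convolution on the skip branch and all channel sizes are my assumptions.

```python
import torch
import torch.nn as nn

class FCNFusion(nn.Module):
    """FCN-style fusion: upsample a coarse score map with a learnable
    (transposed) convolution, then add the scores of an earlier,
    higher-resolution layer element-wise (as in FCN-16s / FCN-8s)."""

    def __init__(self, num_classes=21):
        super().__init__()
        # Learnable 2x up-sampling of the coarse prediction.
        self.upsample = nn.ConvTranspose2d(num_classes, num_classes,
                                           kernel_size=4, stride=2, padding=1)
        # 1x1 convolution turning an earlier feature map (assumed to have
        # 512 channels, e.g. a VGG pool4 output) into per-class scores.
        self.skip_score = nn.Conv2d(512, num_classes, kernel_size=1)

    def forward(self, coarse_scores, skip_features):
        return self.upsample(coarse_scores) + self.skip_score(skip_features)

# Shapes only, to show the resolution doubling:
fuse = FCNFusion(num_classes=21)
coarse = torch.randn(1, 21, 7, 7)      # 1/32-resolution scores
pool4 = torch.randn(1, 512, 14, 14)    # 1/16-resolution features
print(fuse(coarse, pool4).shape)       # torch.Size([1, 21, 14, 14])
```

Chaining one more fusion of this kind with an even earlier feature map would give the FCN-8s variant discussed above.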
U-Net The FCN brought a big bang to this territory. It used a straightforward concept and gave a hint about how to solve this kind of problem. After FCN, lots of models were launched. Since the shapes of these models look like a horizontal hourglass, I call them hourglass-like models. These models have done good jobs in many different tasks, including pixel segmentation, object recognition, denoising, super-resolution, etc. The next one is U-Net [2]. The specialty of this model is that it was proposed to solve a medical imaging problem in the first place! The shape of the whole model is just like the English letter "U"; as a result, the name of this model is U-Net. You can see the shape as drawn in the original paper below. The structure of U-Net in the original paper. The original U-Net was aimed at biomedical image segmentation, a task where the goal isn't related to the category of the object. To speed up the computation, U-Net drops the last two layers of VGG; this is the first advantage of U-Net. Second, rather than using element-wise addition to fuse the information of the previous tensor and the up-sampled tensor, U-Net instead concatenates the two tensors along the channel dimension. The structure of U-Net as sketched by myself. This is another structure image of U-Net. As you can see, after each up-sampling operation, the tensor also goes through two convolution layers to reinforce the response. In the original paper, the authors also introduce a weighting scheme that emphasizes the margins between different instances; we don't consider the loss function details in this article. Generally, U-Net is a great model that uses the details of the previous layers completely. SegNet The idea of the learnable up-sampling layer is great. However, Badrinarayanan [3] thought that this kind of structure wasn't perfect enough. The major problem is max pooling. The idea of max pooling. The above image shows the process of max pooling: in each filter region, we choose the max value to become the result, which represents the strongest response to the feature. However, the location information is lost after this operation. Are there methods to solve this location-missing problem? The alternative is pooling indices. We remember the location of the max value for each filter region during the max pooling operation; after this recording, we get a max pooling mask. Later, we can utilize this mask to fill the max value back into its original corresponding position. By consulting the mask, the location information is not lost during the down-sampling process. The structure of SegNet. The above image illustrates the structure of SegNet. The twisted arrows on the lower side are the pooling-indices technique. After each stage, we just consult the mask and fill the max values into their original positions. Next, we can just use convolution layers as U-Net does. At the end, we use a softmax layer to predict the result.
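To make the pooling-indices idea concrete, here is a minimal sketch. The article's author worked in TensorFlow, but PyTorch exposes the mechanism directly through MaxPool2d(return_indices=True) and MaxUnpool2d, so that pair is used here purely for illustration; the 4x4 toy tensor is made up.

```python
import torch
import torch.nn as nn

# SegNet-style pooling indices: remember where each max came from during
# pooling, then scatter values back to those exact positions when up-sampling.
pool = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)
unpool = nn.MaxUnpool2d(kernel_size=2, stride=2)

x = torch.arange(16, dtype=torch.float32).reshape(1, 1, 4, 4)
pooled, indices = pool(x)           # pooled: 1x1x2x2, indices: where the maxima were
restored = unpool(pooled, indices)  # 1x1x4x4, maxima back in place, zeros elsewhere

print(pooled.squeeze())
print(restored.squeeze())
```

The restored map has the maxima back in their recorded positions and zeros everywhere else, which is exactly the behaviour the twisted arrows in the SegNet figure stand for.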
DeconvNet By the pooling-indices mechanism, the location detail can be preserved. Noh [4] raised another creative idea. As we know, a convolution layer learns the features of the object; we can regard the kernel as the perception retina of a specific pattern, and by sliding it over the image it produces the response to that specific feature. In other words, the process of convolution is just like "extracting" features. However, can we do this process in reverse order? We can regard the reverse as "rendering" the feature back onto a feature map. You can treat convolution as mapping the image into a low-dimensional feature response space. The structure of DeconvNet. In DeconvNet, the image goes through VGG to get a low-dimensional feature response map. This feature map retains the rough structure of the original image. Next, we render this coarse feature map into the category space: through layers of deconvolution and pooling indices, the location and category details are recovered at the end. You may be curious about one question: if DeconvNet was the first to use deconvolution as the up-sampling method, why are there some yellow parts in the FCN image? As far as I know, the original FCN authors didn't release their source code, but third-party re-implementations can be found. To simplify the "learnable" up-sampling mechanism, they just use deconvolution as the alternative; as a result, I use a yellow block to represent that design. To conclude on DeconvNet: most of its structure is similar to SegNet, and the only difference is changing the convolution layers to deconvolution layers during up-sampling. RedNet The full name of RedNet is the residual encoder-decoder network [5]. After the previous ideas, Mao et al. thought of two points. First, pooling indices aren't perfect: the mechanism records the location information in an extra mask, which not only costs extra memory but also spends time computing the max value with a sliding window. Second, the residual idea is more and more popular, so is it possible to use it to enhance performance on this task? The structure of RedNet. RedNet solves the previous two problems. To my surprise, the idea and structure of RedNet are quite simple and clear: it is composed only of convolution layers, deconvolution layers and additions! Rather than losing the location information and spending extra memory, Mao got rid of max pooling directly. In the whole process, the size of the feature map doesn't change at all; the image just goes through the layers of convolution, and then we do deconvolution and element-wise addition with the corresponding previous tensor. The two different ways of skip connection. But there's one question: where is the residual concept? In our previous experience, the skip connections are in order, like the upper one in the above image. On the other hand, RedNet's skip connections are designed like the lower one. Why did the author make this change? The two experiments related to skip connection. In fact, the author did two experiments. The first one examines the performance with and without skip connections. The left chart of the above image shows the result: the red line reaches the higher value, which shows that skip connections are essential for enhancing performance. The right side shows the result of the two different skip connection designs. The blue and green lines are the ordered residual connections (the original ResNet design by He et al.), and the other two lines are the symmetric residual connections. From the values in the chart, the symmetric assignment indeed gets the higher value; as a result, they adopt the symmetric skip connection in the end.
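The RedNet recipe (convolutions in, deconvolutions out, no pooling, and element-wise skips that cross symmetrically from encoder to decoder) also fits in a few lines. The sketch below is a toy version under my own assumptions: the real network in the paper is far deeper and its exact skip pairing differs from this compressed example, and PyTorch plus the channel count of 32 are likewise my choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyRedNet(nn.Module):
    """Toy RED-Net-style encoder-decoder: no pooling, convolutions going in,
    deconvolutions coming out, and element-wise skips that cross from
    encoder features to decoder outputs (a simplified symmetric pairing)."""

    def __init__(self, ch=32):
        super().__init__()
        self.conv1 = nn.Conv2d(3, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv3 = nn.Conv2d(ch, ch, 3, padding=1)
        self.deconv3 = nn.ConvTranspose2d(ch, ch, 3, padding=1)
        self.deconv2 = nn.ConvTranspose2d(ch, ch, 3, padding=1)
        self.deconv1 = nn.ConvTranspose2d(ch, 3, 3, padding=1)

    def forward(self, x):
        c1 = F.relu(self.conv1(x))
        c2 = F.relu(self.conv2(c1))
        c3 = F.relu(self.conv3(c2))
        d3 = F.relu(self.deconv3(c3) + c2)  # skip: encoder feature added to decoder output
        d2 = F.relu(self.deconv2(d3) + c1)  # skip: earlier encoder feature, deeper decoder
        return self.deconv1(d2) + x         # global identity skip from input to output

img = torch.randn(1, 3, 64, 64)
print(TinyRedNet()(img).shape)  # torch.Size([1, 3, 64, 64])
```

Because nothing is ever down-sampled, no pooling mask or indices need to be stored, which is precisely the point made about RedNet above.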
Experiment by myself What is the performance of each model on a real-world problem rather than a benchmark dataset? I tried to implement them myself. First, a simple dataset was made: I chose the top of my refrigerator and placed my red pen and green earphone on it. I took 20 photos for training and 2 photos for testing, and I also recorded a short extra video to check the performance in a continuous setting. You can find the whole dataset here. Next, I implemented FCN, U-Net and RedNet by myself. The framework I use is TensorFlow; since its support for pooling indices isn't sufficient, I only implemented these three models. You can see the whole project here. The results on the testing data for the three different models. In my experiment, I shrank the whole image to 104*78, a shrinking factor of 10. The base number of filters is 32, so the model follows the VGG layout but with fewer filters than the standard setting. I trained for only 500 epochs and recorded the loss every 20 epochs. The above image illustrates the results on the testing data. The number of testing images is 2, so there are 2 rows in the image. For each model, the panel is divided into three sub-regions: the leftmost one is the original image, the middle one is the ground-truth annotation, and the rightmost region is the prediction result. As you can see, all three models can capture the rough shape of the earphone. However, only FCN-8 and RedNet can describe the distribution of the red pen; the information about the red pen in U-Net is just gone. The video predictions of the three different models. The above gif image shows the video testing result. The right one is FCN, and it can capture some response for the red pen. RedNet can show the red pen completely; however, it is more sensitive to the environment, and a bunch of extra regions are circled. Maybe more training iterations could improve this a little. Conclusion The structure of the five models with the legend (in detail). In this article, I describe the progression of hourglass-like models, including FCN, U-Net, SegNet, DeconvNet and RedNet. After the introduction, I show my simple implementation of these models. Reference [1] J. Long, E. Shelhamer, and T. Darrell, "Fully Convolutional Networks for Semantic Segmentation," IEEE Trans. Pattern Anal. Mach. Intell., vol. 39, no. 4, pp. 640–651, Nov. 2014. [2] O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional Networks for Biomedical Image Segmentation," MICCAI, vol. 9351, no. Pt 1, pp. 234–241, May 2015. [3] V. Badrinarayanan, A. Kendall, and R. Cipolla, "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation." [4] H. Noh, S. Hong, and B. Han, "Learning deconvolution network for semantic segmentation," in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 1520–1528. [5] X.-J. Mao, C. Shen, and Y.-B. Yang, "Image Restoration Using Convolutional Auto-encoders with Symmetric Skip Connections."
Simple Introduction about Hourglass-like Model
43
simple-introduction-about-hourglass-like-model-11ee7c30138
2018-06-21
2018-06-21 06:31:07
https://medium.com/s/story/simple-introduction-about-hourglass-like-model-11ee7c30138
false
2,181
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Sunner Li
我不是二次宅,但我因夢想而熱血! 雖然我不是天才,但我將繼續用我的毅力、熱血和能力,解決一些問題,幫助周遭朋友,甚至改變世界的一點點!
b0cb7847d712
sunnerli
74
31
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-03-23
2018-03-23 01:53:58
2018-03-23
2018-03-23 01:54:12
0
false
en
2018-03-23
2018-03-23 01:54:12
0
11eee8e47ea5
0.475472
0
0
0
I feel like artificial intelligence has been misrepresented and misunderstood almost since it was first proposed. Artificial intelligence…
1
blog #4 I feel like artificial intelligence has been misrepresented and misunderstood almost since it was first proposed. Artificial intelligence is an area of computer science that emphasizes the creation of intelligent machines that work and react like humans. Some of the activities computers with artificial intelligence are designed for include learning, planning, problem solving, etc. My concern about artificial intelligence is that the technology could become too powerful. Newman and Chiang both make very good points in their articles. I think both authors are trying to say that artificial intelligence has the potential to transform the world. Computers and robots are effectively taking control away from the human species. Artificial intelligence can also be addictive and hard for a person to get their mind off of.
blog #4
0
blog-4-11eee8e47ea5
2018-03-23
2018-03-23 01:54:13
https://medium.com/s/story/blog-4-11eee8e47ea5
false
126
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Sophia Delloso
null
c721fcd69669
sophiadelloso
0
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-06
2018-07-06 12:33:25
2018-07-03
2018-07-03 13:54:37
2
false
en
2018-07-06
2018-07-06 12:34:11
5
11ef066db9bc
2.983333
0
0
0
My opinions. Nothing more. Why spend money on marketing research that is faster, cheaper…and cr*ppy? Wouldn’t it make more sense to trust…
5
Random Thoughts on Marketing Research, Statistics and Data Science (July 3, 2018) My opinions. Nothing more. Why spend money on marketing research that is faster, cheaper…and cr*ppy? Wouldn't it make more sense to trust our gut than untrustworthy data? Why make a bad decision more quickly? A penny saved is a penny earned. It would not be a huge exaggeration to say chaos and confusion reign in data science. To begin with, the term itself implies many things and is interpreted in a multitude of ways. Data scientist and data science are often lumped together in such a way that many job descriptions are for unicorns, instead. They list an assortment of IT skills, numerous programming languages, and experience with statistics, machine learning and software development — AI, especially. Not to mention the soft skills. Sorry, this describes a team, not an individual. Moreover, data management is not data analysis. Data analysis is not just descriptive statistics and visualization. Pattern recognition is not data analysis either, just an early step in the analytics process. Last, but not least, insights are not descriptive findings. It takes a human to turn data into insights. Meanwhile, around the globe, millions of talented, ambitious young people have been convinced that coding paves the road to riches… Some marketing researchers may wonder "What has all this got to do with me?" Unfortunately, quite a lot. One reason to worry is that many in the data science community essentially see all data as the same, apart from volume, velocity and structure. Their focus is on data management or simple predictive analytics. https://www.thedigitaltransformationpeople.com/connecting-talent/ From that perspective, marketing research data is much the same as any other data, and knowledge of marketing or research is not needed in order to mine "insights" from it. This is a threat to MR and marketing that I feel should receive more attention. We humans debate whether something is this or that endlessly and sometimes, tragically, even go to war over it. We often forget that neither this nor that actually exists — they are verbal labels. Take Myers-Briggs, for instance — wouldn't it be better just to look at dimension scores instead of 16 personality types that don't actually exist? Another example is NHST. Though now a pariah in the eyes of many statisticians, Null Hypothesis Significance Testing recognizes this ingrained need of ours to categorize, usually into…this or that. We are far less comfortable with probabilities, a human foible easy for statisticians to lose sight of because of the way we're trained to think. Personally, I'd rather look at distributions of effect size estimates than asterisks (a short sketch of what that looks like follows at the end of this piece). But, if my education and career path had taken a different course, asterisks would most likely have been my choice. Oops…this or that… IMO, branding is now more important than ever. Unfortunately, fundamental marketing knowledge and common sense are being lost, perhaps in part squeezed out by a near obsession with programming and tech. I recall one fellow, an American with an MBA, defining marketing as "selling to a lot of people." Sorry, but it's a bit more than that… Another example is a claim that the appearance of food and its packaging affects taste perceptions (true) and that we've only learned this recently through neuroscience (baloney). Kinda think that's something babies figure out sans fMRI. 
Yet another is that, once again, it’s being re-discovered that humans make decisions that aren’t always as tidy as mathematical formulas. If marketers aren’t taken seriously at C-Level, small wonder why. If you want to extract some general information from a vast amount of social media or other text data very quickly, automated text mining (now often called “AI”) can do this for you. However, if instead you want in-depth analysis and true insights, a skilled and experienced qualitative researcher is your best resource until Artificial General Intelligence arrives. That might be a long wait. Text Analytics: A Primer is a short interview with Professor Bing Liu, a noted authority on text mining, and provides an overview of the current strengths and limitations of text mining. Originally published at www.thedigitaltransformationpeople.com on July 3, 2018.
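Returning to the remark above about preferring distributions of effect size estimates to asterisks, the following is a minimal sketch of what that can look like in practice (NumPy only, with made-up data and a plain percentile bootstrap; none of this comes from the original piece).

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up "treatment" and "control" measurements.
treatment = rng.normal(loc=5.3, scale=1.0, size=80)
control = rng.normal(loc=5.0, scale=1.0, size=80)

def cohens_d(a, b):
    """Standardized mean difference with a pooled standard deviation."""
    pooled_var = (a.var(ddof=1) * (len(a) - 1) +
                  b.var(ddof=1) * (len(b) - 1)) / (len(a) + len(b) - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Bootstrap a distribution of effect sizes instead of a single asterisk.
boot = np.array([
    cohens_d(rng.choice(treatment, len(treatment), replace=True),
             rng.choice(control, len(control), replace=True))
    for _ in range(2000)
])

print(f"point estimate d = {cohens_d(treatment, control):.2f}")
print(f"95% interval     = [{np.percentile(boot, 2.5):.2f}, {np.percentile(boot, 97.5):.2f}]")
```

Instead of a single significant/not-significant verdict, the output is a point estimate of Cohen's d together with an interval showing how large, and how uncertain, the effect actually is.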
Random Thoughts on Marketing Research, Statistics and Data Science (July 3, 2018)
0
random-thoughts-on-marketing-research-statistics-and-data-science-july-3-2018-11ef066db9bc
2018-07-06
2018-07-06 12:34:12
https://medium.com/s/story/random-thoughts-on-marketing-research-statistics-and-data-science-july-3-2018-11ef066db9bc
false
689
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
The Digital Transformation People
Follow for insights on #DigitalTransformation #Disruption #FoW #Analytics #CyberSec Click here for our newsletter: https://tinyurl.com/yd3ckeqv
475f38b7f49d
TheDigitalTP
1,403
1,244
20,181,104
null
null
null
null
null
null
0
null
0
61d8f53e661f
2018-06-19
2018-06-19 01:15:21
2018-06-20
2018-06-20 15:26:41
2
false
en
2018-06-20
2018-06-20 15:30:40
4
11ef54de002b
2.805975
29
2
0
This story was first published on Cognitive World on June 18th, 2018.
5
Google Brain, now Medical Brain, leader Jeff Dean — image credit: Fortune Google's Medical Brain can now Predict Probability of Death in Hospitals, in Clinics soon. This story was first published on Cognitive World on June 18th, 2018. We know that artificial intelligence will transform healthcare as we know it. It's science fiction come alive: Google AI can help doctors predict when patients might die, and the AI is now better at predicting death than hospitals. Google's Medical Brain team is now training its AI to predict the death risk among hospital patients, and there's lots of buzz about what this will do to our healthcare systems. In a paper published in Nature in May 2018, Google's team says of its predictive algorithm: These models outperformed traditional, clinically-used predictive models in all cases. We believe that this approach can be used to create accurate and scalable predictions for a variety of clinical scenarios. Bloomberg first reported the story, and went on to explain that Google has a new algorithm that can quickly sift through thousands of digital documents in a patient's health record to find important information, and with superior number crunching the AI, once fed this data, makes predictions about the likelihood of death, discharge, and readmission. Google is indeed all-in on AI in healthcare. Image: Getty Data is a Matter of Life and Death at Times Google's 'Medical Brain' team appears to be making significant AI advances that allow Alphabet to be more involved in healthcare. This enables the technology in some cases to help doctors, basically augmenting them with more accurate predictive analytics and allowing them to make better predictions about the outcome and prognosis of particular patients, for instance, how long a patient may stay in a hospital or the likelihood that the patient may die. Google had already released an AI tool to help make sense of our genomes; basically, AI tools could help us turn information gleaned from genetic sequencing into life-saving therapies. So AI really is now at the cusp of life and death, and perhaps ready to be set loose on our health data. Google AI is able to work in surprising ways, and able to interpret and add thousands of data points. For instance, according to Bloomberg, what impressed medical experts most was Google's ability to sift through data previously out of reach: notes buried in PDFs or scribbled on old charts. The neural net gobbled up all this unruly information then spat out predictions. These may be the finest predictive algorithms in healthcare on the planet. Google's system even shows which records led it to its conclusions, and its speed and accuracy appear to be without precedent. Google is getting ready to bring the tech into clinics and use 'a slew of AI tools to predict symptoms and disease.' AI chief Jeff Dean's health research unit — sometimes referred to as Medical Brain — is working on a slew of AI tools that can predict symptoms and disease with a level of accuracy that is being met with anything from shock and hope to alarm. Here is a tool that can outperform doctors and yet also augments how they are able to assess patients. Trials put the accuracy of the AI's predictions as high as 95%. Since Alphabet Inc.'s Google declared itself an "AI-first" company in 2016, Medical Brain has shown great progress. Google's AI uses neural networks, which have proven effective at gathering data and then using it to learn and improve analysis. 
Google's system is possibly, even probably, the fastest and most accurate technique known today for evaluating a patient's medical history. What could this mean for the future of healthcare? Dean envisions the AI system steering doctors toward certain medications and diagnoses. However much AI ends up changing how doctors deal with patients, it could help improve patient outcomes, theoretically reduce error, and use patient data like never before.
Google’s Medical Brain can now Predict Probability of Death in Hospitals, in Clinics soon.
362
googles-medical-brain-can-now-predict-probability-of-death-in-hospitals-in-clinics-soon-11ef54de002b
2018-06-21
2018-06-21 12:59:20
https://medium.com/s/story/googles-medical-brain-can-now-predict-probability-of-death-in-hospitals-in-clinics-soon-11ef54de002b
false
642
Futurism articles bent on cultivating an awareness of exponential technologies while exploring the 4th industrial revolution.
null
null
null
FutureSin
null
futuresin
TECHNOLOGY,FUTURE,CRYPTOCURRENCY,BLOCKCHAIN,SOCIETY
FuturesSin
Healthcare
healthcare
Healthcare
59,511
Michael K. Spencer
Blockchain Mark Consultant, tech Futurist, prolific writer. WeChat: mikekevinspencer
e35242ed86ee
Michael_Spencer
19,107
17,653
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-13
2018-04-13 17:53:57
2018-04-03
2018-04-03 17:29:27
2
false
en
2018-04-13
2018-04-13 18:00:26
6
11efe1d9a995
1.134277
0
0
0
Be careful about what you feed your algorithm
4
The Dogs of Image Recognition Be careful about what you feed your algorithm This is caused by Deep Dream's training data set. The system's programmers at Google started strong, using a data set from ImageNet, a database of 14 million human-labeled images created by researchers at Stanford and Princeton. As we've seen, the more data an AI / neural network is trained on, the better and more reliable the results. Also, large human-labeled (meaning: 'probably correct') data sets are few and far between. But the Google team didn't use the whole database. Instead, they used a subset of the database — one that was released specifically for use in a contest. Now, ImageNet already has a bias toward dogs, with a few hundred classes out of the total being dogs. The subset the Google team chose was reportedly even more dog-heavy; Deep Dream sees dog faces everywhere because it was trained to see dog faces. Software powered by Google's Deep Dream tends to see dogs everywhere. (source: Twitter) Originally published at possibility.teledynedalsa.com on April 3, 2018. Recommended Reads Facial Recognition, Part II: Processing and Bias Facial Recognition: How to Find a Face
The Dogs of Image Recognition
0
the-dogs-of-image-recognition-11efe1d9a995
2018-04-13
2018-04-13 18:00:27
https://medium.com/s/story/the-dogs-of-image-recognition-11efe1d9a995
false
199
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Imaging Possibility
There are two sides to every innovation: engineering and imagination. It’s in the space between the two where possibility takes shape.
9afe712f7328
PossibilityHub
4
54
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-08-08
2018-08-08 19:06:01
2018-08-08
2018-08-08 19:17:52
0
false
en
2018-08-14
2018-08-14 09:41:53
0
11f03f11e9f3
0.901887
11
1
0
What is Machine Learning?
1
Introduction to Machine Learning What is Machine Learning? The name machine learning was coined in 1959 by Arthur Samuel. Machine learning is a subset of artificial intelligence in the field of computer science. It provides systems the ability to automatically learn and improve from experience without being explicitly programmed. Machine learning focuses on the development of computer programs that can access data and use it to learn for themselves. Machine learning tasks Supervised learning Supervised learning is the machine learning task of learning a function that maps an input to an output based on example input-output pairs. Supervised machine learning algorithms are trained on data sets that include labels. Examples: logistic regression, decision trees, etc. Unsupervised learning Unsupervised machine learning is the machine learning task of inferring a function that describes the structure of "unlabeled" data. Unsupervised machine learning algorithms are trained on unlabeled data. Examples: clustering, anomaly detection. Semi-supervised learning Semi-supervised learning algorithms are trained on a combination of labeled and unlabeled data. Examples: Google Expander, web page classification, etc. Reinforcement Learning Reinforcement learning allows the machine or software agent to learn its behaviour based on feedback from the environment. Examples: game playing, robotics. Machine learning applications Virtual personal assistants (examples: Siri, Alexa, Google Now) Image Recognition Speech Recognition Traffic Predictions Medical Diagnosis Email Spam and Malware Filtering Search Engine Result Refining Product Recommendations Game Playing Online Fraud Detection, etc.
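As a minimal illustration of the supervised versus unsupervised distinction listed above, here is a short sketch using scikit-learn and its built-in Iris data; the library and dataset are my choices for illustration, not something from the original post.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Supervised: the algorithm sees the labels (y_train) during training.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised: the algorithm sees only the features and infers structure.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", [int((clusters == k).sum()) for k in range(3)])
```

The classifier needs the labels in y_train to learn, while KMeans never sees them and only groups similar rows together.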
Introduction to Machine Learning
14
machine-learning-11f03f11e9f3
2018-08-14
2018-08-14 09:41:53
https://medium.com/s/story/machine-learning-11f03f11e9f3
false
239
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
srinivasarao aleti
null
7499ac7cbcb
srinivas.aleti03
18
3
20,181,104
null
null
null
null
null
null
0
null
0
cc02b7244ed9
2018-05-31
2018-05-31 06:43:54
2018-05-31
2018-05-31 06:46:03
0
false
en
2018-05-31
2018-05-31 06:46:03
11
11f0dd2b5e73
1.996226
1
0
0
PRODUCTS & SERVICES
5
Tech & Telecom news — May 31, 2018 PRODUCTS & SERVICES Applications Mary Meeker presented yesterday her “Internet Trends” report for this year, the longest ever. Many analysts are mentioning the underlying forecast that growth for tech companies could decelerate as a consequence of the saturation of internet penetration (including mobile), as time spent online doesn’t grow so fast (Story) (Presentation) Cloud The massive shift of enterprise IT to the cloud is driving Microsoft to a leading spot in the race of tech giants to reach a $1trn valuation, and has worked as the driving force for the company’s turnaround. Cloud (IaaS and SaaS) is still only 20% of total sales, but has contributed 63% of Microsoft’s overall growth this year (Story) Internet of Things BioHax, a Swedish startup, is implanting tiny electronic chips into people’s hands, opening the door to practical advantages of “human augmentation”, including easier identification (e.g. at hospitals) or using hands to pay train tickets, but also creating lots of privacy and ethical concerns. 40K Swedes are already “chipped” (Story) Regulation A US senator defended yesterday a “laissez faire” approach to tech regulation, concerned that a more aggressive policy (e.g. antitrust proposals to break up tech giants) would benefit Chinese competitors Alibaba and Tencent and hurt American interests, in a tech world where scale has become critical for value creation (Story) HARDWARE ENABLERS Components Nvidia is committed to maintain leadership in chips for AI workloads in the cloud, a fast growing semiconductor segment, where competition is increasing both from Intel and cloud giants themselves (e.g. Google, Microsoft). They just launched a new server architecture that unifies AI and high-performance computing (Story) Morgan Stanley analysts predict headwinds for Qualcomm in the coming months, even if the company recently had positive news with the approval of the NXP acquisition, and with new products for Augmented Reality headsets. However, challenges remain to monetize IP royalties and from losing business with Apple (Story) Networks US tower company Crown Castle keeps defending its strategy to capture new growth in mobile network densification and small cells, and claims the market is expanding, and validating the “neutral”, multi-carrier approach (more than 30% Crown Castle small cell sites), including the share of the fiber backhaul (Story) SOFTWARE ENABLERS Artificial Intelligence PayPal is reinforcing its suite marketing analytics tools for retailers, and just acquired Jetlore, a startup using AI to deliver personalized experiences at fashion retail chains, using “billions” of customer & product data points. This helps retailers better segment campaigns or tailor product lists to individual customers (Story) A contract with the Pentagon, including the use of AI technology to improve the targeting of drone strikes, has triggered a deep internal debate at Google, where many employees have protested the contract, concerned by ethical implications of the company entering into “Weaponized AI” or building “efficient ways to kill” (Story) Among key concerns driven by AI adoption in companies’ decision taking processes, there is a risk of sustaining, or even increasing, lack of workforce diversity, when algorithms are applied to people selection. Some experts claim for action, e.g. ensuring that systems are designed and built by a diverse set of stakeholders (Story)
Tech & Telecom news — May 31, 2018
1
tech-telecom-news-may-31-2018-11f0dd2b5e73
2018-05-31
2018-05-31 12:09:03
https://medium.com/s/story/tech-telecom-news-may-31-2018-11f0dd2b5e73
false
529
The most interesting news in technology and telecoms, every day
null
null
null
Tech / Telecom News
tech-telecom-news
TECHNOLOGY,TELECOM,VIDEO,CLOUD,ARTIFICIAL INTELLIGENCE
winwood66
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
C Gavilanes
food, football and tech / [email protected]
a1bb7d576c0f
winwood66
605
92
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-27
2018-04-27 03:53:58
2018-04-27
2018-04-27 04:03:18
1
false
en
2018-04-27
2018-04-27 04:03:18
5
11f21e577bc7
32.569811
0
0
0
what can the government bodies do and consider in maximising the benefits while minimising the risks and societal cost of AI global wide…
4
Government care and the AI revolution of human capital what can the government bodies do and consider in maximising the benefits while minimising the risks and societal cost of AI global wide adoption? here’s the link for more resources [slides/podcast/script]: http://bit.ly/government-care-and-ai. Author: Andrew Liew Weida , First Created: 25th April 2018 Hi guys, thank you for coming here. Today, I would like to talk about government care and the AI revolution. Before I go into that, I would like to help us to recollect about what was said in the last 2 talks [Art of AI and Automation] and [AI Revolution of human capital for the individual]. In the last 2 talks, we talk about what is AI ? what’s the implications of AI for human capital? What would be a world of people powered AI like? What can we as individuals do to support the ecosystem? If you recollect, we will be superhumans in the next 5 to 10 years when AI empower us to do far more greater things than we can do the same things all by our own capabilities. We can achieve a lot of intellectual horsepower by working with them. One way to think about AI is that AI is a general-purpose technology. It would be like electricity and internet that impact our day to day lives without us even thinking about it. As such, it will create jobs and kill jobs at the same time. AI will kills jobs because of a huge component of those tasks in these jobs can be automated by AI. The new jobs created by AI require more complex skills and these skills take time to develop , to build and to work with AI and Automation. And because of that, there is a labor market mismatch. This labor market mismatch gives us 2 phenomenons. We mentioned a classic example of a waiter that can serve 50 customers using pen and paper. When the boss implement digitization, the waiter needs to learn to use the ipad, use the software applications and make payment with these applications. These new processes mean that the cost of learning and cost of adapting to using new technological tools will cost the boss 25 less customers in the short run. As such, there is a potential short term loss or a recovering period to using digitization. This leads us to understanding the first reskilling deskilling paradox explains why companies are reluctant to hire talents with emerging skills , more reluctant to send workers for training. When the worker realize that he or she does not generate return of investment in his or her job using the new skills, the worker will naturally be reluctant to invest in training as well. Then we have the second reskilling deskilling paradox that explains you can make more money using your current skills than your new skills if your utilization rate of your old skills is far greater than your utilization rate of your new skills. This is also known as the “Learn Less, Make More” dilemma. Because of these 2 paradoxes, people feel anxious. New tech startups and Big companies will become leaner. We learn in my previous talk, automation leads to recomposition of jobs. If the job is repetitive, it can be automated. Jobs that will not likely to be automated are very complex. That tells us that the whole economy or the marketplace will transit into a transient workforce cycle. The nature of transient workforce cycle will propel us to learn that the only risk in life is to not take risks in your work at all. At the same time, companies become more nimble at outsourcing and hiring more and more independent contractors and gigsters. 
The full time corporate talent might eventually become a gigster over time. Income and jobs are no longer certain. You have to constantly learn and constantly adapt. When the pace of change is rapid, people are lost, worried and anxious. This is why universal income will become a global norm. We learn something. When that happens, universal income will make people feel assured and that enable a sense of inclusiveness and promotion of collaboration. We also observed the rise of philanthropic works. One of the reason for that can be the rich realize that hard work is only one of the contributing factor to their prosperity and the rich acknowledge the society plays a part in making them prosper. If the rich trust the governmental system in enabling universal income, then universal income can be a sustainable feature of a futuristic society. We also talk about another potential side effects of universal income in the society. We notice that the millennials are relatively more purpose driven workers. Universal income can get more millennials to work in the social sectors. This is especially if these sectors are located in countries with high cost of living and low wage offers, attracting bright talents will be enormously difficult. Universal income supplement the income for these millennials to increase their participation in the social sector because these millennials no longer have survival anxiety. And because of that, the society as a whole is better off. We also talk about the potential doomsday scenario that there will be rapid pace of change for people to adapt. Jobs will evolve over time. As such, how can we as a society build an ecosystem for a people powered world? This leads us to this specific topic today: Government Care and the AI Revolution on Human Capital. Introduction So i’m going to share with you 4 different ways that any government needs to think about it. The four main ideas are showing metrics, enabling human capital statements in business registry, building an API for tech startups and enabling local companies to gear up. Let’s begin by examining the state of government intention , progress and hope in institutionalizing policies to grow human capital in their respective labor markets. Government bodies are measuring progress. The first one is to foster local government needs to enable accountability and clarity of human capital metrics at the national level, the company level and the individual level. Most of the government bodies in the world talk about economic growth and they measure economic growth using GDP and that stands for Gross Domestic Product. It can also be GDP per capita or Gross Domestic Product per person. By the same equation that they measure, they will focus on boosting the investment confidence via stimulating either monetary policies by Central Banks or fiscal policies by parliamentary bodies. These are in general financial allocation and spending. And because of these dollars and cents, the government place accountabilities on their spending and sharing that information on public websites. In the same way, there are regulatory bodies that ensure corporate governance for the companies and businesses. These organizations will also have to declare their company financial statements in a business bureau of registry. It’s called ARCRA in Singapore and US Small business administration in the United States. In any country, there will be a central credit bureau for keeping records of the individual credit history or financial history. 
So you can see that there is public records of financial information at the individual level, at the company level and at the governmental level. As such, there is an ecosystem to debate and think about using financial information to improve the performance of the individual, of the company and of the government. What about the display of human capital metrics at all of these levels? Existing measurements for policies on human capital are inadequate. Let’s have a look. At the moment, we do see that government bodies do reveal information about the labor market conditions. The metrics are the unemployment rate, the number of new jobs added versus the number of people being retrenched, the number of people on payroll. However, this information only reveal information at the tip of the iceberg. It is not able to help the public understand how has government spending in improving labor market conditions. Here are some suggested metrics for measuring human capital growth. New metrics such as the transition time per employed individuals, the income transition per employed individuals, the placement rate to training subsidy per individual ratio will enable the public and the government to better understand the cyclical effects of their policies on transitional employment. It is a flow capture instead of a stock capture on the dynamic of human capital allocation and job creation-destruction state. There should also be new regulatory calls to collect information at the company level. Improvement in measuring human capital is also needed at the corporate level. At the moment, companies are only required to furnish financial information in their business registry records and government are still doing market research to understand the state of human capital development in their respective nations. This is ironical because we are living in the age of Big Data. Collecting information on human capital via market research is a poor cousin to getting companies to provide information on human capital metrics just as they did for financial information and corporate governance. This is because the former collect less data at the granular level and there needs to be more data driven policies to enable more personalised governance be it corporate governance or national governance. Without granular data, we are missing out insights to what drive the individual to work more? what drive the individual to learn better or learn more? what drive companies to spend more time and money to send its people for training? is there a ROI or return of investment in creating or subsidising training programs at the company level , at the individual level and at the national level? As such, I highly recommend government to consider making collecting human capital information a compulsory at the business registry. This small effort can have huge ramification for investment institution to assess the value of the company from the financial capital and human capital perspective. At the moment, human capital information at the broad level are available at public listed companies and even then, we have less clue about on the human capital development of the rank and file in the public listed companies let alone the small and medium businesses. It’s better to start talking about it before it’s too late. There will be a time when nations and government bodies will push for such legislation to mandating collecting human capital information and storing in the local business registry. 
It may be the time when robots and AI create a tremendous impact, or the time when a Luddite revolution reignites. When that time comes, human beings may unfortunately be visibly classified into those who can transition and those who cannot: those who can transition jump from job to job, learn very quickly and work alongside different robots and AI applications. By then, the great inequity may force us to ask: why are some individuals having difficulty transitioning? Should we tax companies that use robots, as Bill Gates has suggested? Before we get to that point, perhaps it is your responsibility and mine to discuss this now, so that the discussion can collectively drive the state and society to start collecting human capital information at the company level and to display it in the business registry, or at least a macro summary of these granular human capital statements. Human capital metrics: here are some questions to ask once we start collecting this information. Have the respective agents been open and accountable about the number of hours an individual spends studying or learning, the number of skills the individual has learned and applied, the number of jobs the individual can move into because of that learning, and how well recent government human capital policies have achieved their intended outcomes in enhancing re-employability? If we do not have such metrics to hold the respective agents accountable, nothing moves. Management starts with measurement: as the management saying goes, if you want to manage anything, you first need to measure it, and your measurement determines the outcome you achieve. In the same way, there is a lack of clear, visible metrics that align government spending with human capital re-employability. The analogy of speed limit policy and human capital investment: if you want safe roads, the government installs road cameras. Where there is a speed camera and the limit is 90 kilometres per hour, a car driving at more than 95 kilometres per hour gets photographed and the driver receives a warning with proof of speeding. By the same analogy, if the government wants to restore labor market conditions or channel people into better jobs, it should measure the individual's learning outcome per taxpayer dollar spent, the individual's time to transition into a new job per taxpayer dollar spent, and the company's profit from reskilled individuals per taxpayer dollar spent. When a company's human capital drops below the national standard for its sector, the human capital agency can prompt the company to send its people for reskilling. And when the company sees a return on investment from reskilling its people, it will place more of its profits into H-exp (human capital expenditure) rather than Cap-ex (financial capital expenditure). Perhaps some of you want to see what kind of insights collecting such information could yield. Business case for effective, transparent metrics policies: let me give you a numerical example. Suppose the government spends $100, of which $80 goes to the individual's learning and the other $20 to administering the policy. The government looks pretty efficient, given that 80% of each dollar goes to the individual's learning.
But wait, let's dig deeper into what we could learn if human capital information sat in the company registry. How much of that $80 of training actually enables the individual to apply what was learned and translate it into a new product or service that increases revenue, decreases cost, increases profit or reduces risk for the company? Suppose it were mandatory to collect working time, learning time and creative time, and suppose the individual is contracted to work 8 hours, with a company policy that 2 of those 8 hours go to learning and another 2 go to exploring creative work. We might then be able to estimate, under the policy of giving $80 to the individual to learn, the profit impact of adding an extra 10 minutes of learning, or of adding an extra 10 minutes of creative work. With such information we can build a smarter analysis of how this $80 of government-supported training turns into profitability for the company. Perhaps we would realize that a given company is not going to change its current mode of operation; then we can study the marginal profit impact of each dollar of government-supported training across a comparable set of similar companies running similar human capital operations. Perhaps we would realize the government can unlock an extra $100 of annual profit by pumping in an $82 subsidy instead of $80. If that is the case, the government can justify putting in $102.50, given that it would recoup that amount in roughly four years through a 25% tax on the $100 of annual profit that derives from the $82 trickle-down effect. So you can clearly see why it is in the public interest to ensure that companies also put human capital metrics into the records of the business registry. Human capital statements in the business registry: let's look at another example of the kind of human capital metrics companies should file. A human capital balance sheet would show the contracted hours a person can work without being worked to exhaustion, the headcount the company hires, the types of working arrangements through which it obtains human capital, and the financial resources it commits to paying, training and caring for its people. Having this information allows us to ask: are more companies transitioning to an ethical contingent workforce? Are companies developing human capital even as they deploy it for profit? When should the government intervene to ensure market stability and keep companies true to their corporate social responsibility? These questions might look like compliance issues, but they are just as much about economic sustainability. Think about it: when companies started hiring accountants, they gained clarity about their financial resource management and began optimizing the financial performance of the firm. Imagine how powerful the aggregate effect will be when companies start hiring HR engineers to quantify the impact of human capital and investors start asking questions about diversity, about how to improve the workplace and about how to care more for people. These questions will no longer pertain only to big multinational organizations but to small and medium organizations as well.
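As a rough illustration of what such a filed statement could contain, here is a hypothetical sketch. The field names and the statement format are assumptions for illustration, not an existing reporting standard in any registry.

```python
# A hypothetical sketch of a "human capital statement" record, built from the
# items listed above (hours, headcount, working arrangements, people spending).
from dataclasses import dataclass

@dataclass
class HumanCapitalStatement:
    year: int
    total_headcount: int
    headcount_by_arrangement: dict    # e.g. {"full_time": 80, "contingent": 20}
    contracted_hours_per_week: float  # contracted working hours per employee
    payroll_spend: float              # paying people
    training_spend: float             # developing people
    wellbeing_spend: float            # caring for people (benefits, health)

    def human_capital_expenditure(self) -> float:
        """Total H-exp: everything spent on people in the reporting year."""
        return self.payroll_spend + self.training_spend + self.wellbeing_spend

statement = HumanCapitalStatement(
    year=2016, total_headcount=100,
    headcount_by_arrangement={"full_time": 80, "contingent": 20},
    contracted_hours_per_week=40.0,
    payroll_spend=5_000_000, training_spend=250_000, wellbeing_spend=150_000,
)
print(statement.human_capital_expenditure())  # 5400000.0
```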
It is important because when companies want to go from where they are now to where they want to be, they need a clear view of their human capital position. Most will say that is easier said than done: how do we measure human capital, and where do we collect the data? Build an API sandbox and testbed for HR tech startups: to enable data collection, we need a way to gather these data at the national level, and one way is to leverage the Application Programming Interface, or API. If a country wants to be a smart nation, it must open up APIs for collecting human capital data. For example, the body that governs the social security fund, which in Singapore is the Central Provident Fund (CPF), needs to make it easy for companies to submit data from their existing software vendors to the CPF Board. By opening up an API infrastructure across the social security fund, the tax authority, the workforce agency, the ministry of manpower and HR technology companies, companies can immediately digitize their HR processes. The most important implication of this vision is a central system for collecting human capital data, just like what central banks already operate for money. It always amazes me that financial information is readily available and standardized, with countries enabling interbank systems via API calls, while human capital information is scattered everywhere and even harder to administer. With a centralized HR "banking" system, companies can have their own comprehensive view of their HR spending in terms of payroll, taxes, headcount and training impact. Only when everyone plays a part in advocating a seamless HR experience for the individual, the company and the nation will we be able to use human capital data to make smart decisions in an age when AI and automation are fast blurring the line between assisting human beings and substituting for them. There is an African proverb that it takes a village to raise a child: if you want a child to grow up and contribute back to society, you need the teacher, the police officer, the uncle and the parents to train and guide the child toward what the village wants the child to become. Similarly, it takes an ecosystem to build a smart nation. If we cannot enable local companies to gear up and local government bodies to open this API connection, then freelancers, gigsters and local HR tech startups will not fulfill their potential to enable a smart nation. The lack of transparency does not boost labor market confidence for companies to invest in people when technology might seemingly do the job. We may eventually see more unemployment and more chronically unemployed individuals as more AI and automation are introduced into a labor market that lacks granular transparency about the return on investment in human capital. No matter how many PR and marketing dollars are spent telling the public to adapt and grow, to upskill and reskill, giving people tough love without showing them a clear, granular path to the light at the end of the tunnel means they will eventually lose trust and possibly lose hope. It is like telling a man with a broken leg to stop using a crutch and start running when that same man does not even know when he will regain the ability to run, because the government is not giving him an X-ray.
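To make the API idea above a little more tangible, here is a minimal sketch of what a submission endpoint could look like. The route, payload fields and validation rules are all hypothetical; this does not describe any real CPF, tax authority or workforce agency API.

```python
# A minimal Flask sketch of a hypothetical national "human capital" submission
# endpoint, as imagined above. Nothing here is an existing government API.
from flask import Flask, request, jsonify

app = Flask(__name__)
REQUIRED_FIELDS = {"company_id", "year", "total_headcount",
                   "payroll_spend", "training_spend"}

@app.route("/api/v1/human-capital-statements", methods=["POST"])
def submit_statement():
    record = request.get_json(silent=True) or {}
    missing = REQUIRED_FIELDS - set(record)
    if missing:
        return jsonify({"error": f"missing fields: {sorted(missing)}"}), 400
    # In a real system the record would be persisted to the registry and
    # shared, with consent, across the relevant agencies.
    return jsonify({"status": "accepted", "company_id": record["company_id"]}), 201

if __name__ == "__main__":
    app.run(port=5000)
```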
Even if the government provides an X-ray, the man might not know whether he is recovering faster than people in similar conditions, because no one has collected data on recovery periods and benchmarks. In this analogy, the X-ray is the tool that diagnoses human capital and the data are the human capital metrics. It takes a lot of courage to open up APIs to HR tech startups so they can start collaborating, to enable companies to submit human capital data, to enable individuals to start measuring their own learning time and skills, and to enable government to use granular data for real-time, data-driven, personalized policies that help each individual reach their potential. Successful campaigns show visible metrics to the target audience: let's turn to another example of how measurement creates impact, drawn from an area government handles well, the health and wellbeing of individuals. In Singapore we are trying our best to create a healthy nation, and everyone wants to be healthy, so let's look at health-related posters in Singapore from 2015 to 2017. The first poster tells you not to consume too much sugar: one teaspoon of brown sugar takes about 2.35 minutes of jogging to burn off, one teaspoon of white sugar about 2.42 minutes, and one teaspoon of honey about 3.72 minutes. You can see the effect of taking different types of sugar and how long you need to jog to burn those calories and stay healthy. Having these metrics tells me that I need to jog longer if I take white sugar or honey rather than brown sugar; put the other way, if I jog for the same amount of time, I can choose brown sugar to take in fewer calories. The second poster shows the calorie intake of different McDonald's meals: a grilled McChicken wrap meal consisting of a bottle of water, a cup of corn and the wrap is 501 kilocalories, while a grilled chicken salad with the same bottle of water and cup of corn is 256 kilocalories. The campaign behind the second poster is called Delight 500, the idea being that you stay healthy by choosing meals of around 500 kilocalories or less. The first two posters give the individual a good indicator of the energy they take in, while the third gives a good indicator of the time needed to quit something harmful: it shows that you can quit smoking for good if you can stay off cigarettes for 28 consecutive days. So you might be wondering whether I have ever seen a poster that indicates the return on investment for an individual who starts learning. I cannot speak for other countries, but I can tell you that in Singapore from 2015 to 2017 there was no poster tying human capital return on investment, whether on training, pay or leadership, to any specific numerical indicator. Existing HR campaigns lack visible metrics: on the next slide you can see the posters in Singapore that encourage individuals to switch careers and business owners to hire new people, and there are posters on the public subway, the Singapore Mass Rapid Transit, that encourage individuals to reskill.
Why isn't there a poster with a message similar to the public health posters, something like: "learn this course in 30 days and you can most likely earn $2,000 in your future income"? Why doesn't the workforce agency have the courage to stake a numerical claim? Because that agency did not measure and collect data the way the public health agency does. If you want a successful marketing campaign, showing the value proposition in numbers alongside a call to action works most of the time. When you see those human capital posters, you find yourself asking: where are the return-on-investment metrics? When public agencies do not show individuals the dollars-and-cents impact of spending their time or money on learning, individuals will be very skeptical. Everybody knows that training, in general, is good, just as everyone knows investing is good; but everyone also realizes that training is as risky as investing. People will be thinking: how much of my time will training consume? What is the opportunity cost of that time? Time to take care of my kids, time to take care of my parents, time to rest my body, time to build my relationship with my wife? These other commitments matter to any human being. So if the government, or the company, or I myself want me to learn, then I need clarity on the return on investment of learning a skill, taking a course or picking up new knowledge. What is that return on investment in learning? In the same way, when the government wants to help individuals transition their careers through an existing campaign, it needs to show them metrics. The agency needs to be able to say: of 10 professionals who were retrenched, 6 completed this program and 4 successfully transitioned; among those 4, 3 doubled or tripled their income 6 or 7 years down the road, or at the very least the program maintained their income and prevented a downward spiral in their lifetime earnings. Where are such metrics? Without them, individuals will remain doubtful about these programs. What about labor demand? What are companies thinking about human capital policies without clear ROI metrics, and why are they so reluctant to take up new programs and policies? They question the effectiveness of these policies. If the policies are working, then perhaps there can be greater transparency showing how effectively they enable companies to generate a return on investment. The business owners are thinking like this: OK, I can hire this person under this policy with a grant, but I am still paying him and I still need to train him. I need a training manager or a line manager, or I have to train the person myself, and that is the opportunity cost of my time or my managers' time, time that could otherwise be spent generating sales or revenue. I am taking a short-term bet on this candidate using this grant, and I have no metrics to evaluate the return on investment on either the candidate or the policy. What is the return on investment of the program I am getting into? Without a clear way of measuring and collecting data to illuminate the ROI of these policies, the business owner will be reluctant and skeptical about the program. It is common sense to think that way.
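As a small illustration, here is a sketch of the kind of cohort funnel an agency could publish for a reskilling program, using only the hypothetical numbers quoted in the example above.

```python
# A tiny sketch of a program "funnel", restating the hypothetical cohort above.
retrenched = 10
completed = 6
transitioned = 4
income_doubled_or_tripled = 3

completion_rate = completed / retrenched                 # 60%
transition_rate = transitioned / retrenched              # 40%
upside_rate = income_doubled_or_tripled / transitioned   # 75% of those placed

print(f"Completion rate:  {completion_rate:.0%}")
print(f"Transition rate:  {transition_rate:.0%}")
print(f"Income upside among the placed: {upside_rate:.0%}")
```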
When a business owner sees an idea, that owner goes and tests it. The business owner talks to his or her investor, and the investor asks: what is this budget, and what is the return on investment of putting it into marketing? Similarly, the investor will, where possible, also want to know the return on investment of HR policies such as hiring key people or training young people. Adding clarity to the market: when we do not have clear metrics, we cannot add clarity to labor market dynamics, and public officers will want answers to questions such as: what is the estimated increase in payroll for companies that take up training initiatives? What is the impact of training on a company's current and future revenue within an 18-month period? Which sector sees the biggest impact on GDP growth from implementing that year's training policies? Which training institute delivers the biggest dollar impact to companies and to individuals? We are currently observing an imperfect labor market. Without clarity on labor market and human capital dynamics, everybody is like a headless housefly buzzing around, figuring they need some training without realizing the danger of being smacked by the tides of AI and automation. The public agency has clearly seen that human capital investment is not being quantified. Why should we care? Because if an investment today does not generate a return tomorrow at the national level, then I had better write it off as bad debt and quickly change my investment strategy, and I can only change what I can measure. So can we quantify human capital investment? Allow me to give an example of how we can, and of how this concept reveals labor market dysfunction. Lack of transparency on quantifying human capital investments: let me explain the effects of information asymmetry about human capital investments in the labor market. There are two lines in this chart, and the chart has two axes: the horizontal axis represents time, while the vertical axis represents resources such as effort and money. The situation is this: the economy is not doing well, many companies are retrenching and restructuring, and as a result many people are out of work for a chronic period. High unemployment might trigger social unrest or the political overthrow of the incumbent party. At the same time, new sectors are seeking to hire people into new jobs with new skill requirements. One possible reason the retrenched group cannot take up these new jobs is that they do not have the skills these new companies need. Companies that are risk-averse in human capital investment will generally hire experienced candidates for the new jobs and pay them a premium; in the same fashion, they are more likely to offer a discount on the market wage when hiring someone with no demonstrated record of using the new skills the job demands. At the same time, the retrenched workers are used to their previous wage, which may be higher than what the employers offering the new jobs are willing to pay them without those new skills. This reveals a labor mismatch gap. Without intervention, the employers will not find workers unless they look to foreigners, and the retrenched group will not find jobs unless they go overseas.
If the local government starts opening the door to an influx of foreigners while directly observing that retrenched workers cannot find local employment and are unwilling to relocate, there will be political consequences: anti-migrant sentiment and a labor market crunch. A continuous labor market crunch leaves the economy stagnant, and anti-migrant sentiment costs the incumbent party its seats. The government therefore decides to offer a subsidy that lets companies take the risk of training this retrenched group and eventually hiring them; the subsidy is intended to reduce the risk of job placement. Yet companies are not willing to take it up. Why? One possible reason is that companies do not know whether the investment, in this case training a retrenched worker, will pay off in profit. The question arises because an individual can learn and yet not necessarily be able to utilize the new skills, and skill utilization is the key to turning knowledge into productive output that grows profitability. The fact that everyone needs a different amount of time to learn and utilize new skills, coupled with the fact that incoming competitors assimilating AI and automation are catching up, accelerates the urgency for candidates to hit the ground quickly and apply skills as they learn. Good training enables a candidate to learn the skills quickly, a good environment enables the candidate to apply them, and the candidate needs a sufficient degree of learnability to absorb complex skills. All three elements are necessary for a company to see a return on investment from taking a chance on candidates without experience. While the company can create a supportive work environment and thereby satisfy one of these necessary conditions, it still needs clarity on the quality and quantity of the training and on the learnability of the individual in order to assess the return on investment of taking up the scheme. At the moment, the existing agency has not been able to provide either the return on investment of its training programs or a measure of individual learnability. That is why companies worry the worst case will happen: the employer commits to the scheme only to find that the individual cannot learn, or cannot utilize the skills to turn knowledge into products or services, creating a loss on the employer's initial investment in training and hiring. If training effectiveness is not indicated by a metric, there will always be labor market failure in the age of AI and automation. An economic recession is often signalled by poor stock market performance in a local index such as the Dow Jones or the Straits Times Index, because the stock market is an indicator of investor confidence in the economy. Similarly, there can be a lack of company confidence and a lack of job applicant confidence in the labor market.
The lack of company confidence in the labor market is skepticism about the learnability of individuals without prior experience and about the return on investment from human capital investment, while the lack of job applicant confidence is skepticism toward job search and learning, given the prolonged search and effort it may take to land jobs or gigs over time. I want to talk about another interesting phenomenon in the labor market, one that AI will either make more prevalent or diminish over time. Recall the earlier statement that companies that are risk-averse in human capital investment naturally want to hire experienced candidates; they look for experienced hires whose resume or profile matches as close to 100% of the job scope as possible. Yet the ironic truth is that experienced hires also want to learn, grow and expand their skill sets, so they are unlikely to take the same job with the same scope of work, and if they do, the incentive is to extract as much salary as possible. On top of that, individuals who take jobs matching almost 100% of the job scope experience boredom and a lack of engagement, which makes it unlikely they will fully utilize their skills, since their attention at work is probably waning or being diverted to side projects. Such an individual knows that in the age of AI and automation, the same set of skills and the same job scope will decline in value as new technologies and new skills emerge to replace them. They will be thinking: I need to take risks in this modern age, so why take the same old job again and again in a corporate environment? Why not become a gig consultant and earn as much of a premium as I can? By sticking to the same job scope I am actually increasing the risk around my next job, and if this job wants me to do exactly what I did in my previous job, I had better ask for a premium on my new pay to compensate for that future risk. As such, companies need to review the return on investment on candidates whose profiles match 100% of the job scope. The truth is that it is often a challenge to know what percentage of the job scope a candidate actually matches, because there is no common way of assessing that percentage; no one is measuring human capital as clearly as the market measures financial capital. Gurus say: add metrics to initiate impact. So what do all these examples and ideas boil down to for the government sector? They boil down to the following. The father of management theory, Peter Drucker, said, "If you can't measure it, you can't manage it." If the government wants to repair labor market failure in the age of AI and automation, it has to take a rigorous approach to adopting granular measures, just as it has done for central banking and fiscal spending. Gary Becker won the Nobel Prize in Economics for his contribution to using empirical analysis and theoretical calibration to explain the return on investment in human capital in education. Most have taken his conclusions but not his methods for collecting and analyzing human capital data. His conclusions may change in the age of AI and automation, but his empirical approach remains relevant today thanks to big data and analytics.
Another Nobel Prize winner in Economics, Daniel Kahneman, studied the psychological behavior of human beings in their rational and irrational modes. In his book Thinking, Fast and Slow, he described two modes of thinking: the fast thinking mode and the slow thinking mode. When a business owner is offered a grant, he will most likely engage the rational, slow thinking mode. If the government cannot provide an indicator of the return on investment of that human capital policy, the business owner can either take a chance or reject the offer. If the owner rejects the offer, the government has failed to use the policy to restore labor market confidence. If the owner takes a chance and makes a loss on the first attempt, that loss is locked into memory for System 1, the fast thinking mode. The government then reintroduces the scheme with a bigger subsidy but still without clarity on the return on investment; the business owner automatically drops into System 1 thinking and rejects the scheme. In the same way, the government suffers a loss of trust, because a policy failure without a clear, data-driven explanation erodes the confidence of businesses in taking up new schemes whose return on investment is ambiguous. With clear, granular data explaining the return on investment of each policy, the outcome of new human capital policies is far less likely to be dragged down by any earlier policy failures, because business owners keep feeding fresh information about the return on human capital investment into their rational thinking. Companies should start sharing and discussing human capital insights the way they do financial capital: let's look at how companies can make better human capital investment decisions in the age of AI and automation. A company records the headcount deployed in 2015 and 2016, classified into leaders, professionals and rank and file. There is no change in the number of leaders between 2015 and 2016, and leaders represent 10% of the company's total headcount. Now look at the professionals: a 33% increase, with the professional headcount rising from 30 to 40, and the professional share of the workforce growing from 30% in 2015 to 40% in 2016. At the same time, the rank and file shrank from 70 in 2015 to 60 in 2016. At first sight, most people will think: oh no, the company is not doing well and is retrenching people. But when you look at the human capital statement, you can clearly see that the total workforce remains the same at 100 workers in both 2015 and 2016. One possible story is that the company is doing well and getting ready for its next phase of growth, so it has added professionals to beef up its future capability and trimmed the rank and file to keep its running costs efficient and sustainable. With such a clear picture, government is better able to assess how its macro policies are working at the micro level on the ground.
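Restating the arithmetic of that example, here is a tiny sketch; the figures are simply the ones quoted above, and the stated totals are taken as given.

```python
# Restating the professional-headcount arithmetic from the example above.
total_workforce = {2015: 100, 2016: 100}
professionals   = {2015: 30,  2016: 40}
rank_and_file   = {2015: 70,  2016: 60}

prof_growth = (professionals[2016] - professionals[2015]) / professionals[2015]
prof_share  = {year: professionals[year] / total_workforce[year] for year in (2015, 2016)}
rank_change = rank_and_file[2016] - rank_and_file[2015]

print(f"Professional headcount growth: {prof_growth:.0%}")   # 33%
print(f"Professional share of workforce: {prof_share[2015]:.0%} to {prof_share[2016]:.0%}")
print(f"Rank-and-file change: {rank_change:+d} workers")      # -10
print(f"Total workforce: {total_workforce[2015]} to {total_workforce[2016]}")  # unchanged
```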
Saving time and effort to get opportunities: let's look at how the government can help the individual regain the confidence to learn and to job hunt. Recall that it takes a village to raise a child; it likewise takes an ecosystem to build a smart nation. We form the society, and we want a society that helps the individual save time and effort in getting opportunities. Right now, getting a job is a job in itself: the process of job hunting is not trivial, and there are a lot of steps you have to go through. One of my personal opinions is that if LinkedIn is serious about helping its members gain more economic opportunities, it should take on some corporate social responsibility, and one such CSR move would be to provide a free "one click" that populates the candidate's information into a company's job application tracking system. The current one-click feature discriminates between those who pay for it and those who do not. This one simple gesture would save individuals an enormous amount of time. Likewise, each country's government should enable one-click job applications for its citizens by opening up its API infrastructure. If Singapore is aiming for a cashless society, why not a frictionless job application experience for its people in a fast-moving labor market in the age of AI and automation? This is a public good that ensures efficient matching between individuals and companies, no different from building an extra bridge between Singapore and Malaysia to enable efficient transportation flow between two markets; I see it as a public virtual bridge between individuals and companies. The other idea for helping individuals is to create awareness among companies to hire for talent potential rather than relying on traditional methods of assessing job match. The resume is becoming obsolete, given the changing dynamics of tasks and experiences that can already be captured in a public profile such as LinkedIn. Recruiters and business owners can adopt the following thinking: can we evaluate this person? Are there indicators of talent potential and learnability? Has the individual covered similar experience that does not exactly match the job description but is what we are looking for, whether in the same setting or a different one? If the answer is yes and the profile seems to match more than 50% of the job description, let's have a chat. It is through discovering the individual's story that we can evaluate whether this person can add value to the company. One famous movie about measuring the return on investment in human capital is Moneyball. It is about a baseball general manager, Billy Beane, played by Brad Pitt, trying to change how baseball games are won and to compete with far better resourced clubs such as the previous champions, the New York Yankees, with roughly a tenth of the resources. Billy, the general manager of the Oakland Athletics, does so by hiring a new assistant, Peter Brand, played by Jonah Hill, to use an econometric approach to analyzing and scouting players. In the end, in 2002, Billy's team won by using a data-driven approach instead of the traditional approach of over-relying on scouts. Peter Brand's key message to Billy Beane was to hire players to achieve wins instead of hiring players to play the game.
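To illustrate the more-than-50% heuristic mentioned above, here is a toy sketch of a crude job-scope match score based on skill overlap. It is purely illustrative and is not LinkedIn's, any agency's, or any vendor's actual matching algorithm; the skill lists are made up.

```python
# A toy "percentage match" between a candidate's skills and a job description.
def skill_match(candidate_skills: set, job_skills: set) -> float:
    """Share of the job's required skills that the candidate already has."""
    if not job_skills:
        return 0.0
    return len(candidate_skills & job_skills) / len(job_skills)

job = {"sql", "forecasting", "stakeholder management", "python", "dashboards", "etl"}
candidate = {"sql", "stakeholder management", "dashboards", "python"}

match = skill_match(candidate, job)
print(f"Job-scope match: {match:.0%}")  # ~67%, more than half of the listed scope
if match > 0.5:
    print("Shortlist for a chat to explore potential and learnability.")
```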
In the same way, we can enable companies to win like the Oakland Athletics, but only if we start to quantify human capital and evaluate the return on investment of human capital investments such as training, compensation, benefits and leadership succession. Once we can do that, we will be able to distinguish companies with different risk appetites in their human capital investment decisions. The traditional approach is to get a headhunter to persuade very experienced people to move, and to pay both the headhunter and the experienced hire a great deal of money. There is nothing wrong with using a headhunter or hiring experienced people; the traditional approach worked and still works today. However, as we move into the age of AI and automation, this traditional approach will bring rising attrition, falling job engagement and, perhaps, a higher cost of running the business. The new way is to market the job, accept applicants whose profiles fit 50% or less of the job description, and assess the company's risk appetite in hiring, training and paying the candidate on the job, alongside the return on investment of taking that candidate, working with a modern headhunter who also uses AI and automation to recommend candidates. By doing so, the company adapts to an age in which corporate job holders transition in and out of being gigsters and companies constantly transition between a state of innovation and a state of optimization. Again, there is high risk in hiring for high potential, but there is also higher return. API: be connected, be the hub for the spokes. Human capital investment decisions, for companies and for individuals, will gain clarity if everyone, including the government, works on a common API infrastructure to communicate, engage and activate human capital activities, just as the central bank works on the cashless infrastructure in Singapore and elsewhere. So why is this important, and what are the implications? Imagine a central HR databank seamlessly connected to the business registry (ACRA in Singapore), the social security fund (the Central Provident Fund), the training bank (the SkillsFuture and workforce agencies in Singapore), the job posting bank (the Singapore Jobs Bank) and the physical and mental health bank (the Health Sciences Authority in Singapore). This HR databank, call it the HR Nexus, would be the information highway enabling efficient and effective transfer of information and services between the respective agencies and the public: locals, companies, gigsters, foreign workers and investors. It would enable investors to value a company on human capital as well as financial capital. Freelancers and gigsters could tell companies with agile workforce needs apart from companies with rigid working arrangements. Companies could reduce the cost of errors in HR paperwork, because seamless HR technology products would constantly use AI and automation to check and administer matters with all of the government agencies, without someone having to call each agency for every inquiry. This dramatically reduces the cost of HR compliance for companies if the government implements the HR Nexus project.
This HR Nexus project would eventually become the first human capital central banking system, enabling the transparency and standardization of metrics needed for the first real push toward using data to evaluate the return on investment on human capital matters. That, in turn, invites big companies to think about adopting an HR Nexus of their own for their global workforce. If so, let me share a way of thinking about managing the agile workforce and about human capital investments. The traditional way is to use psychometric instruments for hiring, traditional performance appraisals for rewarding and engagement surveys for retaining people. These are subjective data and carry inherent biases. If we use only subjective data to make human capital investment decisions, we run a very high risk of making bad bets about a person, a team and the future workforce. So how can we avoid these potentially costly mistakes? That brings us to the next questions: why not consider a quantitative perspective? How can we deploy human capital more efficiently? How can we bring operational science and statistical thinking in alongside the study of human behavior and psychology to form the basis for our human capital investment decisions? By combining these approaches, we can use machine learning techniques to better analyze and better predict the future profitability of companies given the human capital investment decisions we make today. Thank you for listening to this talk or reading this script. We have come to the end of this talk. Have a good day ahead. Note: this post is constantly reviewed and re-edited as new research appears. If you have good sources of information or insightful opinions, please write to me or tweet me. Click here for the Apple Podcast version. The original post for this blog can be found at Andrewliewweida.com
Government care and the AI revolution of human capital
0
government-care-and-the-ai-revolution-of-human-capital-11f21e577bc7
2018-04-27
2018-04-27 04:03:20
https://medium.com/s/story/government-care-and-the-ai-revolution-of-human-capital-11f21e577bc7
false
8,578
null
null
null
null
null
null
null
null
null
Economics
economics
Economics
36,686
Andrew Weida Liew
a curious writer to share insights about people, tech and biz.
163388f9df6
andrewweidaliew
272
520
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-11-24
2017-11-24 16:05:53
2017-11-26
2017-11-26 20:40:27
5
false
en
2017-11-26
2017-11-26 20:54:09
5
11f2509272cb
5.301258
35
5
0
Background
5
Share the code: adventures in education data wrangling Background When I transitioned from being a high school science teacher to becoming a data scientist, I thought it was a phenomenal stroke of luck that I was the only person within the first organization that hired me who: Had a solid knowledge base in analytics and statistics Was familiar enough with R to favor it over Excel When you add to that the fact that I was working with proprietary data that under no circumstances could be shared with the public, it put me in a very safe position — I would never have to share my code, I could work at my own pace, and I never had to subject myself to criticism. Fast forward to six months ago, when I finally came to terms with the fact that I was doing myself an utter and complete disservice on both a personal and professional level. Not only had I not improved in my programming skills, I simply wasn't learning or growing. I happen to like my comfort zone sometimes! Why share your code now? I currently work with K-12 education data, which is publicly available and therefore has far fewer restrictions on what can be shared. I'm really leaning in to the whole idea of sharing my learning experience as a way to demonstrate that learning data science can be challenging, but it's something you can absolutely do. Learning is a messy process, and you don't just wake up one day being great at data science. Or maybe you do, and I'm going about this entirely wrong. I want to get better at programming! Outside of graduate statistics coursework from 10 years ago, I've had no formal training in R. Everything I've learned has been through a haphazard "grab and go" process, and I've only just started to strategically organize my learning. When I asked a question about this on Twitter, I realized that I had never shared my code, and that maybe now was a good time to change that! This entire thread gave me all the feels — the responses are both diverse and incredible This is the sentiment that has resonated with me the most OK, so where is your code? Right here! What am I looking at? The code goes through importing and wrangling the Texas Academic Performance Report (TAPR) data for students both approaching and meeting grade level, as measured on the State of Texas Assessments of Academic Readiness (STAAR) tests. There's a whole host of additional information as well, and you can find it here. What were the project guidelines? By November 27th, I needed to have data in a format that allowed me to accomplish the following: Compare growth from 2016 to 2017 in students approaching and students meeting grade level for math and reading, looking at all students, English language learners (ELL), and students from lower socioeconomic backgrounds. The data needed to be analyzed at the state and district level. Determine which schools were in the top 10% and top 25% in terms of both achievement and year-over-year growth at both the state and district level. Determine which districts were making notable gains with English language learners and students from lower socioeconomic backgrounds. How I approached wrangling the data: There were three main steps: Link campus information with student mobility data as well as student testing data. The student testing data contained a fair bit of information coded into column headers — getting the information decoded and into columns was going to be critical to the downstream analysis.
Calculate additional metrics, including the average rate for each subgroup as well as growth from the 2015–2016 school year to the 2016–2017 school year. This project felt a lot bigger than it actually was The non-programming skills I relied on (heavily) Knowing my data. I spent several hours reading through column headers and looking at how the data was organized before starting to actually wrangle the data. This time spent familiarizing myself with the data made it easier to understand what I ultimately needed to do with the data, as well as articulate my questions to others when something wasn’t working correctly. Tenacity. Being confident in my ability to figure things out helped keep the impending sense of panic at bay. The deadline for this project was initially pretty tight (two weeks) and eventually got chopped in half, resulting in more than one moment of “oh my gosh I can’t actually do this.” It helped to remind myself that with a couple of good search terms and 20 minutes of reading, the answer can almost always be found. Knowing when to ask for help. I got to a point in the wrangling where I knew what I needed to do with the data, but I couldn’t find an answer online that I both understood and could get to work. After working at it for 30 minutes, I reached out to a couple of friends who are more fluent in R than I am, and they had the issue sorted within minutes. Recognize when I’m headed down a rabbit hole. This is where I’ll spend all of my time if I’m not careful, because it’s so darn easy to do. When I found myself reading through a weird online argument about coalesce vs. coalesce2, I knew that I had taken a wrong turn about 30 clicks back and needed to re-focus on the task at hand. It may not be the prettiest code, but it works, and it’s done! Questions you might have: Why did you [insert R programming technique here]? Because I didn’t know of another way to do it! Can I make a suggestion about your code? Yes, absolutely! Feel free to use GitHub, start a conversation here, and/or reach out to me on Twitter. Why does it look like you simply uploaded your code into GitHub? Because that’s exactly what I did. The account and repository that I primarily work in is for my current job, and contains proprietary data and analysis. As such, I’ve created files that utilize nothing but publicly available data sets and uploaded them to my long-neglected public GitHub account. How did you make it this far in data science while struggling with R? There are two big components. First, the organizations that I work with have relatively straightforward needs in terms of data science, because they have never had a “data person” on staff before. As such, a lot of my time is usually spent sorting out what the organization has in terms of data and analysis, what it needs, and then building organizational initiatives that helped bridge the gap between the two. Second, my strengths and passions are in teaching, communication, and strategic work. I absolutely love working with others to help them better understand the results from data analysis, as well as assisting them with developing their own analytical skillsets. When you couple this with my strengths in organizational strategy — such as building departments, goal-oriented planning and execution, and developing staff and organizational capacity — it’s made me an ideal fit for the organizations that I’ve worked for. Remember: being able to program is only part of being a data scientist. 
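The author's wrangling was done in R and lives in the linked GitHub repo. Purely as a language-agnostic illustration, here is a minimal pandas sketch of the two ideas described above: decoding information packed into column headers and computing year-over-year growth. The column names and values are invented for illustration and are not the actual TAPR field codes.

```python
# Minimal sketch of wide-to-long reshaping and growth calculation.
# The packed column names (subject_subgroup_year) are hypothetical.
import pandas as pd

wide = pd.DataFrame({
    "campus_id": ["C001", "C002"],
    "math_all_2016": [61.0, 55.5],
    "math_all_2017": [64.5, 58.0],
    "reading_ell_2016": [48.0, 52.5],
    "reading_ell_2017": [50.0, 51.0],
})

# Melt, then split the packed header into subject / subgroup / year columns.
long = wide.melt(id_vars="campus_id", var_name="measure", value_name="pct_meeting")
long[["subject", "subgroup", "year"]] = long["measure"].str.split("_", expand=True)
long = long.drop(columns="measure")

# Year-over-year growth per campus, subject and subgroup.
growth = (long.pivot_table(index=["campus_id", "subject", "subgroup"],
                           columns="year", values="pct_meeting")
              .assign(growth=lambda d: d["2017"] - d["2016"])
              .reset_index())
print(growth)
```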
I have more questions — what’s the best way to get in touch? Twitter is a fantastic way to connect, and I would love to hear from you!
Share the code: adventures in education data wrangling
245
share-the-code-adventures-in-education-data-wrangling-11f2509272cb
2018-05-01
2018-05-01 07:00:52
https://medium.com/s/story/share-the-code-adventures-in-education-data-wrangling-11f2509272cb
false
1,184
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Jesse Maegan
molecular biologist turned public school teacher before falling in ❤️ with non-profit data science. perpetual #rstats noob.
56211c3483cf
kierisi
754
287
20,181,104
null
null
null
null
null
null
0
null
0
d162644efe2a
2018-06-19
2018-06-19 18:51:12
2018-06-19
2018-06-19 18:54:31
1
false
en
2018-06-19
2018-06-19 18:56:46
0
11f3c4c00ee4
2.675472
15
0
0
Any idea has a creator. Similarly, OLPORTAL’s mission and idea were also a product of the imagination and creativeness of the CEO of the…
5
CEO of OLPORTAL Artem Evdokimov about the project. Any idea has a creator, and OLPORTAL's mission and idea are likewise the product of the imagination and creativity of the project's CEO, Artem Evdokimov. Here we would like to present the idea and mission of OLPORTAL from our CEO's point of view. "About two or more years ago, I firmly decided to create a genuinely necessary and useful application that would bring convenience and efficiency to its users. As a user, I was quite disappointed by the overwhelming majority of applications on the market, and I had enough skill and experience to conceive the idea of a decentralized messenger built on neural networks, to say nothing of implementing it. For me, OLPORTAL is a way out of the current situation in the technology world: there are plenty of fresh, groundbreaking technologies, but for various reasons app developers prefer to lean on the mass popularity of one idea or another. In a nutshell, they implement ideas that are traditional and already in demand. Some time ago, sitting at my desk with my smartphone in hand, I had to quit and log in to several mobile apps just to complete a few actions. This was very irritating for me and energy-consuming for my device, but it was only one of the reasons that pushed me to start developing my own application that would meet all the requirements of modern users. The last straw was my helplessness inside messengers, where I could only communicate with friends, business partners or even salespeople. All I could do was send them a few lines, files or pictures, yet it took too much time to answer all of them, and especially to carry out money transfers when needed. Moreover, I got no practical benefit from using messengers beyond chatting and swapping information and media with my interlocutors. I also disliked the fact that my chats were not protected from hackers, other wrongdoers or the authorities. These factors irritated me as a user and pushed me to develop a unique idea: chat messaging with integrated AI, with all data secured through decentralization. The mission of OLPORTAL is to simplify everyday communication through chats assisted by our neurobot assistants, which can save your time by suggesting possible answers to a question. You can also be confident that the information in your dialogues is fully secured by the project's decentralization. While developing the idea, my team and I concluded that an app of the future must have additional features that bring benefit or even income to OLPORTAL's users. With that in mind, we decided to give users the ability to train their bots not only for personal use but also for sale on our marketplace, so another person can buy your trained bot and use it in his or her chats. That is not the only way a user can earn income from the OLPORTAL ecosystem: each user can also be rewarded for allowing special advertising bots, pre-trained by an advertiser, into their chats. Summing up, I can say that OLPORTAL was originally meant to be a simple app with a few tricks in it. However, the idea has undergone great changes, and I now believe OLPORTAL will be the world's first ecosystem for unique neurobots that make users' chats more efficient and profitable. 
I am strongly convinced that this idea will become a real trend among advanced users who are searching for an optimal way to accomplish their important routine tasks." These were the thoughts on the mission and the idea shared by the CEO of the OLPORTAL project, Artem Evdokimov. More about OLPORTAL on our official channels! Stay tuned!
CEO of OLPORTAL Artem Evdokimov about the project
652
ceo-of-olportal-artem-evdokimov-about-the-project-11f3c4c00ee4
2018-06-19
2018-06-19 18:56:47
https://medium.com/s/story/ceo-of-olportal-artem-evdokimov-about-the-project-11f3c4c00ee4
false
656
The world's first decentralized messenger on neural networks with the Artificial Intelligence dialogue function.
null
OLPortal23
null
OLPORTAL
ol-portal-steps-forward-to-the-future-communicatio
MOBILE APP DEVELOPMENT,SOCIAL NETWORK,ARTIFICIAL INTELLIGENCE,NEURAL NETWORKS,BLOCKCHAIN
olportal
Messaging
messaging
Messaging
8,912
OLPORTAL
The world's first decentralized messenger on neural networks with the Artificial Intelligence dialogue function.
b4a225970600
olportal.ai
960
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-04
2018-06-04 03:40:04
2018-06-04
2018-06-04 04:12:10
3
false
en
2018-06-06
2018-06-06 02:38:54
7
11f4511fd25b
3.387736
2
0
0
I had to speak at my first conference not too long ago and was presented with the problem of developing a machine learning model on an edge…
5
My First Foray with learning Machine Learning (Part I) I had to speak at my first conference not too long ago and was presented with the problem of developing a machine learning model on an edge device. Not just that: considering my target audience, it gets better, because training and inference had to run entirely disconnected from the internet. This, I later found out, is neither a hard nor a unique problem. However, if you have zero background and experience with machine learning, it can become a daunting task very quickly. Disclaimer: this story does not end with a sexy model with over 70% accuracy (I know, right?!). What it does do is communicate the resources I began with, my cat fight with online tutorials, and the things I learned about machine learning development by DOING. So, let's get started… I knew (for various reasons, including the fact that my audience loves pictures) that I was going to shoot for an object classification and object recognition model. If you are lost at this point, have no fear: I have a separate article I will be publishing (see my channel) that breaks down, Barney style, the machine learning basics I knew before tackling this project. So where to find data? I started looking and soon stumbled upon Kaggle. Kaggle is amazing: if you're looking for free datasets and an awesome open source community around machine learning and big data science challenges, Kaggle is the place to go. I ended up picking the OIRDS dataset, a dataset of overhead images of various vehicles, as seen below: OIRDS dataset sample. For future laughs, please notice the first image. What is different about this one compared to the others? [You shall find out, don't worry. It won't take you weeks to do this either, like it did for me…] So, awesome. There is the dataset. What did I have to do to build an object recognition model? Well, I Googled it. I also asked smarter people at Google to throw resources my way. I ended up with three places to start: this O'Reilly article on using transfer learning with the Inception v3 model, this article on TensorFlow (plus a tutorial), and finally the TensorFlow for Poets Codelab. These were all great resources; however, they failed me on a few things that I shall guide you through momentarily. I began with the TensorFlow article and then moved on to the Codelab. I would recommend starting with those two resources: they are excellent at giving you a high-level understanding of what transfer learning is, guiding you through training your first object recognition model, and then giving you the tools and guidance you need to train your own dataset. Unless… you pick a dataset like mine. Advice here: if you would like to train your own dataset, I would recommend starting with a simple dataset that resembles the flowers dataset, meaning images with single objects in them [more on this later] that are clear and, most importantly, are all the same size (dimensions)! So there I went, sifting and sorting through all 1000+ OIRDS images and throwing them into category folders as TensorFlow for Poets taught me. Except… now scroll up to the sample images I included earlier. First note: the first image is different because… it contains two vehicles. Also notice the quality of the images: can you really tell the difference between the types of vehicles? If you have no FMV-PED experience or training (even if you do), I challenge you to download the OIRDS dataset and notice the quality of the images. Back to the quality of the images. 
It is very important (in real life) for your training dataset to accurately reflect the image quality and variation you expect to see in production. This is, first, because a mismatch will make actually training your model hell, and second, because a representative dataset leaves your model better prepared for production, where not every image is perfect. Avoid the classic code-breaking-in-production phenomenon as much as you can. So, naturally, the mistake I made was that I was now sorting images into duplicates, with images containing both cars and trucks placed in the CARS folder and that same image copied into the TRUCKS folder. This seriously f-ed up my model training process, giving me a whopping 29% accuracy on the first try. Have no fear… Part II will bring you to the second part of the journey.
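Since the fix here is simply to give the retraining script one folder per class containing unambiguous images, here is a minimal sketch of that preparation step. The labels mapping is a stand-in for however you parse the OIRDS annotations, and the file paths are hypothetical; this is not the author's actual code.

```python
# Minimal sketch: build the one-subdirectory-per-class layout that the
# TensorFlow for Poets retraining script expects, skipping any image that
# carries more than one vehicle label (instead of copying it into several
# class folders, which is the duplicate-folder mistake described above).
import shutil
from pathlib import Path

# Hypothetical mapping: image file -> set of vehicle labels found in it.
labels = {
    "img_0001.png": {"car"},
    "img_0002.png": {"truck"},
    "img_0003.png": {"car", "truck"},   # two vehicles: ambiguous, so skip it
}

src_dir = Path("oirds_images")
dst_dir = Path("training_data")

for filename, label_set in labels.items():
    if len(label_set) != 1:
        continue                          # only keep single-label images
    label = next(iter(label_set))
    target = dst_dir / label
    target.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src_dir / filename, target / filename)
```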
My First Foray with learning Machine Learning (Part I)
2
my-first-foray-with-learning-machine-learning-part-i-11f4511fd25b
2018-06-06
2018-06-06 14:30:59
https://medium.com/s/story/my-first-foray-with-learning-machine-learning-part-i-11f4511fd25b
false
752
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Amina Al Sherif
null
de46c1e173d3
amina.alsherif
34
3
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-26
2018-06-26 18:59:06
2018-06-26
2018-06-26 22:06:02
1
false
ru
2018-06-27
2018-06-27 10:34:31
9
11f6368fc1de
1.350943
1
1
0
As cyber threats grow at a tremendous pace and become ever more sophisticated, today's cybersecurity solutions are…
5
HEROIC.com — The Future of Cybersecurity, Powered by Artificial Intelligence and Blockchain. Using big data, artificial intelligence and blockchain, HEROIC.com is launching the next generation of cybersecurity. As cyber threats grow at a tremendous pace and become ever more sophisticated, today's cybersecurity solutions are outdated and ineffective. The vast majority of threat data is controlled by large corporations and governments, which makes building next-generation solutions that would significantly improve protection far more complicated and expensive. Recent advances in AI-based threat protection are promising, but these technologies are almost entirely controlled by and used in large corporate applications, and remain out of reach for ordinary people, who are the most vulnerable to attacks. HEROIC.com takes a new approach to threat protection, combining AI with blockchain technology. This will change cybersecurity and introduce a new generation of solutions accessible to everyone. HEROIC.com will empower and incentivize developers and companies to build the next generation of cybersecurity through the HEROIC.com ecosystem, which includes an open threat intelligence exchange called HEROIC Arc Reactor™, a unified security management platform called HEROIC Guardian™, and research and development environments. We believe that the combination of cyber threat data, artificial intelligence and blockchain technology is the future of cybersecurity. The HEROIC.com ecosystem and the HRO token are set to become the new standard used across the cybersecurity industry to ensure security, privacy and trust globally. Today our whole lives are at risk because personal data is not protected. Almost everything we use in everyday life stores our personal data: we deposit money in a bank or shop online, and we rely on technology, from alarm clocks and toasters to children's toys and cars. However carefree we may feel, the cybersecurity threat is already getting out of control, and the state of the industry today is genuinely critical. Experts estimate that by 2020 there will be around 30 billion devices connected to the internet. Official resources of the platform: Website: https://tokensale.heroic.com Twitter: https://twitter.com/heroiccyber Facebook: https://www.facebook.com/heroiccybersecurity Telegram: https://t.me/heroichq Medium: https://medium.com/heroic-com Github: https://github.com/HeroicCybersecurity/ YouTube: https://www.youtube.com/c/HEROICCybersecurity Bitcointalk.org: https://bitcointalk.org/index.php?topic=3992644.0
HEROIC.com — The Future of Cybersecurity, Powered by Artificial Intelligence and…
3
heroic-com-будущее-кибербезопасности-с-использованием-искусственного-интеллекта-и-технологии-11f6368fc1de
2018-06-27
2018-06-27 10:34:31
https://medium.com/s/story/heroic-com-будущее-кибербезопасности-с-использованием-искусственного-интеллекта-и-технологии-11f6368fc1de
false
305
null
null
null
null
null
null
null
null
null
Cybersecurity
cybersecurity
Cybersecurity
24,500
Yaroslav Kovalow
null
a143ca79d7de
yaroslavkow
1
3
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-04-25
2018-04-25 10:23:02
2018-04-25
2018-04-25 10:23:51
1
false
en
2018-04-25
2018-04-25 10:23:51
6
11f669ae477a
3.249057
0
0
0
Thirty or so years ago when the internet was in its relative infancy, it would have been hard to imagine the smart homes we have…
3
THE SMART HOME IN 2050 Thirty or so years ago, when the internet was in its relative infancy, it would have been hard to imagine the smart homes we have today… appliances connected to the internet, automated tasks and voice activators that allow us to tell our homes what to do. Which begs the question: if we've come this far, how will smart tech shape up in the future? Here are five predictions for the smart home as we head towards 2050. Artificial Intelligence Just as artificial intelligence (AI) and machine learning will play an increasing role in the way we shop, they will also play a role in the smart home, and in many ways they are already making their presence felt. AI is the basis of intuitive entertainment like suggested playlists or movies, but in the smart home of the future it will extend to suggestions built around your lifestyle preferences. For example, your smart home hub may prompt you to turn on the oven because it knows from your online shopping and previous habits that you like to eat a roast on Sundays. Wearable interaction Arguably one of the next major leaps for the smart home will enable it to tailor your living environment to your personal health and lifestyle needs. Feasibly this means your home will access wearable devices (or something similar) to understand your current physical condition and react accordingly. For example, your smart watch and smart bed monitor may track your sleeping patterns and ascertain that you didn't get much sleep last night. The next night your home may suggest or activate an environment more conducive to sleep: dimming lights, setting a warmer ambient temperature or suggesting more relaxed movies. Meanwhile, the UK Mirror tips: "For the 36% of people in the UK who only get five hours of sleep per night, the sleeping environment will be enhanced through anti-sound technology, removing unwanted noise, pollen and virus filters. Beds in 2050 will adjust fabric textures, rigidity and temperature for a personalised, well-rested night's sleep". Robotic aids Robots are already being implemented by retail trailblazers like Amazon, and they may soon find gainful employment in the home. We're not talking Jetsonesque robots here but subtle and streamlined aids. Consider that this year's CES already saw a consumer version of a robot-like entertainment system. Keecker is an egg-shaped rolling robot that features a projector, sound system and home monitoring capability. Meanwhile, The Mirror predicts: "Tomorrow's generation will spend less time cooking and cleaning, but more time socialising with their full-resolution life-sized 3D friends, who are only virtually present." More simulation From the way we shop to the movies we watch and how we seek inspiration to decorate our homes, at least some of the future of the smart home will fall to virtual reality (VR). The ability to feel as if we are genuinely experiencing something will mean a host of activities we used to leave the house for will now take place at home. Want to try on an outfit? Just use virtual reality. Want to experience a foreign land? Virtual travel enthusiastically awaits. Or do you wish to see what that couch in the catalogue would look like in your home? VR and augmented reality are ready and willing to assist. Different approach to decoration Meanwhile, when it comes to decorating your home, there could be major innovations in furnishings and home décor. The Mirror tips: "By 2050, fabrics will have the ability to change appearance, colours, patterns and textures enabled by intrinsic smart yarns.
"Additionally, active contact lenses worn in the home will allow people to change their décor frequently, a simply decorated room could look elaborate and luxurious — all this combined will make redecorating an easy everyday option." Futurologist Dr Ian Pearson BSc further explains: "Furnishings will often adapt to our body shapes to make us perfectly comfy. Visitors' personal profiles will be used to adapt furniture to them too". The truth is that some of these predictions will hit the mark and some will miss when it comes to the home of 2050, just as the outlandish futuristic films of the 1960s proved somewhat off-base by 2017. Regardless, the home of 2050 will be more automated, more intrinsically intelligent and will cater more to our individual needs as smart home technology continues its rapid evolution. About Lera Lera Smart Home Solutions is a leading installer of smart home technology in the greater Sydney region. Our team boasts over 20 years' experience in IT networking, programming and the electrical industry. We have sourced the most reliable and cost-efficient solutions from around the world to provide the very best in smart home solutions, and work with our clients to understand their needs. You can learn more about transforming your house into a smart home here, or contact us directly for further advice.
THE SMART HOME IN 2050
0
the-smart-home-in-2050-11f669ae477a
2018-04-25
2018-04-25 10:23:52
https://medium.com/s/story/the-smart-home-in-2050-11f669ae477a
false
808
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Ian walker
Lera Smart Home Solutions provides home automation solutions and installations in Sydney. http://lerasmarthomes.com.au/
301789337e1e
mynameisianwalker
2
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-02-14
2018-02-14 09:34:01
2018-02-14
2018-02-14 09:35:30
2
false
en
2018-02-14
2018-02-14 09:35:30
2
11f7bf68a799
1.817296
1
0
0
LinkedIn has become an essential tool for building your personal brand. It is considered the most popular professional…
5
How Can I Improve My LinkedIn Profile To Get A Data Scientist Job? LinkedIn has become an essential tool for building your personal brand. It is considered the most popular professional, business-oriented social network for connecting with companies and organizations, thought leaders and other professionals in your industry. These days HR teams and recruiters use LinkedIn to hire professionals. Because LinkedIn is a high-authority site, it ranks well in the Google search engine, so if you create a LinkedIn page in your name you have a better chance of ranking for it. Tips to optimize your LinkedIn profile: Headline: Let your headline explain what you do. As Einstein put it: "If you can't explain it to a six-year-old, you don't understand it yourself." Some sample headlines for a data scientist: "I can make your data sell more goods" "I can make your data tell a story" Career summary: Use keywords in your career summary to increase the visibility of your profile. A keyword is simply a term you would like your profile to rank for on LinkedIn. Sample career summary: Experience: The experience section on LinkedIn connects you with your company. Make sure the logo of the company you worked for is attached in the experience section of your LinkedIn profile. Personalize your LinkedIn URL: Make sure to personalize your profile URL. By default, LinkedIn appends a numeric value to the URL. Remove that numeric value and add your name to the URL to get more visibility; it will also help people who know your name remember your profile. Add value to the readers: Search for posts related to your field and share your expertise. One of the best ways to improve your visibility is adding value to your network through LinkedIn updates. Add certifications to your LinkedIn profile: Increase your marketability by adding certifications to your profile. LinkedIn has over 400 million users, making it the largest professional network and an ideal place to find that dream job. Make yourself stand out from the crowd with a certification. At Livewire we provide a data analytics course with a course-completion certificate. As we are an ISO-certified training institute, our course-completion certification is internationally recognized. Originally published at www.livewireindia.com.
How Can I Improve My LinkedIn Profile To Get A Data Scientist Job?
1
how-can-i-improve-my-linkedin-profile-to-get-a-data-scientist-job-11f7bf68a799
2018-02-14
2018-02-14 11:50:43
https://medium.com/s/story/how-can-i-improve-my-linkedin-profile-to-get-a-data-scientist-job-11f7bf68a799
false
380
null
null
null
null
null
null
null
null
null
Data Science
data-science
Data Science
33,617
Pavi Vasa
null
a611b6d770d7
pavivasa
14
34
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-06-21
2018-06-21 16:01:37
2018-06-21
2018-06-21 16:00:15
7
false
en
2018-09-26
2018-09-26 16:24:56
2
11f8805ea51
3.310377
0
0
0
Millennials, the largest segment of the population, have come of age during a time of technological change, globalization and economic…
5
Millennials & Mobile Commerce Millennials, the largest segment of the population, have come of age during a time of technological change, globalization and economic disruption. That has given them a different set of behaviors and experiences than their parents. A growing number of millennials are choosing to live at home with their parents. This gives them incredible buying power and disposable income, which is why they are the optimal audience for the EZeeBUY™ smartphone application. They are social media savvy and shop online with the highest frequency of any age group. Millennials' affinity for technology is reshaping the retail space. With product information, reviews and price comparisons at their fingertips, millennials are turning to brands that can offer maximum convenience at the lowest cost. The Internet, smartphones, social media, instant messaging… these are natural daily tools for millennials. The future of retailing belongs to those who successfully transition from targeting traditional demographic social groups to behavioral clusters. They are also the first generation of digital natives, and their affinity for technology helps shape how they shop. They are used to instant access to price comparisons, product information and peer reviews. In the US, millennials will rocket to 75% of the workforce by 2020, which is why EZeeBUY™ is targeting millennials now and into the future. 40% of male millennials and 33% of female millennials say they would purchase everything online if it were easier on a smartphone. In the US, millennials have an annual purchasing power of $170 billion. The EZeeBUY™ application will incorporate the latest technologies, including artificial intelligence, machine learning and augmented reality, to enable experiential buying. EZeeBUY™ is the Amazon for Millennials in "Day 1" mode One of the largest generations in history is about to move into its prime spending years, generating billions in additional revenue streams for firms that can capture their spending habits. Millennials are poised to reshape the economy; their unique habits and preferences will change the way we buy and sell, forcing companies to change the way they do business for decades to come. Millennials have come of age in a different world, and hold a different worldview. They have grown up with technology embedded in their daily lives during a time of rapid change, giving them a set of priorities and expectations sharply different from previous generations. The Millennial generation is the biggest in US history, even bigger than the Baby Boomers. Source: US Census Bureau Social Savvy and Always Connected The online world, and social media in particular, has given millennials a platform to reach the world. "After searching online, how do you communicate with others about a service, product, or a brand?" Source: Prosper Insights & Analytics for the Media Behavior and Influence Study The First Digital Natives Millennials have grown up with the Internet and smartphones in an always-on digital world. "Which online activities do you regularly do for entertainment and engagement?" Instant Messaging & Social Media Platforms. Source: Prosper Insights & Analytics for the Media Behavior and Influence Study Social media use continues to grow rapidly; the number of people using the top platforms in each major country around the world has increased by almost 1 million new users every day over the past month.
More than 3 billion people around the world now use social media each month, with 9 in 10 of those users accessing their chosen platforms through smartphone devices. Social Media Influencers. Source: Prosper Insights & Analytics for the Media Behavior and Influence Study This is why the EZeeBUY™ smartphone application will have full social media integration, where users can share buying experiences directly to social platforms to promote brand loyalty. This will, in turn, allow brands and retailers to promote more personalized offers directly to consumers through the EZeeBUY™ smartphone application. For more information about EZeeBUY™ and the EZeeBUY™ ICO, visit: www.ezeebuy.ai "Buying made EZee — just take a picture!" Originally published at medium.com on June 21, 2018.
Millennials & Mobile Commerce
0
millennials-mobile-commerce-11f8805ea51
2018-09-26
2018-09-26 16:24:56
https://medium.com/s/story/millennials-mobile-commerce-11f8805ea51
false
599
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
David Pipe
Co-Founder & Chief Marketing Officer at EZeeBUY™
23cd61a8cdf7
davidpipe_71703
11
4
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-13
2017-09-13 19:34:36
2017-09-13
2017-09-13 19:54:28
1
false
en
2017-09-13
2017-09-13 21:37:05
0
11f8bd154f3d
2.562264
6
0
0
(Have been kindly asked to give evidence at the All-Party Parliamentary Group on Artificial Intelligence as expert advisor, here’s the…
5
My testimony at the UK Parliament (Have been kindly asked to give evidence at the All-Party Parliamentary Group on Artificial Intelligence as expert advisor, here's the transcript of my speech) (It was a dark and stormy afternoon, and the Big Ben wasn't bonging) Thank you for the opportunity to offer evidence before this committee. Exponential growth in technology is not a new concept. It was first experienced by humanity 2.5 million years ago with the invention of the Oldowan, the oldest stone tool. It was followed by the controlled use of fire 700,000 years Before Present, then language 200,000 years Before Present, then the wheel 5,500 years ago, continuing its acceleration through the industrial age and into our current day. Look at the results: population growth, energy generation, GDP, GDP per capita, and information produced. All exponential growth enabled by technology. This means that within the next few Parliament terms, technology acceleration boosted by AI will bring fundamental changes to society, and you are going to have to debate legislation about the rights of a living being that is not based on DNA; about the self of one person merging with the self of another to create a new organism; about communicating with other mammals or even cephalopods in their own language; about discovering that privacy as a concept might cease to exist when we can read minds at a distance; or about computers that can perform calculations in parallel universes. Everything I mentioned has existed in the lab for close to a decade; there are even teams building commercial ventures around some of it. This is the present. And in this, AI is a rising tide lifting all boats. It's not just that AI in itself is important. It's that applied AI is changing the world by acting as a force multiplier for other technologies. This change is coming from just one branch of AI, called Machine Learning. It is very important that whatever legislation you are considering is not based on the limitations of today's Machine Learning, because those limitations might not hold true in a few years. As technologists, we are in control of the future, as we are the ones inventing it. But we are not in control of its timing. Today AI is a land grab that will overwhelmingly benefit the winners in this race. The UK is the best country in the world for global contributions to science and technology. But it also has a 100-year history of failing to capitalize on those discoveries and generate wealth from them. In conclusion: we are at an inflection point and Britain is one of the leaders in AI. But that could soon change. China had its Sputnik moment and is making AI research an absolute priority after seeing the world's best players crushed at Go. The United States had its first congressional hearing on AI last week. Russia has said that AI leaders will rule the world. The governments of these countries are already buying equity positions in UK companies, either directly or via proxies. In some cases the entrepreneurs are not even aware of it. This calls for a national strategy on AI: things like increased funding and data sharing, such as opening up NHS data, and pushing the private sector, the military and, crucially, the intelligence community to adopt new technologies. But having a national strategy on AI is not enough if we come from a place of weakness. We must address the lack of entrepreneurship, the need for immigration, job destruction, low productivity and poor education to align the whole country toward the goal of ensuring that our leadership in this field is not lost.
It’s about securing our future wealth, about the defense of our borders, and our culture. It is about the survival of our society. I am available to explore these subjects in greater detail at your convenience. Thank you.
My testimony at the UK Parliament
57
my-testimony-at-the-uk-parliament-11f8bd154f3d
2018-06-06
2018-06-06 09:06:47
https://medium.com/s/story/my-testimony-at-the-uk-parliament-11f8bd154f3d
false
626
null
null
null
null
null
null
null
null
null
Artificial Intelligence
artificial-intelligence
Artificial Intelligence
66,154
Rodolfo Rosini
Partner @Zeroth_AI. Excited about AI, UX, Mobile, Security and DUNE. Often NSFW and WTF.
27b95bb5ec10
rodolfor
1,172
216
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-07-01
2018-07-01 11:29:19
2018-07-01
2018-07-01 11:26:21
3
false
en
2018-07-01
2018-07-01 11:29:28
2
11f9c7bd7638
1.927358
0
0
0
AI is not about to go mainstream for consumer marketing.
2
AI has potential to upend marketing data AI is not about to go mainstream for consumer marketing. The real opportunity is for IT to implement AI for marketers so they can sort through all the data they have to uncover opportunities. Even with AI, marketers need to ask the "right questions". If there is one consistency I have found working with CPG marketers, it's that most are overwhelmed with data, and turning piles of data into actionable insights has been a challenge. Imagine, for a second, a brand marketing manager being able to ask AI "what is the volume of our best selling product?" or "who is our most profitable customer?". There is real potential to uncover hidden gems within marketing management, but unfortunately it seems we are a long way from getting there. I once asked a CPG marketer for some basic sales numbers, and it took most of the day to get them in a messy Excel spreadsheet that then took me over a day to clean up into meaningful data. We have been hearing about big data for a long time, but two things are preventing big data from being really useful. First, the data is often hard to get, and second, you need people who ask the right questions and can interpret big data into actionable and meaningful insights. There is a substantial gap in these areas, as company culture tends to eat good marketers for breakfast. Even with big data and AI, marketers still need to focus on the basics: delivering a good product at a fair price. The basic brand promise is continually broken, which explains why private label sales are increasing so rapidly. How many marketers actually get on a plane, go into a store and talk with retail people about their product, or observe where it's merchandised compared to competitors? IT departments should view marketing as their customer, and as such they should be experimenting with AI, but marketers also have to focus on the basic brand and product promise and realize that their audience only wants a relationship that involves delivering a better product experience. Originally published at www.newmediaandmarketing.com on July 1, 2018.
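To make the "right questions" concrete, here is a minimal sketch of the kind of query the article imagines an AI answering, assuming a hypothetical sales table with product, customer, units and profit columns; the column names and numbers are invented for illustration, not taken from any real CPG dataset.

```python
# Hypothetical example: answering two of the article's questions from a tidy sales table.
import pandas as pd

sales = pd.DataFrame({
    "product":  ["Bar A", "Bar A", "Bar B", "Bar B", "Bar C"],
    "customer": ["RetailCo", "ShopMart", "RetailCo", "GroceryX", "ShopMart"],
    "units":    [1200, 800, 1500, 700, 300],
    "profit":   [2400.0, 1600.0, 1800.0, 900.0, 450.0],
})

# "What is the volume of our best selling product?"
volume_by_product = sales.groupby("product")["units"].sum()
best_seller = volume_by_product.idxmax()
print(best_seller, volume_by_product[best_seller])

# "Who is our most profitable customer?"
profit_by_customer = sales.groupby("customer")["profit"].sum()
print(profit_by_customer.idxmax(), profit_by_customer.max())
```

An AI layer would translate the natural-language question into exactly this kind of aggregation; the hard part the article points to is getting the underlying data clean enough for the aggregation to mean anything.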
AI has potential to upend marketing data
0
ai-has-potential-to-upend-marketing-data-11f9c7bd7638
2018-07-01
2018-07-01 11:29:28
https://medium.com/s/story/ai-has-potential-to-upend-marketing-data-11f9c7bd7638
false
365
null
null
null
null
null
null
null
null
null
Marketing
marketing
Marketing
170,910
Richard A Meyer
Marketing contrarian with over 15 years of developing leading edge and award winning digital marketing initiatives.
224b9af7f029
richardameyer
61
25
20,181,104
null
null
null
null
null
null
0
null
0
183401713085
2017-10-20
2017-10-20 08:27:00
2017-11-01
2017-11-01 11:35:35
7
false
ru
2018-09-02
2018-09-02 17:35:45
7
11febc28cdf9
14.223585
17
0
0
We sat in silence for about five minutes, just looking at each other. We, the lawyers, because we had never seen a robot that looked so much like a human…
5
I, Robot (Private Entrepreneur, Group 3) We sat in silence for about five minutes, just looking at each other. We, the lawyers, because we had never seen a robot that looked so much like a human. He, the robot lawyer, because we had broken his brain by telling him we would be paying the unified social contribution (ESV) on his behalf. After all, taxing robotics is the only way to fight the unemployment caused by progress, isn't it? This text, by Oksana Kochkodan-Ilf and Andriy Honchar-Petrov, was first published by our friends at the Yurydychna Praktyka (Legal Practice) newspaper. The robot had almost finished filling in its tax return, but became obsolete before it could. Even in Čapek's day, the artificial human was created to help workers do hard work. Robots with no instinct for self-preservation cope better with work that is round-the-clock (as at a law firm) or dangerous (as at a law firm specializing in criminal law). And although the purpose of robotization is altruistic, to free people from heavy and meaningless labor, even for Karel Čapek it all ended badly: his robots seized the world and killed everyone. Perhaps because their labor had been taxed. Čapek's robots heard about taxes on their labor and became even unhappier. Although in the robots' place, that is exactly what we would have done. When new technologies need government support, the government often peers at them from under the legislator's skirt. Like a small child, it understands that it must either act cleverly or sit under that skirt and wait until somebody clever comes up with something bold. Still, what do you do if robotics is developing too fast and displacing people? Workers in low-skilled roles (such as the lawyers who register private entrepreneurs and LLCs) can easily end up on the street. For simple work, it is more profitable for businesses to use error-free machines, for example legal bots that generate contracts. Very soon junior lawyers will not even have to run around government offices collecting certificates. The fear of losing one's job to robots is especially strong in developed countries. In the US, for instance, one haulage association proposed banning self-driving trucks until 2050, because otherwise many drivers risk losing their jobs. And there really is something to fear: British scientists have shown that robots will soon take about 30% of jobs in developed countries, including those of British scientists. Problems like these are for governments to solve. But their tools are always binary: either (a) impose liability for damage or (b) impose taxes. Binary, because the result is immediately visible: money lands in the treasury and can be used for something, say buying a few more warheads or ordering universities to churn out a few more lawyers. It is another question whether that "something", an order for more lawyers, or for a senseless or even a sensible war, will solve the unemployment problem. At best it will provide a temporary salary for the occupational-safety lecturers at the law faculty. As Oomph! sings: Tax is not enough! Tax is not enough! You just don't care enough of that. Think of something else instead. Liability So far, though, nobody has come up with anything better. On the contrary, a couple of countries have shuffled confidently toward the skirt, trying to channel these new and poorly understood relationships at the legislative level. Japan's Ministry of Economy, Trade and Industry has drafted safety rules for the next generation of robots that strongly resemble Isaac Asimov's first law ("A robot may not injure a human being or, through inaction, allow a human being to come to harm"). The draft rules do nothing to address unemployment, but they do deal with manufacturers' liability.
For example, they specify that a robot must have sensors so that it does not run a person over, as well as a button for immediate shutdown. In other words, in an era of rapid progress in robotics, Japan is for some reason still thinking only about people's natural rights rather than their social rights. That looks odd, given that Japan is one of the leaders of the robotics market (or perhaps, on reflection, not odd at all). Russia does not want to fall behind. Although its density of industrial robots per square meter is a million times lower than Japan's, last winter Dmitry Grishin, founder of Grishin Robotics, proposed a draft law on robots that is rather close to our own realities. Under the draft, for instance, the rules on the liability of the owner of a source of increased danger could be applied to the creators of robots. A couple of months later, members of the Russian parliament floated the idea of certifying robotics in order to protect people from potential harm. But we all understand that certification means putting the state between innovation and the market as an intermediary, which means topping up the budget and possibly slowing progress with bureaucracy. That, too, is a way of keeping comfortable, well-warmed jobs. Taxes While liability for harm is old news, only the boldest have decided to talk about taxing robots directly. One of the first to reach the press with this idea was Bill Gates. He believes that robots take work away from people, so the state should (no, dear law enforcers, not take work away from the robots, and not even confiscate the robots from businesses) take care of those who have lost their jobs. One option is a fund into which businesses that use robotics would pay a tax on that equipment. It is a pity that Bill Gates, unlike Grishin, did not propose a draft law; it would have been very interesting to read the legal machinery that could bring the idea to life. A minority in the European Parliament is thinking along somewhat similar lines. A political minority, not the one you were thinking of. In February 2017 the European Parliament adopted a resolution on robotics and artificial intelligence. The idea is not new: to regulate the relationship between humans and robotics. The Commission will now work on rules for the use of robots and may be the first body to actually implement Isaac Asimov's three laws of robotics. Every robot will have to be registered with the civil registry office. Just kidding: with the European Agency for Robotics and Artificial Intelligence. A pension fund for robots will also be created. Kidding again, sorry, we couldn't resist. What will actually be created is a fund to insure liability for harm caused by robots. "Something is missing," thought the authors of the draft, who had proposed a special robot tax but did not find their proposal in the adopted resolution. The champions of EU citizens' social rights reasoned that the robot tax should be paid by businesses whose robots replace people, on top of profit tax and other taxes. Such special taxes would then be directed to the social security needs of former employees. From the rich, so to speak, to the poor. A nice idea, probably borrowed from Gates. "But Gates never explained in his interview how to actually do it…", thought the disappointed European Parliament, and rejected the proposal. The complexity of a mechanism sometimes means it is impractical. It is not hard to guess why the idea did not fly.
It is not just the complexity of the mechanism (let's not even imagine how many amendments to tax legislation would be needed: what counts as a "robot", what kind of tax this is by its nature, what the criteria for taxable persons would be, what the object of taxation would be, what the rate would be and why, who the new executive bodies managing the fund would be, and so on). A tax of this kind, even if it made some low-skilled workers happy, would make the engines of progress unhappy, and they can flee to more accommodating countries. Entire state budgets could suffer. After all, who will buy EU products that have become more expensive because of the tax if the EU introduces it and other countries do not? Who will force countries with cheap labor, such as China, to adopt anything similar? Who will then want to develop technologies for which there is no demand? You might think nobody would pass a law on taxing robots in the next 20 years. "Who? The Ukrainian parliament, that's who," we thought after the legislators' latest creative work on startups and cryptocurrencies. And we set it out in a sketch. Read on only in complete silence, out loud and with feeling. This is our muse running headlong into legally tedious texts. The sketch Kyiv, December 29, 2017, Friday evening. Most citizens queuing in the supermarkets are in a festive mood. Christmas trees, champagne and doktorskaya sausage are selling fastest. Only on Hrushevskoho Street is work still in full swing. Mixed in with the Olivier salad, the state budget is, as always, being cooked up at the last minute. And along with it and the mayonnaise, a couple more amendments to the Tax Code, just by the way. One of the innovators, a Vacheron Constantin on each wrist, quietly pushes through an amendment on incentives for startups. Let business entities that use robots (or, in the legislator's favorite jargon, "devices or machines that manufacture goods or provide services without operator intervention") now be allowed to join the third group of the single tax without the 5-million limit. Saint Nicholas should tuck this little present under the pillow of every sole proprietor whose share of robot-produced goods and services exceeds 50%. Shall we adopt it? Adopted! Bread for every home! A startup for every coworking space! Robots for every factory! Long live innovation! 226 votes in favor; the Olivier salad is waiting. January 15. The country slowly eases into the new year. Slightly rumpled reporters file a story from outside Zhashkiv, where "US Robots and Mechanical Men, Inc." is building Ukraine's first robot assembly plant. "Well, finally," thought the delighted grilled-chicken vendors, and dragged their kiosks closer to the plant. The following summer. The first layoffs begin. That's what you get, dear employees, for taking two-hour grilled-chicken lunches. Robots start working in the factories instead of people. They work well and almost without mistakes (like a perfectionist employee just back from a two-week holiday in Turkey). Winter, a year later. In Kolomyia, Zhashkiv and Pyriatyn, robots assemble robots in enormous plants. From 1:00 to 2:00 p.m. they play backgammon and checkers in the factory yard and swear fluently in Java. There is nobody left to buy grilled chicken outside the plant; the vendors are in despair. Several more plants are under construction in other small but very picturesque Ukrainian towns. Mass layoffs and protests are in full swing across the country. Out of nowhere, a trade union movement is reborn around a person's right to kick a robot.
The amendment, written late at night within the empty walls of a state institution by a deputy's young assistant, worked brilliantly: Ukraine becomes a center for the development and deployment of robotics. And, at the same time, a leader in layoffs and in losses from the falling demand for grilled chicken. Everyone blames the President, the government and parliament. They, in turn, blame the State Fiscal Service: the criteria for "business entities that use robots" were misunderstood, they say, and the wrong regulations were cited in its information letter. The head of the Fiscal Service can barely keep himself from wrapping up in a blanket. They threaten to take the customs service away from him, to disband everyone and even to shut down the specialist newspaper. Employees, business representatives, civil society and religious organizations are discussing something very animatedly at the All-Ukrainian Tax Conference. The specialist newspaper "Taxes and the Collective Farm" is distributed in an enlarged print run to hospitals, schools and other public institutions. Children particularly like the article about Asimov's fifth law, on the taxation of robots. As the layoffs multiply, the article appeals to more than just children. A heated debate about taxing robots is under way at every level. There are plenty of proposals, the most colorful being: a 1.5% robo-levy on every working robot for every sole proprietor; reviving the vehicle registration offices (MREO) with an obligation to register robots and pay a fee based on the amount of RAM; licensing of robotics with an annual fee to maintain the license; and so on. Separately, it is proposed to create a GosRoboNadzor, a state robot supervision authority that would audit robots and monitor compliance with technological safety. On this difficult day for the country, one Fiscal Service employee, perhaps the most devout one, who had met his targets all his life and subscribed to the newspaper in a small district town somewhere in the provinces (or perhaps it was a tax police officer who knew how to prove Article 212 of the Criminal Code and how to shape taxpayers' legal conscience), prepared draft amendments to the legislation. The draft was remarkably simple and clear. Ukraine was already among the world's top three producers and adopters of robotics, and the income of the companies involved amounted to two 2017 budgets. What is 10% of profit to taxpayers otherwise exempt from taxation: mere kopecks. Why not transfer it to a special fund from which laid-off workers would be paid the minimum wage for two years after their dismissal? There it is, the pre-election law! 226 votes in favor. Floor amendments on registering robots with the MREO and subsequent annual inspections pass as well. Everyone breathes a sigh of relief. The exhausted head of the Fiscal Service goes on holiday. He will still have so much to do. A day later, the Tax Maidan (v2.0) begins. Curtain. And yet we hope that, for now, nobody is going to tax innovation. Taxing robotics is not only a serious economic and somewhat political step, but also a large and complex piece of legal work requiring many sleepless nights for lawyers in parliament, in legal consulting and in business. So for now, robot lawyer, fear not: no ESV for you! In the meantime, have a look at the first robots, built in the 1940s by Grey Walter. Perhaps this video would not exist if Walter had been Ukrainian and had faced liability for it under Article 212 of the Criminal Code of Ukraine. But no, that is not all. Frankly, nobody promised light, short reading. Here is what we did: we asked our fellow lawyers to read the text and reflect on it with their characteristic tax-law accent.
Taxing robots, illustrated with bread rolls A comment by Viki Forsiuk, head of the tax practice at Jurimex Can taxes be levied on robots? At first glance the idea is wonderful, entirely in the spirit of Prince Lemon. If you can tax rain, why not tax robots? But when we talk about taxing robots, we have to be clear about what we want: to tax the fact of their use, supposedly finding a source of compensation for aggrieved workers, or, on the contrary, to encourage their use and increase the state's revenue overall. The objects of direct taxation can be roughly divided into three groups: taxation of capital, of labor and of rent. With indirect taxes, it is consumption that is taxed. In addition, in our reality, when labor is taxed, the unified social contribution (ESV) is also paid (in other countries this is essentially the employer's contribution to insuring its workers' lives). When we turn to taxing robotized production, a crude example lets us model the situation. Take a factory that bakes bread rolls. Production employs, say, three bakers on a salary of UAH 4,000 each. The bakers are of course excellent specialists, but they can turn out only 100 rolls a day. The cost of one roll is, say, UAH 30. In a month, 21 working days * 100 = 2,100 rolls are produced. With a profit margin of 10%, the price of a roll excluding VAT is UAH 33, and UAH 39.60 including VAT. Assume all the rolls are sold. So in a month the factory can take in UAH 83,160. Out of that it pays UAH 13,860 in VAT and 6,300 * 18% = UAH 1,134 in profit tax (6,300 being the net profit subject to tax). It also pays UAH 12,000 in salaries, from which it withholds UAH 2,340 in taxes, and pays UAH 2,640 in ESV. In total the state receives UAH 19,974 from it. Now consider the situation where the bakers are replaced with an automated robotic system that raises output to 200 rolls a day. The figures then become the following. Cost of one roll: (63,000 - 12,000 (salaries) - 2,640 (ESV) + 1,000 (additional electricity costs) + 2,000 (equipment depreciation)) / 2,100 = 24.45. The selling price stays the same, i.e. UAH 39.60. Revenue: 39.6 * 200 * 30 (robots take no days off) = UAH 237,600. VAT = 39,600. Net profit: 237,600 - 39,600 - 146,700 (production cost) = UAH 51,300. Profit tax = UAH 9,234. Total taxes received by the state: 48,834. The calculations are crude, of course, because we assume demand absorbs the entire supply. Besides, the factory could in principle lower the price of its product. Still, the state can effectively collect considerably more tax, and with that money it could, in principle, either retrain workers or establish a guaranteed income (as, for example, the Scandinavian countries want to do). If, instead, the very fact of using robots is taxed, that tax may end up built into product prices, which will affect the sales of the goods concerned and, with a certain approach, may even theoretically raise prices (it all depends on the relative cost of a worker's labor and a robot's: you do not pay a robot a salary, but you do bear the cost of maintaining it, paying for energy, repairs and routine servicing).
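A minimal sketch that reproduces the back-of-the-envelope comparison above, using the rates implied by those figures (20% VAT, 18% profit tax, 22% ESV and 19.5% withholding on salaries, all amounts in UAH); it only illustrates the arithmetic and is not tax advice.

```python
# Reproduces the bakery comparison above (all amounts in UAH, deliberately simplified).
VAT_RATE, PROFIT_TAX, ESV_RATE, PAYROLL_TAX = 0.20, 0.18, 0.22, 0.195

def bakers_scenario(rolls_per_day=100, days=21, unit_cost=30.0, salaries=12_000):
    rolls = rolls_per_day * days                                  # 2,100 rolls a month
    price_ex_vat = unit_cost * 1.10                               # 10% margin -> 33.00
    vat = price_ex_vat * VAT_RATE * rolls                         # 13,860
    profit_tax = (price_ex_vat - unit_cost) * rolls * PROFIT_TAX  # 1,134
    withheld = salaries * PAYROLL_TAX                             # 2,340 withheld from pay
    esv = salaries * ESV_RATE                                     # 2,640
    return vat + profit_tax + withheld + esv                      # ~19,974

def robots_scenario(rolls_per_day=200, days=30, unit_cost=24.45):
    rolls = rolls_per_day * days                                  # robots take no days off
    price_ex_vat = 33.0                                           # selling price unchanged
    vat = price_ex_vat * VAT_RATE * rolls                         # 39,600
    net_profit = (price_ex_vat - unit_cost) * rolls               # ~51,300
    return vat + net_profit * PROFIT_TAX                          # ~48,834

print(round(bakers_scenario()), round(robots_scenario()))
```

Running it prints roughly 19,974 against 48,834, which is the commentator's point: under these assumptions the automated bakery pays the state more than twice as much even without any special robot tax.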
It is also worth bearing in mind that the idea "if you use robots, you can pay the single tax at a 5% rate with no turnover limit" is unacceptable. 1) Handouts like these usually lead to incredible abuse (anyone and everyone will start claiming to be a high-tech manufacturer). Moreover, the proposed model confuses the use or development of robots (i.e. fixed assets) with the development of software to automate processes (i.e. intangible assets). For developing robots it will in most cases be economically more advantageous for a company to be a VAT payer (since components will mostly come with VAT) and a profit tax payer (because of the significant volume of actual costs), whereas for software all you really need is intellect. 2) The whole world is currently moving from direct taxes to indirect ones (VAT), considering them more efficient and better accepted by taxpayers (with direct taxes your hard-earned money is visibly taken away, while with indirect taxes people do not always notice how much tax they pay in the price of goods or services; and if the price is unacceptable to you, you simply buy a cheaper substitute or give up the product altogether). Driving law enforcement into the corner of a soulless algorithm! Dear editors, writes @Oksana Kobzar of the Oksana Kobzar Law Office (Kharkiv). My colleagues and I read your article attentively and, we must admit, in professional circles the piece provoked a wave of indignation. Of course, we by no means condemn scientific and technological progress and the rationalization of production; we welcome them in every way. But listen to what you are saying. Do you really think a machine is capable of grasping the finest, invisible facets and immanent connections of civil-law relations in all their diversity? A lawyer is first and foremost a philosopher. Do you think the doctrines, those immensely complex concepts that the greatest minds have created and refined over the centuries since Roman law, are tangible to a robot? Is artificial intelligence capable of comprehending, say, Kant's categorical imperative, Hegel's triad, Plato's world of ideas? Law is the objective manifestation of the free spirit, Hegel wrote. And we want to lock it into binary code?! Forgive us, but we cannot call the intention to erase the achievements of our domestic legal science and our glorious legal school from history anything but sacrilege. It is astonishing how easily you bring your pseudo-scientific musings to the public and mislead the population! Citizens might conclude that if a bot can draft a statement of claim, then anyone can appear in court, even without an advocate's certificate!? Perhaps you should first explain to a computer program what the rule of law is!? Understand: by opening the profession to a bot, you are opening Pandora's box, and perhaps the gates of hell. Who will protect citizens from the machine's mistakes, which could be fatal, fateful for a person! The role of an apprentice the robot might manage. But artificial intelligence will never replace the elite of our professional community, for an algorithm is incapable of replacing Thought! And while you write your lampoons, the legislative and executive branches are taking effective, concrete, comprehensive measures to prevent the collapse of our legal system.
The Law on State Registration of Legal Entities and Individual Entrepreneurs, the Law on State Registration of Property Rights and Their Encumbrances, the Law on Foreign Economic Activity, the Law on the Procedure for Settlements in Foreign Currency, and the Tax and Commercial Codes are in effect the last bastions of our values and age-old traditions, created precisely so that a robot can never replace a human being. The enormous body of Ukrainian laws and regulations ensures diversity and discretion in their application, leaving room for creativity and flights of thought, for stepping outside templates and shifting rusty paradigms. Thanks to this, our judicial practice is so alive, dynamic, fresh and devoid of banality and predictability. Law is born anew before our eyes every day, at every court hearing, and the mystery of that birth is given to the young lawyer as an initiation into the profession and keeps the experienced one in it. All three branches of power in this country work in sync to protect and preserve the lawyer's workplace and the prestige of the legal profession. And you want to drive law enforcement into the corner of a soulless algorithm? The state and robotization. Scientific-sounding gibberish about technological unemployment The Lord Keeper of the Keys @Sergey Verlanov and @Yuriy Dmitrenko, Esq., declined to prepare their own reflection. A neural network wrote it for them instead. In the photo: the Lord Keeper and the Esquire. Photo by the neural network. The network warns: this text will suit only lovers of bullet points. There is a view that the problem of technological unemployment (in case you didn't know, that is the clever name for what we are discussing here) arose almost with the invention of the wheel. One can argue with that, but the phenomenon has certainly been known to humanity at least since the Industrial Revolution (the 18th century). The question of whether androids like electric sheep, sorry, whether technological unemployment has a long-term negative effect (that is, whether very many people will lose their jobs with no obvious chance of learning new ones, and whole professions will die out) remains a matter of debate. And that is the key to another question: "Should the state do anything at all?" (unless you are a libertarian, because if you are, the answer is obvious to you even without this key). If you are not a libertarian and you believe that technological unemployment caused by the rapid development of robotics and artificial intelligence will have long-term and irreversible consequences, you will start from the objectives you want to achieve (we presume your goal is to overcome the negative effect of technological unemployment). One possible objective is to slow down robotization. This can be pursued by economic means, for example by introducing a "robot tax" or granting preferences to production that uses human labor, and by administrative means, that is, by prohibiting or restricting robotization in one way or another. South Korea, a country hard to accuse of technological backwardness, may soon go down this path (it intends to scale back some of the tax incentives previously available to companies investing in production automation). Another possible objective is to fill the income gap for people deprived of work. For example, obliging an employer that replaces a person with a robot to pay that person some compensation. Or creating an insurance fund into which robotizing companies and/or the workers themselves make contributions, and from which those affected receive insurance payouts.
Nobody has gone down that path yet; it is not relevant for now. In short, any state measures to combat the negative effect of robotization will most likely boil down either to directly restricting robotization or to taking part of business's income and redistributing it to those affected (directly or through certain institutions). The effectiveness of these measures, today or even tomorrow, is rather doubtful and needs serious economic analysis (which robots will one day be able to carry out if people cannot cope). It will be a long while before any of this turns into the cozy fairy tale in which robotization is taxed and an unconditional income is paid to absolutely everyone. But in principle it might. Finally, if you are a libertarian, or you believe that technological unemployment will not have a long-term negative effect, then the state should do nothing: the market will sort everything out in the end. After all, nobody taxed or banned mechanical looms in 1771, and nothing terrible happened. Incidentally, the owner's liability for harm caused by robots has nothing to do with overcoming the negative effect of technological unemployment.
I, Robot (Private Entrepreneur, Group 3)
111
я-робот-флп-3-группа-11febc28cdf9
2018-09-02
2018-09-02 17:35:45
https://medium.com/s/story/я-робот-флп-3-группа-11febc28cdf9
false
3,491
New legal media
null
axon.partners
null
The Axonomist
lawyerpreneurship
LAW,LEGALTECH,PRIVACY,BUSINESS STRATEGY,BLOCKCHAIN
axon_partners
Taxes
taxes
Taxes
12,439
Oksana Kochkodan
null
86f36472924d
oksanakochkodan
108
81
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-29
2018-09-29 18:57:35
2018-09-29
2018-09-29 18:57:35
1
false
en
2018-10-02
2018-10-02 11:52:39
7
11fef5e2b829
2.064151
0
0
0
“I am in the camp that is concerned about super intelligence.” — Bill Gates
4
Tech Tuesday: Has Moore's Law Run Its Course? "I am in the camp that is concerned about super intelligence." — Bill Gates Hood ornament photo by the author. 2016 was a great year for A.I. Watson had already beaten the best of the best in Jeopardy, and a rival A.I. sibling defeated the world's best Go player. For artificial intelligence enthusiasts, everything was coming up roses. Siri was getting smarter and smart cars continued to prove their mettle. Meanwhile, unnoticed in the shadows of all these breakthrough events, was this gloomy announcement: Moore's Law R.I.P. Moore's Law is one of those things, like gravity, that has been taken as a matter of faith since it was conceived, or revealed. It's named after Gordon Moore, co-founder of Intel, who in 1965 observed that the number of transistors per square inch on an integrated circuit was doubling every year and would continue to do so. Ironically, as this legendary "law" became universal lore, it was modified to every two years. At that point right there, someone should have noticed that there would be limits on how long this doubling could go on. Technology's capabilities have had an amazing run, though, and it shows no signs of letting up. I doubt that anyone who worked on the ENIAC could ever have imagined the power capabilities of our smartphones today. My uncle, who had worked with the ENIAC, said that the room-sized machine was powered by vacuum tubes. After about five minutes of run time a tube would burn out and they would have to walk around trying to find the burned-out tube so they could replace it. (Read how vacuum tubes work here.) Even if Moore's Law has slowed, making predictions about the future hasn't let up one byte. The big buzz these past couple of years has whirled around predictions regarding the Internet of Things (IoT). If you think that having all the computers in the world wired is remarkable, what's coming is apparently going to dwarf this when we have all our devices, houses, transportation, manufacturing and agriculture connected. It's no wonder that some people are a bit fearful about the possible adversity that could be caused by a superintelligent computer that goes rogue. Some believe this could even happen in our lifetimes. For others there are more immediate issues we should be concerned about. Fortunately, 95% of what we worry about doesn't happen, so try not to lose too much sleep. On a lighter note, here's a link to an NPR story dealing with computers and creativity. Can computers write good poetry? Can they write so well that you can't tell who or what wrote it? It's a six-poem quiz. Read each and guess whether it was written by a human or a machine. I got all six correct. Can you? Related Story: Lance Ulanoff's Did Google Duplex Just Pass the Turing Test? Meantime, life goes on all around you. Enjoy it while you can. Originally published at pioneerproductions.blogspot.com
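To make the doubling rule concrete, here is a tiny sketch of the projection Moore's observation implies; the 1971 starting count of roughly 2,000 transistors is an illustrative assumption, not a figure from the article.

```python
# Projecting transistor counts under an idealized Moore's Law doubling rule.
def moores_law(start_count, start_year, year, period_years=2):
    """Transistor count if the count doubles every `period_years`."""
    doublings = (year - start_year) / period_years
    return start_count * 2 ** doublings

# Rough illustration: assume ~2,000 transistors per chip around 1971.
for year in (1971, 1991, 2011, 2016):
    print(year, f"{moores_law(2_000, 1971, year):,.0f}")
```

Curves like this are exactly why the doubling could not continue indefinitely: each step assumes the same relative shrink, and physics eventually pushes back.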
Tech Tuesday: Has Moore’s Law Run Its Course?
0
tech-tuesday-has-moores-law-run-its-course-11fef5e2b829
2018-10-02
2018-10-02 13:35:16
https://medium.com/s/story/tech-tuesday-has-moores-law-run-its-course-11fef5e2b829
false
494
null
null
null
null
null
null
null
null
null
Technology
technology
Technology
166,125
Ed Newman
Retired ad man, Ed Newman is an avid reader who writes about arts, culture, literature and all things Dylan. @ennyman3 https://pioneerproductions.blogspot.com/
ddd8c63788ce
ennyman
187
268
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-09-14
2017-09-14 19:30:12
2017-09-14
2017-09-14 19:30:59
0
false
en
2017-09-14
2017-09-14 19:30:59
0
1200070607f1
1.935849
0
0
0
It’s said that humankind owes our way of life to agriculture. That is to say the practice of growing our own sustenance and deviating from…
5
The Rise, Fall, and Revival of Home Gardening in the United States It's said that humankind owes our way of life to agriculture. That is to say, the practice of growing our own sustenance and deviating from hunting and gathering allowed us to form roots, and from those roots grew societies which paved the way for the industry that brought us to the stage of development we currently inhabit. At least 12,000 years ago our ancestors began to turn away from the struggle of moving from place to place in search of food and instead began cultivating it and establishing a sense of permanence. From those moments in human history, humanity began to settle the world and, in time, the American continent was colonized. By 8,000 BC, the first potatoes were being harvested from American soil, and corn by 2,700 BC. Nearly 4,400 years later, Europeans began to colonize what we now call the United States. As there were no corner stores or markets (the first sprouting up in the mid-1800s), colonists grew their own produce in small doorway gardens. As village markets became possible, so too did ornamental gardens meant for leisure and aesthetics rather than survival. Eventually, gardens moved to the back and side yards of homes and doorway gardens were replaced with lawns; industry grew in the 1900s, and the increase in manufacturing jobs was accompanied by a decline in interest in home gardening. That is, until the 1940s. WWII's strain on US (and the rest of the world's) resources gave birth to a new movement promoted by the United States government: the Victory Garden. By 1943, 20 million Victory Gardens grew around 40% of US produce. Even the White House had a Victory Garden under the order of Franklin D. Roosevelt himself. The movement declined in popularity after the war ended, and interest waned drastically over the following decades. In 1970, Senator Gaylord Nelson of Wisconsin promoted Earth Day, celebrated on April 22 each year, a tradition continued to this day. While the annual event began as a grassroots movement meant to draw public support for the creation of the Environmental Protection Agency (EPA), it also contributed to the passage of several environmental laws. Thanks to Earth Day's work to make the world more environmentally conscious, interest in growing produce was renewed and "edible landscaping" grew in popularity. This growth continued into the 1990s, which saw some of the fastest urban population growth in recorded US history. Innovation in small-space gardening allowed the rise of edible gardens to continue, and as of the early 2000s the concept has come back to the forefront. In 2009 the Obamas brought the first vegetable garden to the White House since WWII and, as of 2013, a third of American households were growing their own produce. Farmers' markets have seen a rebirth in towns all over the United States and urban gardening is growing in popularity and development. With increasing urgency, households across the United States are once again turning their attention toward a practical and sustainable approach to food access.
The Rise, Fall, and Revival of Home Gardening in the United States
0
the-rise-fall-and-revival-of-home-gardening-in-the-united-states-1200070607f1
2018-04-10
2018-04-10 12:38:30
https://medium.com/s/story/the-rise-fall-and-revival-of-home-gardening-in-the-united-states-1200070607f1
false
513
null
null
null
null
null
null
null
null
null
Agriculture
agriculture
Agriculture
12,051
Yarden
www.yardengarden.com
8b67afc4add0
yardenmygarden
7
46
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-09-07
2018-09-07 13:37:29
2018-09-07
2018-09-07 13:43:30
1
false
en
2018-09-07
2018-09-07 13:43:30
9
120041406f1
4.818868
7
0
0
Written by Drew J. Lipman, PhD, Lead Data Scientist at Hypergiant
5
Is Neural Network Hype Killing Machine Learning? Written by Drew J. Lipman, PhD, Lead Data Scientist at Hypergiant In the fall semester of 2016, I was attending a seminar series taught by Dr. Sather-Wagstaff about the applications of algebraic topology, in particular homologies. Homology is a way to connect two sequences of objects, normally from diverse branches of mathematics, to each other in a meaningful way. Ideally, with these types of processes, you gain information about each sequence by studying the other. Throughout that semester, we learned several abstract concepts and ideas that have proven useful when approaching problems. Consider, for example, the difficult and relevant issue of determining the number of separate geometric regions defined by a problem (and if there are any holes), and turning it into a computational issue — namely, computing Betti numbers. What does this have to do with machine learning, exactly? Ask yourself: Is taking an abstract problem and converting it into a numeric computation that can be solved not one of the fundamental problems of machine intelligence? The goal is not, has not been, and should not be to build larger or more complex neural networks. The goal of machine learning has always been to make abstract problems understandable to machines. That is, to compute answers to problems. Data scientists and industrial AI developers seem to get hung up on using neural networks, or Software 2.0 as some call it, as the only acceptable technique for a problem. Consequently, the news — and, therefore, public opinion — lacks information about more fundamental techniques for converting the abstract into the computational and is filled with information about how neural networks have achieved this result, instead. That being said, neural networks do offer a wide range of applications; when trained on a large, properly-curated dataset with enough computational power and time, neural networks have produced results that match or exceed both human benchmarks and the performance of any other machine learning algorithm. This is why they represent such a rich area of research and development. The Faults at Hand What, then, are some fundamental problems with neural networks? While there are several, the ones we will discuss, in particular, are transparency of the resulting function, failure analysis for when things go wrong, and the lack of understanding in the community. What do we mean by transparency, you may ask? Consider the following toy example: An insurance company hires a data scientist to help them streamline their process of identifying people eligible for coverage. After an appropriate amount of time analyzing the data, environmental factors, health metrics, and employment trends — not to mention, a lot of Natural Language Processing of news articles — the data scientist builds a neural network that takes applicant data as input and outputs details such as the expected return value and insurance duration. That may seem reasonable, but under a transparency law similar to what the EU proposed, rejected applicants have the right to ask how their data was processed and used in insurance risk assessments. A neural network does not really offer the kind of transparency required, especially in the insurance business where it is immoral, and illegal, to deny an individual coverage based on factors such as ethnicity. So, for example, how does a company defend itself against the allegation that its protocol denies people insurance based on ethnicity?
How do you prove that the neural network is not, in fact, doing that? Neural networks are very good at correlating data, and oftentimes make correlations that are not there or (worse) are illegal. Sure, you can simply remove that data from the training set, but information such as hometown is important and does provide a large amount of statistical evidence regarding sensitive characteristics such as ethnicity and religion. In short, even if you do not provide the training set with the kind of data that might produce illegal correlations, the neural network might make those connections anyway. Now, consider autonomous vehicles. This is the great, long-hoped-for promise of computer vision. If nothing else, easing the tribulations of long-distance driving makes these worthwhile. Inevitably, however, any accident brings about the question of whether the driver, the vehicle’s mechanics, or the AI is responsible for the accident. Who’s to be sued for damages? Knowing why an artificial intelligence failed at a task is often just as important as knowing when it failed. There is an example, often used, in which a neural network image classifier labeled African Americans as gorillas. We know that the neural network failed, but how did it fail and what can Google do to fix the problem? Is it truly a cultural diversity problem as suggested in the article? Or is it an artifact of the training set reflecting population dynamics and the fact that African Americans only appear in 16% of human photos? Could it have been due to lighting conditions confusing the classifier or some other reason? Given the susceptibility of neural networks to malicious real-world attacks and fault injection, determining the cause of any failure is very important. The issue with neural networks is that, many times, we just do not know what caused an incorrect classification. Some Much-Needed Clarification Ask yourself, what are neural networks? If you answered “machine learning models based on the behavior of neurons,” then congratulations, you have bought into the hype. Try this: Ask your friendly, neighborhood neural biologist what artificial neural networks have to do with biological neural networks. One out of three times, you will be laughed out of the office, argued with, or given a look of disgust. People believe that biological neural nets are similar to artificial ones — that they actually inspired them — when they are not. A neuron (artificial or organic) receives inputs, often of varying strength, from other neurons and does something as a result. That is the extent of the similarities between the two. Basically, artificial neural nets and biological neural nets have about as much in common as I have with Charlize Theron. In short, one is an example of beauty and efficiency, while the other is an amusing facsimile created late one night in a basement. This is not to say that there is no value in using biology to inspire programs, but (as is the case with most biologically-inspired algorithms) the similarities are superficial at best, and more about having a cool-sounding name to get interest — and, subsequently, funding and citations — for your research. Where Does That Leave Us? We do not understand how neural networks make decisions, and cannot explain it to others if needed; fixing neural networks when they go wrong is such a monumental task that Google often just begins again; and the public generally does not understand what they are or how they are designed.
All of that being said, when a problem comes up and a so-called “intelligent” solution is called for, what do most want? A neural network! When do they want it? Right now! There seems to be no real interest in pursuing those classical methods that often provide solutions of equal or superior quality for the datasets provided. This raises the question: Is this hype killing the development of other machine learning techniques that provide equivalent (or greater) value?
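As a concrete illustration of the point above that an artificial neuron merely combines weighted inputs and applies a simple rule, here is a minimal sketch. The weights, bias and sigmoid activation are arbitrary illustrative choices, not taken from the article.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """All an artificial neuron does: a weighted sum of its inputs plus a bias,
    squashed through a simple activation function (here a logistic sigmoid).
    This is the full extent of the 'biological' analogy discussed above."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))   # output strength passed on to the next layer

# Illustrative numbers only: three input signals of varying strength.
print(artificial_neuron([0.2, 0.9, -0.4], weights=[1.5, -0.8, 0.3], bias=0.1))
```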
Is Neural Network Hype Killing Machine Learning?
57
is-neural-network-hype-killing-machine-learning-120041406f1
2018-09-07
2018-09-07 13:43:30
https://medium.com/s/story/is-neural-network-hype-killing-machine-learning-120041406f1
false
1,224
null
null
null
null
null
null
null
null
null
Machine Learning
machine-learning
Machine Learning
51,320
Hypergiant
Where companies speed beyond norms and realize an exploded potential. Tomorrowing Today™. https://hypergiant.com/
601d9e7f0ae
hypergiant
91
1
20,181,104
null
null
null
null
null
null
0
null
0
null
2017-10-07
2017-10-07 13:50:38
2017-10-12
2017-10-12 08:59:48
16
false
en
2017-10-13
2017-10-13 11:50:57
0
120246b1fae8
9.342453
16
2
0
Nowadays it’s easy to forget that data science is not all about machine/deep learning. While AI is awesome, data science is, for the most part, a…
3
Messi vs Ronaldo (vs the world), data science edition Nowadays it’s easy to forget that data science is not all about machine/deep learning. While AI is awesome, data science is, for the most part, a practice that exists to better understand real phenomena. Besides being a data scientist, I am also a sports fan. One thing that drives me crazy is the misuse of data and statistics in sports. Very often you see conclusions drawn from irrelevant facts, and players or teams compared on very weak statistics. For a while now, I have wanted to create a measure for comparing goals in a soccer match. Simply counting who has the most goals is just plain wrong. A goal scored in the 90th minute when the scoreboard shows 1–1 is by far superior to a goal scored in the same minute when leading 4–0. I have put a lot of time and effort into coming up with a way to measure the significance of a goal, and finally established what I call Relative Goal Value v1.0 (referred to from now on as RGV1). The elements RGV1 takes into consideration are: 1. The time the goal was scored 2. The team the goal was scored against 3. Home / away goal 4. The current score of the game I have chosen not to treat penalties differently. In this post, I’ll explain the RGV1 scoring system and use it to compare Lionel Messi to Cristiano Ronaldo and the top 50 scorers (by RGV1) in the 5 major leagues. RGV1 Scoring System (TL;DR) Before we use RGV1 to compare players’ goal scoring, let’s understand what it is about. This is the TL;DR version: assuming most people reading this will not want to go into the equations, this part explains the essence of the scoring system; at the end of the post you can find the full equations. **Disclaimer: While RGV1 is proportional to the points won for the team, it has nothing to do with it directly. RGV1 DOES NOT measure how many points a player won for the team but rather calculates a more sophisticated value of a goal. The scoring is built in the following manner, the most important and most complex element being the game state value. The game state value differs in range depending on the current score and the time left to play. When the game is tied, the value of a goal rises exponentially from 1 to 3, according to the minute of the game. When leading, the value of a goal drops exponentially as time advances, and the range depends on how much the team is leading by. When trailing, the value behaves as it does when leading, but on a smaller scale. The logic behind the game state value is that: — A goal scored on a tie > a goal scored when behind > a goal scored when leading — On a tie, the later the goal the higher the value (a goal scored on a tie in the 20th minute is worth less than a goal scored on a tie in the 90th minute) — When leading, increasing the lead earlier is better — When trailing, decreasing the opponent’s lead earlier is better. Before deciding on these 4 points and their weight relative to one another, I consulted with many friends, some of them field experts, in order to be as accurate as possible. Below is a plot of the game state value. Then, the game state value is multiplied by the team quality multiplier, which ranges from ~0.68 to 1, depending on the standing of the opposing team at the end of the season (a measure of team quality). Finally, this is multiplied by 1 or 0.9, depending on whether it was an away or home goal. A perfect score of 3 will be achieved by scoring a winning goal in the 90th minute of an away game against the team that finished the season in the top spot.
The lowest score possible will be achieved by scoring a goal when leading by 3+ in the 90th minute against the team that finished the season last. Before we go on to the comparison, some examples of scores: 1. In La Liga, in the 2016–2017 season, the goal with the highest score is Lionel Messi’s goal at the Bernabeu, when the game was tied 2–2 in the 92nd minute (a perfect score of 3) 2. In La Liga, in the 2016–2017 season, the goal with the lowest score is Tiago’s goal for Atletico Madrid against Granada at home when leading 6–1, in the 87th minute (a score of 0.231) Examining 2009–2016 in La Liga, below are the distributions of all the RGV1 scores for all players. Messi vs Ronaldo Now let’s get to the interesting part. A lot has been said about these two, and while in other areas of the game it is quite clear who is better, their goal scoring is constantly compared. The data we’ll be comparing covers only La Liga goals, from the year 2009 (when Ronaldo arrived at Real Madrid). First, let’s see what their overall RGV1 distributions look like. Well, not so surprising… In numbers, these plots are (Messi / Ronaldo):
Mean: 0.950 / 0.943 (higher is better)
Standard deviation: 0.547 / 0.485
25th percentile: 0.461 / 0.578
50th percentile: 0.854 / 0.861
75th percentile: 1.232 / 1.246
Minimum: 0.226 / 0.233
Maximum: 3.000 / 2.855
Looking at Ronaldo’s and Messi’s most important goals (maximum RGV1), interestingly, both happened in April, one year apart. Messi: the winning goal in the 92nd minute at the Bernabeu, when the game was tied 2–2 against Real Madrid, which won the league title that season. Ronaldo: the winning goal in the 85th minute at the Camp Nou, when the game was tied 1–1 against Barcelona, which won the league title that season. Moving forward, let’s see what their overall contribution was, meaning the sum of all RGV1 from 2009 to 2016. Messi has scored a total of 271.629 RGV1 and Ronaldo a total of 260.228, Messi in 266 appearances and Ronaldo in 254, making Messi’s average RGV1 per appearance 1.021 and Ronaldo’s 1.024. Let’s now look at the RGV1 per season, starting with the total RGV1 per season. Interesting to see in the graph is that the yearly lead splits evenly between them, each one taking the top spot for 4 seasons. Now, it is tempting to look at the average RGV1 per season. But the truth is that this is a bad metric, since if the two had scored exactly the same goals, but one of them scored an extra goal with a low value, he would have a worse average even though he performed better. Instead we look at a ‘fixed average’, which is the total RGV1 divided by the average goal count of the two in that season. Here also we can see that the lead changes are evenly split; Ronaldo displays better stability throughout the years, while Messi’s peak performance outperforms Ronaldo’s. Since the most critical aspect of RGV1 scoring is the game state value, let’s see how the goals are distributed between different game states and minutes for each player. First, by scoreboard status: it is simply amazing to see that, across 8 seasons, Messi and Cristiano have an equal number of goals scored when 1 behind and when the game is tied. Notice that they both score more when the game is tied than in any other score situation, which says a lot about their contribution to their teams at the most important point of the game. Now let’s look at how their goals are distributed across minutes: here we can see that Ronaldo’s distribution is quite uniform, while Messi prefers the second half.
I have to say that when I started this project, I knew they were both phenomenal goal scorers, but I had hoped to see one of them stand out. As the data tells us, there is not much difference between the two, and the mystery of who is the better goal scorer is left unsolved… But how do they stack up against the rest of the goal scorers? Messi and Ronaldo Against the World Without further ado, let’s look at the totals of the top 50 RGV1-ranked scorers in the period from 2009–2010 to 2016–2017. Notice that apart from Messi and Ronaldo, there are only pure strikers in the top 15. It is pretty clear that these two stand out from the crowd, as Ibrahimovic, who is the closest, has a total of 182.788, which is ~78 RGV1 points behind Ronaldo and ~90 behind Messi. Also in this plot it can be seen that counting goals and counting RGV1 are two different things. For example, Lewandowski has scored many more goals than Di Natale, while Di Natale has created more value for his club. Another great thing to see is that Ibrahimovic, Higuain and Cavani have scored lots of goals and also yielded great RGV1, showing their great significance for their clubs. You be the judge, but I believe RGV1 reflects a player’s value for his club better than goal counts do. Let’s see how the top 10 have performed throughout the years: we could be missing players that had single great years in this plot, since the graph above shows the top 10 over all the seasons. Attached below is a graph for each season separately, plotting Messi and Ronaldo against the top 25 RGV1 scorers of that season. Judging by the graphs, it is simply amazing what phenomenal scorers Messi and Ronaldo truly are and how consistent their dominance has been. Lots of scorers have emerged in those 8 years, but none have managed to reach Messi’s and Ronaldo’s peak performance nor sustain their performance for such a long period. To wrap it up, I included below a table for each season with the top 10 players of that season. Before going to the data itself, I’ve added the number of times players have appeared in the top 5 of a season: Messi: 7 Ronaldo: 6 Ibrahimovic: 4 Milito, Lewandowski, Suarez, Cavani, van Persie: 2 (Per-season top-10 tables: 2009–2010, 2010–2011, 2011–2012, 2012–2013, 2013–2014, 2014–2015, 2015–2016, 2016–2017.) RGV1 Scoring System If you got this far in the reading, I salute you. This part is dedicated to the equations of the RGV1 scoring system. Let’s remind ourselves what RGV1 is made of: RGV1 = GameStateValue × TeamQualityMultiplier × HomeOrAwayGoalMultiplier. The TeamQualityMultiplier ranges from ~0.68 to 1 and is calculated from the final league table (here, a 20-team league), where s is a linearly decreasing value between 1 and 0 depending on how many teams are in the league: the team that finished first gets 1 and the team that finished last gets 0. Next is the HomeOrAwayGoalMultiplier, which is set to 0.9 for home games and 1 for away games. Last but certainly not least is the game state value. The game state value acts differently when the game is tied and when one team is ahead. In the tied situation, m increases linearly from 0 to log(3) depending on the minute the goal was scored, where the 1st minute is 0 and the last is log(3). When one team is ahead, m increases linearly from 0 to 1, where the 1st minute is 0 and the last is 1.
The other variable, diff, is set to a fixed value depending on whether the goal-scoring team is leading or behind:
- Behind by 1 -> diff = log(3)
- Leading by 1 -> diff = 0.85
- Behind by 2 -> diff = 0.6
- Behind by 3 or leading by 2 -> diff = 0.3
- Leading by 3 -> diff = 0.15
- Behind by more than 3 or leading by more than 3 -> diff = 0
Some domain knowledge has been put into these equations, as you can tell. Last Words I hope you enjoyed this read, and I also hope that with time better statistics and measurements will enter the world of soccer. I would love to keep exploring soccer data, but unfortunately gold-standard data like Opta’s is very hard to get or very expensive. With such data (like Opta’s), amazing things can be done, especially today amid the explosion of data science and AI. Today, most clubs employ data analysts, but the distance between an analyst and a scientist is what can make all the difference, which makes this fact rather sad. I wonder what would happen if all those clubs had full-time data scientists…
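The structure just described (game state value times team quality multiplier times home/away multiplier, with the diff table above) can be sketched in a few lines. The exact curves are only shown as plots in the original post, so the exponential interpolation for the tied case, the decay shape for the non-tied case, and the linear mapping of league position onto the ~0.68 to 1 range are assumptions; the sketch captures the overall behaviour rather than reproducing every published score.

```python
import math

def team_quality_multiplier(opponent_rank, n_teams=20):
    """Scale opponent strength into the ~0.68..1 range described in the post.
    s is 1 for the team that finished first and 0 for the last team; the exact
    mapping used by the author is not shown, so a linear blend is assumed."""
    s = (n_teams - opponent_rank) / (n_teams - 1)   # 1.0 for 1st place, 0.0 for last
    return 0.68 + 0.32 * s                          # assumed linear interpolation

def game_state_value(minute, goal_diff_before, total_minutes=90):
    """Value of a goal given the scoreboard before the goal and the minute.
    Tied case: assumed to be exp(m) with m going linearly from 0 to log(3),
    which reproduces the 1-to-3 exponential rise described in the post.
    Non-tied case: reuses the published diff table; the decay curve itself
    is an assumption and will not match the post's exact numbers."""
    minute = min(minute, total_minutes)             # clamp stoppage-time goals
    if goal_diff_before == 0:                       # game is tied
        m = (minute - 1) / (total_minutes - 1) * math.log(3)
        return math.exp(m)
    d = abs(goal_diff_before)
    if goal_diff_before < 0:                        # scorer's team is behind
        diff = {1: math.log(3), 2: 0.6, 3: 0.3}.get(d, 0.0)
    else:                                           # scorer's team is leading
        diff = {1: 0.85, 2: 0.3, 3: 0.15}.get(d, 0.0)
    m = (minute - 1) / (total_minutes - 1)          # 0 in the 1st minute, 1 in the last
    return math.exp(diff * (1 - m))                 # assumed shape: decays as time passes

def rgv1(minute, goal_diff_before, opponent_rank, away, n_teams=20):
    home_away = 1.0 if away else 0.9                # away goals worth slightly more
    return (game_state_value(minute, goal_diff_before)
            * team_quality_multiplier(opponent_rank, n_teams) * home_away)

# Messi's 92nd-minute winner at the Bernabeu: tied game, away, vs the eventual champions.
# Evaluates to 3.0 under these assumptions, matching the perfect score quoted in the post.
print(round(rgv1(minute=92, goal_diff_before=0, opponent_rank=1, away=True), 3))
```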
Messi vs Ronaldo (vs the world), data science edition
115
messi-vs-ronaldo-data-science-edition-120246b1fae8
2018-05-10
2018-05-10 21:00:21
https://medium.com/s/story/messi-vs-ronaldo-data-science-edition-120246b1fae8
false
2,065
null
null
null
null
null
null
null
null
null
Sports
sports
Sports
129,960
Elior Cohen
Data scientist, Pythonista
286fb8aea313
eliorcohen
736
10
20,181,104
null
null
null
null
null
null
0
null
0
5f1816abe091
2017-12-01
2017-12-01 12:29:37
2017-12-03
2017-12-03 12:06:00
1
false
it
2017-12-03
2017-12-03 12:06:00
6
120495382650
3.603774
2
0
0
Two unsupervised learning methods have managed to create a bilingual dictionary without any human intervention
5
Artificial intelligence becomes bilingual, and without a dictionary Two unsupervised learning methods have managed to create a bilingual dictionary without any human intervention Computers may soon be able to translate between many more languages. Automatic language translation has come a long way, thanks to neural networks — computer algorithms that take inspiration from the human brain. But training such networks requires an enormous amount of data: millions of sentence-by-sentence translations to demonstrate how a human would perform the same task. Now, two new papers show that neural networks can learn to translate without parallel texts, a surprising development that could make documents written in different languages more accessible. “Imagine you give a person lots of Chinese books and lots of Arabic books — none of which overlap — and that person has to learn to translate Chinese into Arabic. That seems impossible, right?” says the first author of one of the two studies, Mikel Artetxe, a computer scientist at the University of the Basque Country (UPV) in San Sebastián, Spain. “But we show that a computer can do it.” Most machine learning, in which neural networks and other computational algorithms learn from experience, is “supervised”: a computer makes a guess, receives the right answer, and adjusts its process accordingly. This works well when teaching a computer to translate between English and French, because many documents exist in both languages. It does not work so well for rare languages, or for widely spoken ones without many parallel texts. The two new research papers, both submitted to next year’s International Conference on Learning Representations but not yet peer reviewed, focus on another method: unsupervised machine learning. To start, each builds a bilingual dictionary without the help of a human teacher telling it when it is right. This is possible because languages have strong similarities in the ways words cluster around one another. The words for table and chair, for example, are often used together in all languages. So if a computer maps these co-occurrences like a giant road atlas, with words as cities, the maps for different languages will resemble one another, just with different names. The computer can then figure out the best way to overlay one atlas on another. Voilà! You have a bilingual dictionary. The two new papers, which use remarkably similar methods, can also translate at the sentence level. They both use two training strategies, called back-translation and denoising. In back-translation, a sentence in one language is roughly translated into the other, then translated back into the original language. If the back-translated sentence is not identical to the original, the neural networks are adjusted so that next time they come closer. Denoising is similar to back-translation, but instead of going from one language to the other and back, it adds noise to a sentence (by reordering or removing words) and tries to translate it back into the original. Together, these methods teach the networks the deeper structure of language. There are slight differences between the techniques. The system devised by UPV translates more frequently during training.
The other system, created by Paris-based Facebook computer scientist Guillaume Lample and his collaborators, adds an extra step during translation. Both systems encode a sentence from one language into a more abstract representation before decoding it into the other language, but the Facebook system verifies that the intermediate “language” is truly abstract. Artetxe and Lample both say they could improve their results by applying the techniques described by the other. In the only directly comparable result between the two papers — translation between English and French of text drawn from the same set of about 30 million sentences — both achieved a bilingual evaluation understudy (BLEU) score, used to measure the accuracy of translations, of about 15 in both directions. That is not as high as Google Translate, a supervised method that scores around 40, or humans, who can score above 50, but it is better than word-for-word translation. The authors say the systems could easily be improved by becoming semi-supervised — that is, by adding a few thousand parallel sentences to their training. Beyond translating between languages without many parallel texts, both Artetxe and Lample say their systems could help with common pairings such as English and French when the parallel texts are all of the same kind, such as newspaper reporting, but one wants to translate into a new register such as street slang or medical jargon. “But we are just at the beginning,” cautions Artetxe’s co-author Eneko Agirre. “We have just opened a new research avenue, so we don’t know where it will lead.” “It’s shocking that a computer could learn to translate even without human supervision,” says Di He, a computer scientist at Microsoft in Beijing whose work influenced both papers. Artetxe says that the fact that his method and Lample’s, uploaded to arXiv one day apart, are so similar is surprising. But at the same time, it is great. “It means the approach is really going in the right direction.” Translated into Italian from the original article in Science. VISIONARI is a network of entrepreneurs, scientists, artists, writers and changemakers who think and act outside the box. You can apply to join here: https://bit.ly/visionari-entra Follow our Facebook page to discover new innovative projects: Visionari
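The denoising strategy described above (corrupt a sentence, then train the network to reconstruct the original) can be illustrated with a small sketch of the corruption step. The word-drop probability and shuffle window below are illustrative assumptions, not the exact settings used by the UPV or Facebook systems.

```python
import random

def add_noise(tokens, drop_prob=0.1, shuffle_window=3, seed=None):
    """Corrupt a sentence the way a denoising objective does: drop some words
    and lightly shuffle the rest. The corrupted sentence becomes the encoder
    input; the original sentence is the reconstruction target."""
    rng = random.Random(seed)
    # 1) randomly drop words (keep at least one token)
    kept = [t for t in tokens if rng.random() > drop_prob] or tokens[:1]
    # 2) local shuffle: each word may move only a few positions from where it was
    keys = [i + rng.uniform(0, shuffle_window) for i in range(len(kept))]
    return [t for _, t in sorted(zip(keys, kept))]

sentence = "the cat sat on the mat".split()
corrupted = add_noise(sentence, seed=0)
print(corrupted)   # e.g. a reordered sentence with a word or two missing
# Training pair for the denoising objective:
#   input  = corrupted sentence
#   target = original sentence
```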
Artificial intelligence becomes bilingual, and without a dictionary
5
lintelligenza-artificiale-diventa-bilingue-e-senza-dizionario-120495382650
2017-12-11
2017-12-11 21:05:30
https://medium.com/s/story/lintelligenza-artificiale-diventa-bilingue-e-senza-dizionario-120495382650
false
902
Pensare e agire fuori dagli schemi
null
VISIONARIORG
null
VISIONARI | Scienza e tecnologia al servizio delle persone
visionari
TECNOLOGIA,FUTURO,SCIENZA,VISIONARI
federicopistono
Rete Neurale
rete-neurale
Rete Neurale
1
ad astra
Per diventare socio, partecipare ai nostri eventi e attività, o fare una donazione visita: https://visionari.org
71565016dbab
VISIONARI
229
2
20,181,104
null
null
null
null
null
null
0
null
0
null
2018-05-05
2018-05-05 07:39:33
2018-05-18
2018-05-18 13:40:25
5
false
en
2018-05-18
2018-05-18 13:40:25
10
1204d470ad80
4.784277
1
0
0
2018 might well become the year of huge leaps in predictive analytics, AI, edge computing, data storage, hybrid cloud and much more… Learn…
5
Cutting-edge IT trends for 2018: Cloud, Big Data, AI, ML and IoT 2018 might well become the year of huge leaps in predictive analytics, AI, edge computing, data storage, hybrid cloud and much more… Learn about the hottest IT trends for 2018! 2017 was the year the IT industry matured and expanded beyond any previous limits, as cloud service providers like AWS introduced the next generation of cloud instances, data storage, and processing solutions, more potent and cost-efficient than before. While some Big Data tools and ML technologies became obsolete and had to be dropped, new, more feature-rich and productive tooling (like Spark) has taken their place. The term Big Data itself is somewhat redundant by now, as we know the data in question is huge by default; this is why many IT experts and credible sources largely use just “data” instead. As using data analytics efficiently is crucial for successful business decision-making, a whole set of IT industry branches is centered around high-velocity data aggregation, inexpensive data storage, high-speed data processing and high-accuracy data analytics. Let’s take a closer look at the cutting-edge tech trends for businesses in 2018. IoT for high-velocity data aggregation Data lakes used for Big Data analytics have multiple inlets, like social media, internal data flows from CRM/ERP systems, accounting platforms, etc. Nonetheless, when we add IoT sensors to the mix, the complexity grows tenfold. In order to extract valuable insights from this data, it must be aggregated and processed quickly, and the amount of data ingested should be kept at the lowest appropriate level. For example, when collecting Industry 4.0 data from fully automated factories, there might be hundreds of temperature sensors scattered across the facility, all transmitting the same temperature data in normal operation. In this case, it is logical to cut off 99% of the data and report only that the temperature was nominal. Only if (or when) a temperature spike happens should the edge-computing system react, locate the sensor that raised the alert, analyze the situation and act appropriately. As another example, let’s assume we have a wind power plant with multiple wind turbines rotating under the control of an edge computing system. If a wind blast brings small gravel that can damage the rotor bearings, the first turbine hit reports the impact, the system identifies the threat and responds by ordering the rest of the turbines to rotate their fans into a direction that avoids a collision. Such high-velocity data aggregation and analysis with edge computing systems is already available on many cloud platforms, and will expand further in 2018. Inexpensive data storage in the cloud Cloud storage is a must for data lakes, as only by leveraging cloud computing resources can the full potential of business intelligence and Big Data analytics systems be unleashed. However, when the data in question is huge, so are the storage expenses. While many cloud service providers like AWS or GCP work hard on minimizing data storage expenses, they still remain substantial. Data security also raises certain concerns, as multiple departments gain access to the cloud and strict security protocols should be applied to ensure the safety of data at work.
See also: Demystified: 6 Myths of Cloud Computing One possible solution is going for a hybrid cloud strategy and combining the granular access rights provided by on-prem infrastructure with the immense and easily scalable computational power of the public cloud. The other approach lies in using blockchain-powered cloud storage options, as some pilots have shown a 90% cost reduction compared to AWS. In 2018 these projects will thrive and mature, readying for industry-wide adoption. High-speed Big Data processing The Big Data solutions provider Syncsort has published a survey of the Big Data challenges and issues faced by enterprise businesses. One of the key findings of that survey is that nearly 70% of respondents mentioned complications with ETL, meaning they struggle to process incoming data fast enough to keep their data lakes fresh and relevant. The real-time and predictive analytics required to provide solid ground for on-point business analytics demand polished data processing workflows, and over 75% of the Syncsort survey respondents acknowledged they need to be able to process data more rapidly. Big Data visualization models make it possible to present analytics results in a clearly comprehensible form. 2018 is likely to be the year various Big Data visualization tools become even more production-ready and able to cater to the needs of enterprise business. AI and ML used for high-accuracy data analytics The main direction AI and Machine Learning (ML) development is taking today is improving the ways humans interact with computers by writing special algorithms. These algorithms make it possible to automate routine work or improve the results of tasks whose outcomes are traditionally highly dependent on human skills. 2017 saw great accomplishments in ML areas like text translation, optical image recognition, and various other projects. Amazon’s prediction engine is intended to provide better service to customers, yet as of now its accuracy is quite low, around 10% at best. At the end of 2017, AWS joined forces with Microsoft Azure to develop a new-generation AI platform, the Gluon API. By open-sourcing the platform, AWS and Azure hope to encourage AI developers of any skill level to produce cleaner and more efficient AI algorithms. We are sure Gluon will be heartily welcomed in 2018 and will become one of the main tools in any AI developer’s toolkit. Final thoughts on the cutting-edge IT trends for 2018 2018 will definitely be the year multiple cloud, Big Data, IoT and AI/ML projects enter the production-grade phase. Tools like Spark, JupyteR, Gluon and others will find their way into the hearts and toolkits of enterprise specialists. People responsible for digital transformation and for increasing the efficiency of Business Intelligence & Big Data analytics in their companies must keep a close eye on these trends and adopt the solutions once they are ready. Initially, this story was posted on my company’s blog — https://itsvit.com/blog/cutting-edge-trends-2018-cloud-big-data-ai-ml-iot/
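The temperature-sensor example above amounts to report-by-exception filtering at the edge: summarise nominal readings locally and forward only the anomalies. Below is a minimal sketch, assuming a hypothetical nominal band and sensor record format (neither comes from the article).

```python
from dataclasses import dataclass

@dataclass
class Reading:
    sensor_id: str
    temperature_c: float

NOMINAL_C = 60.0     # assumed nominal operating temperature
TOLERANCE_C = 5.0    # assumed allowed deviation before an alert is raised

def filter_at_edge(readings):
    """Report-by-exception: collapse readings inside the nominal band into a
    single summary and forward only the sensors whose temperature spiked,
    which is the behaviour the factory example above describes."""
    alerts = [r for r in readings if abs(r.temperature_c - NOMINAL_C) > TOLERANCE_C]
    return {
        "sensors_seen": len(readings),
        "nominal": len(readings) - len(alerts),   # summarised locally, not forwarded
        "alerts": alerts,                         # only these go upstream to the cloud
    }

# 100 sensors, 99 nominal readings and one spike: the edge forwards a single alert.
readings = [Reading(f"T-{i:03d}", 60.0 + (9.0 if i == 42 else 0.3)) for i in range(100)]
print(filter_at_edge(readings))
```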
Cutting-edge IT trends for 2018: Cloud, Big Data, AI, ML and IoT
1
cutting-edge-it-trends-for-2018-cloud-big-data-ai-ml-and-iot-1204d470ad80
2018-05-19
2018-05-19 01:36:34
https://medium.com/s/story/cutting-edge-it-trends-for-2018-cloud-big-data-ai-ml-and-iot-1204d470ad80
false
1,047
null
null
null
null
null
null
null
null
null
Cloud Computing
cloud-computing
Cloud Computing
22,811
Vladimir Fedak
CEO of IT Svit since 2005 and don't wanna stop | DevOps & Big Data specialist
666636f4158e
FedakV
592
43
20,181,104
null
null
null
null
null
null