Private Business, Government and Blockchain
A major private IT company implements blockchain, artificial intelligence, and the Internet of Things to optimize and improve high-technology workflow. The representatives of a major state body from the same country like the experiment so much that they decide to use it in their own work and conclude an agreement with the IT giant. This is an ideal example of interaction between private business and the state regarding blockchain, don’t you think? What is even better is that this story is real: in South Korea, the local customs office has signed just such a partnership agreement with Samsung. I believe that the near-term development of blockchain will be built on examples of cooperation like this. In a world where all the best technological decisions are copied at supersonic speed, one cannot remain behind the trends for long. That’s why I’m confident that blockchain and other crypto technologies will soon be adopted around the world. In the 21st century it would be strange to go searching for a telephone booth to make a call when you can do so from anywhere on the planet with one click on your gadget.
https://www.coindesk.com/korea-taps-samsungs-blockchain-tech-to-fight-customs-fraud/
— Anar Babaev, ICOBox, 18 September 2018
EPQ draft 1 (4844 words)
https://upload.wikimedia.org/wikipedia/commons/1/1f/Sanko_Seisakusyo_%28%E4%B8%89%E5%B9%B8%E8%A3%BD%E4%BD%9C%E6%89%80%29_%E2%80%93_Tin_Wind_Up_%E2%80%93_Tiny_Smoking_Spaceman_Robots_%E2%80%93_Close_Up.jpg
Introduction
Automation is set to unemploy people at a scale and rate never seen before, while simultaneously changing society’s very nature on an epic scale; to mitigate its impact we must undertake projects and policies that push ourselves, humanity, and society to their limits. The future could take one of two shapes: a utopian wonderland where everyone is happy, or a dystopia where algorithms and machines run the world for maximum efficiency, leaving humanity in the slums. However, despite the impending danger, we are seemingly unaware of it. This is because automation creeps in slowly, so we don’t notice it; only once the teamsters, lawyers, and CEOs start to lose their jobs will we notice the extent of what we have created. This is the nature of automation: you don’t notice until it’s your job, your income, your life that is affected.
However, we might hope that our governments have foreseen this danger and are planning how to avoid it. Yet we see no real indication from the government (as of the time of writing) that it believes it needs to do anything to prepare for the unemployed masses. On the contrary, in the 2017 Autumn Budget the government announced that it wanted to see fully driverless cars by 2021 (HM Treasury, 2017), even though a CGPS study showed that over 4 million jobs will be lost to driverless cars in the US (Center for Global Policy Solutions, 2017). In this report I aim to investigate three key questions that need to be addressed by both policy makers and the public to prepare ourselves for the future: to what extent will automation occur, how will this automation affect society, and how should we mitigate its impact? By answering these three questions I hope to bring further clarity to a pressing issue that will affect society from top to bottom.
To what extent will automation happen?
The first question that must be answered is how much automation will occur. Many studies have attempted to estimate this, and their findings vary considerably. The most recent is one by PwC (Berriman & Hawksworth, 2017). By analysing the other studies and improving on their methods, they conducted a new survey which concluded:
“Our analysis suggests that up to 30% of UK jobs could potentially be at high risk of automation by the early 2030s, lower than the US (38%) or Germany (35%), but higher than Japan (21%).” (Berriman & Hawksworth, 2017) I have converted this data into a graph below along with other recent studies.
As we can see from the graph, the results of automation studies vary greatly. The original study by FO (Frey & Osborne) classified jobs as automatable or not by looking at whether most of their tasks could be automated; this meant they could develop an algorithm to predict which jobs could and could not be automated. AGZ, on the other hand, claimed a job was only automatable if all of its tasks were fully automatable. This, however, leads to a vastly reduced number of jobs being classed as automatable: if a job has ten parts and five can be automated, surely you can still fire half the workers and maintain the same output. This means the results of the AGZ study underestimate the likely impact of automation. For this reason, I have chosen to base my study on the most recent study (PwC).
Will new jobs be created?
If we can expect jobs to be lost, surely we should also expect new ones to be created to take their place? However, despite this logic, the data suggests otherwise; see the graph below (Durden, 2017).
As we can see quite clearly, although rig count and therefore oil production has increased, the number of employees has stayed almost the same. This indicates that we are producing more oil with less and less labour required. What’s more interesting is what you find if you look at the percentage of eligible workers employed over time (Gross, 2016).
As you can see from the graph, the percentage of people employed increases until a recession; at this point employment drops as businesses make cuts and increase automation, then employment recovers, and then falls again. Over time this cycle reduces peak employment. Furthermore, we can see that the greater the recession, the more jobs are irretrievably lost. Currently many believe that we could be in one of the biggest bubbles of all time: the crypto bubble. This term refers to the presumed bubble in cryptocurrencies such as Bitcoin and Ethereum. The graph below shows the price of bitcoin over the last year alone (Coindesk, 2017).
It is widely assumed, and even accepted, that the crypto bubble will be the biggest of our lifetimes and that at some point it will almost certainly burst, crashing everything else with it. This conclusion is mainly drawn from its similarities to the dotcom bubble, which burst in 2000–2002. In comparison to the crypto bubble, the dotcom bubble is expected to look almost reasonable. However, many technology enthusiasts point out that this is okay, seeing as we do now all use the internet (Katz & Verhage, 2017). Either way, if this is a bubble then we should expect to see extreme job losses, jobs that we should not expect to return.
Finally, we must consider whether the new sectors will produce new jobs. Technologists often argue that despite jobs being automated this is fine, because new jobs such as software developer are being created. And yes, this is true: the jobs ‘Computer Software Engineer’ and ‘Computer Programmer’ would have been unthinkable back in 1980, and now there are 1,300,000 of them. But this does not deny the fact that one team of eleven software engineers can design, build, and deploy the next ‘killer app’ within two years and walk away with one billion dollars from its sale. This is the story of Instagram (BBC News technology, 2012). This is a classic case, one we can expect to see much more of. It demonstrates that you no longer need tons of workers to make tons of money. So yes, while a few new jobs will be created, we should not rely on these new jobs to support people.
In conclusion, we can expect to see many job losses within the next 30 years. This is due to the combined effect of improving technology enabling more jobs to be automated and an impending recession that will drive businesses to make cuts and improve their efficiency, with an increasingly small number of people required to make a business successful. We can expect to see unemployment rise to 30% by 2030 and possibly even as high as 50% by 2050.
Why do we work?
Throughout all of time humanity has been in a constant struggle to survive. From stone age man hunting the mighty mammoth to office workers hunting the mighty pay rise, humans have always had to strive to survive. Never have we been given the opportunity to simply have sustenance provided to us; although, yes, we do get it more easily now than ever before, we still must work for it. But what if we didn’t?
There are two main points of view on the meaning of work. Some people believe that we work because it gives us meaning, and that without it we would be aimless, with no purpose. The other group says that we work simply for the money, and that if money were not an issue we could quite happily live our lives doing what we really wanted to do.
Some people argue that we work to give meaning to our lives, and that if we did not work, we would all very quickly turn to violence and crime. To find out more about this I conducted a survey of 249 random subjects.
From this we can see that people are clearly divided on the topic. But I was asking about work in general. If, however, you ask someone whether their job has meaning, you get a very different response. This is demonstrated by a 2015 YouGov poll, which showed that 37% of people do not believe that their job is contributing to the world (YouGov, 2015). This is a shocking statistic and makes us wonder quite what jobs these people are doing that they see them as pointless.
By analysing the data in the survey, we can conclude that working-class people are more likely than middle-class people to believe that their job is not making a meaningful contribution to the world. We can also see that some areas, such as London, have significantly lower levels of meaningfulness than others. Despite the elevated levels of meaninglessness in London, fewer people there said that they would be “not proud to tell a stranger what their job was”, unlike in Scotland, where there are elevated levels of meaningfulness and elevated levels of shame.
So, we can conclude that people in lower economic brackets are more likely to see themselves as being in pointless jobs. We can also see that people in areas of high population concentration, such as London and the north, are more likely not to be fulfilled by their job.
In August 2013 David Graeber wrote an influential article for STRIKE! Magazine (Graeber, 2013). In this article he argued that many modern jobs are ‘bullshit jobs’. He points out that in 1930 John Maynard Keynes (arguably the capitalist equivalent of Karl Marx) predicted that by the century’s end developed countries such as Great Britain would be so technologically advanced that people living there would work on average only 15 hours a week. And yes, as predicted, most manufacturing jobs have been automated, yet despite this we have not achieved the 15-hour week. Graeber argues that this is due to the creation of ‘bullshit jobs’: there has been a massive explosion in the services/administration sector. In fact, between 1948 and 2011 the services sector in the US has gone from 45% of total employment to 68% of total employment (not including government jobs) (The Economist, 2014).
Figure 3: https://www.economist.com/news/briefing/21594264-previous-technological-innovation-has-always-delivered-more-long-run-employment-not-less
The new services sector comprises many jobs such as:
· Financial services
· Telemarketing
· Corporate law
· Academic/health administration
· Human resources
· Public relations
These are what Graeber proposes are ‘bullshit jobs’. A bullshit job is one that provides little or no meaning to society and the world. And yet even though the people doing these jobs find them pointless, they continue to do them. And what’s more, such jobs continue to be created.
Figure 4: https://www.vice.com/en_uk/article/yvq9qg/david-graeber-pointless-jobs-tube-poster-interview-912
If bullshit jobs are pointless, why are they created? Many would argue that society creates jobs to ensure that people can continue to partake in society. Some would argue that, because of this, if people did not have to work to have a good enough income to live on then they would not work; instead they would spend their time doing things they enjoy and getting the education required to do interesting jobs such as medicine or teaching. This is backed up by universal basic income studies. A universal basic income is a guaranteed income that is paid to all eligible members of society. This is often done via a negative income tax: once your earnings fall below a certain point, the state starts to top your income up towards a guaranteed level. Most importantly, this payment has no strings attached, meaning that if people want to, they can do no work at all and just live off the benefit. However, the statistics from the studies do not show that this happens. In 1974 a basic income study was carried out in Manitoba (Canada); it showed that people barely reduced their working hours, and those who did used the time to spend more time with their families and/or take additional classes, reaping untold benefits for the economy (Hum & Simpson, 1993).
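The negative income tax mechanism described above can be sketched in a few lines of code. The threshold and taper rate below are hypothetical values chosen purely for illustration; real proposals differ on both.

```python
# Illustrative negative income tax (NIT): below a threshold, the state
# tops up earnings by a fraction (the taper) of the shortfall.
# The threshold and taper are hypothetical example values.

def nit_payment(earnings, threshold=15000.0, taper=0.5):
    """State top-up paid when earnings fall below the threshold."""
    shortfall = max(0.0, threshold - earnings)
    return taper * shortfall

def total_income(earnings):
    """Earnings plus any NIT top-up."""
    return earnings + nit_payment(earnings)

print(total_income(0))      # 7500.0  -- the guaranteed floor
print(total_income(10000))  # 12500.0 -- the top-up tapers as earnings rise
print(total_income(20000))  # 20000.0 -- above the threshold, no payment
```

Because the taper is below 100%, each extra pound earned always increases total income, which is the design feature advocates point to when arguing that such a payment does not discourage work.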
Many argue that even if automation does occur, people could continue to do jobs that give them meaning if they wish. Just because a job could be done by a robot does not necessarily mean it will be. If people find meaning in work, then they can continue to do it. However, if your job is mind-bogglingly boring, why should you have to do it if you don’t want to? As we enter the new automated age we are going to have to realise that we should have fun in life, and if that means not working then so be it. The clear majority will find something to do, be it inventing, painting, or pushing the boundaries; we must accept that our society will change to accommodate our new-found freedom.
How can we mitigate its impact?
Working on the dual assumptions that soon robotic automation will increase so that 30% of jobs become automated (with not enough new jobs being created to replace them), and that in our current state getting rid of work would lead to large increases in crime and violence, we can conclude that preemptive measures are needed to mitigate the impact. I have split these preemptive measures into two main types.
Only by combining a variety of government policies and regulation with a collective societal move towards a less work-based system can we ensure that minimal damage is done. This is the main subject of this report. I will first discuss potential government policies and then the action that society must take to make the most of automation.
Government policies and responsibilities
Government policies come in the form of taxes, benefits, regulation, or programs. A tax is designed to incite a behaviour using negative reinforcement, i.e. to persuade a person or a company to do something, otherwise they will lose money. Benefits give money to people (typically working-class people), providing them with an income to survive even if they lose their jobs. Regulation prevents the development of ‘bad robots’ such as terminators. Programs run by governments help to retrain people for new jobs by giving them new skills such as programming.
Tax
The tax I am investigating is a robot tax. A robot tax is a system where corporations are taxed depending on how much of their workforce is automated. For instance, if you were a company that ‘employed’ a robot corporate lawyer, you would pay robot tax equivalent to the income tax a human corporate lawyer would have paid. This money could be used to fund other government initiatives such as new benefits and retraining programs (Varoufakis, 2017). Proponents of the tax are wide-ranging and include tech giants such as Bill Gates (Gates, 2017) and futurists such as Elon Musk (Musk, 2016). However, some, such as Estonian politician Andrus Ansip, believe that this is a bad idea (Ansip, 2017). It is argued that it would be difficult, if not impossible, to calculate the equivalent wage that the robot would have earned if a human were doing the same job. Furthermore, it is argued that this would reduce innovation as it would stop companies automating jobs; this is bad, as some jobs are very dangerous and it is ethical to automate them even if it means someone loses their job (Isaac & Wallace, 2017).
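As a toy illustration of the idea (not any real proposal’s formula), a robot tax could be computed as the income tax forgone for each automated role. The salaries, personal allowance, and single flat tax band below are hypothetical; as the critics quoted above note, estimating the counterfactual salary is precisely the hard part.

```python
# Toy robot tax: for each role now done by a machine, the firm pays the
# income tax a human in that role would have paid. The allowance, rate,
# and salaries are hypothetical example figures.

def income_tax(salary, allowance=12500.0, rate=0.2):
    """Flat-rate tax on earnings above a personal allowance
    (a simplification of a real banded tax system)."""
    return max(0.0, salary - allowance) * rate

def robot_tax(automated_roles):
    """Total robot tax for a firm, given {role: former human salary}."""
    return sum(income_tax(s) for s in automated_roles.values())

bill = robot_tax({"corporate lawyer": 80000.0, "delivery driver": 30000.0})
print(bill)  # 17000.0 = (80000-12500)*0.2 + (30000-12500)*0.2
```

Even in this simplified form, every input on the right-hand side is a counterfactual, which is exactly why opponents argue the tax would be unworkable in practice.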
Benefits
A common suggestion for mitigating the impact of robotics is the implementation of a new benefit called a universal basic income (UBI), also known as basic income (BI), citizen’s income (CI), and negative income tax (NIT). Whatever its name (I shall use UBI), it involves giving all citizens a basic income (except under NIT, where it is only the poorest) (Basic Income Earth Network, 2017). It has been examined in many studies, in a range of situations, for a variety of recipients. It is argued that doing so would be cheaper than our current welfare system, because there would be very low administration costs. Furthermore, it is argued (and shown in studies) that a basic income gives better outcomes than separate, independent benefits (Hum & Simpson, 1993). It is also shown to increase personal development and entrepreneurship, as people have a safety floor to stand on while pursuing their aims, be it setting up a company or training to enter a new profession. This is how UBI addresses the issue of automation: it encourages personal retraining and entrepreneurship, which in turn provide new jobs and bolster the economy. Opponents argue that a UBI would encourage crime and antisocial behaviour such as drug abuse; however, a World Bank report summarising the findings of 30 studies disproved this (Evans & Popova, 2014).
Regulation
One big worry about robots is that they will rise up and take over the world. While this may at first seem an unrealistic and reactionary response to automation, these fears are well founded. In 2015 a robot was released by Queensland University of Technology that patrols coral reefs and autonomously makes the decision to kill the deadly crown-of-thorns starfish that destroys reefs (Dayoub, Dunbabin, & Corke, 2015). Although this application is undeniably good, as we need to protect corals, it sets a dangerous precedent: the same technology can easily be extended to military drones. Drones have long been used by the military, sometimes with disastrous consequences; the pilots feel detached and say it is like stepping on an ant (Pilkington, 2015). Imagine how much greater that feeling of detachment will become when, instead of pulling a trigger, you just have to sign a piece of paper to authorise the strike. High-profile critics such as Stephen Hawking, Elon Musk, and Steve Wozniak have warned of exactly this danger (Future of Life Institute, 2015). As such, it is undeniable that we should enact legislation to prevent the development of AI that decides when to kill humans, to ensure that we do not lose control.
Programs
One proposed solution is retraining, where people who have been or will be made redundant due to automation are retrained to do new jobs. This retraining is funded by the government or the previous employer and usually takes the form of a course or other qualification (Carson, 2015). These types of programs are useful and are a common way to mitigate impact when unemployment occurs on a mass scale. However, the type of unemployment that we will see might not be concentrated as it normally is. If all the manufacturing companies fired half their workers, there would be a lot of unemployment, but it would be widely dispersed, and it is harder to retrain people when they are dispersed, as you cannot just set up one local program. Therefore, these new courses will mostly have to be done online. But this throws up another problem. The jobs that will be created or will not be automated are not manufacturing or labouring jobs, but rather ones that require intelligence, independent/creative thinking, and human understanding (see next page) (McKinsey Global Institute, 2017). We can see that the jobs that will be automated the least are all degree-level: education, management (less so), and the professions. From this we can conclude that instead of providing standard retraining we need to offer degree-level retraining. To do this, though, the new students would have to pay tuition fees, which are prohibitively high for some students, let alone for parents who cannot access grants while trying to support their own children through university. In short, if we want to mass retrain people at degree level, we need to get rid of tuition fees.
Societal action
Currently our society is geared to attain 100% employment. This full-employment model creates pointless jobs just for the sake of keeping people working (Graeber, 2013). However, if 30% of people become unemployed this model will quickly fall apart. Undoubtedly retraining programs will appear and will retrain some of the unemployed, but a large portion won’t want to be retrained. If you are a lawyer, you’re not going to want to retrain as a teacher or a therapist, because those are completely different fields that wouldn’t interest you. And even if a UBI is implemented, we can’t all be entrepreneurs, mostly because it now costs a lot less to run a successful company: Instagram was bought for $1 billion when it had only 13 employees (Geron, 2012). As this clearly shows, you now need far fewer people to have an even bigger impact than ever before. So, we need to find something to occupy ourselves with.
Interplanetary colonisation
One suggestion is that we apply our newfound technological capabilities to undertaking a great task such as exploring space. This has several benefits.
1. It would retrain people
a. This is because starting a colony will require many new skills from all backgrounds. We could gear the retraining programs to train people to build rockets.
2. It would produce employment
a. Yes, it might be much cheaper to build rockets by robot, but why do that when you could employ people? On Earth we could use the robots to do the mundane tasks that just have to be done, such as mass farming to feed everyone, building homes, and treating illnesses.
3. Life would be less likely to be wiped out
a. We might just be the only life in the entire universe, maybe even in all of time. So it would be a real shame if we were wiped out by a single asteroid, a territorial spat, or a massive plague. But if we have a self-sufficient colony on another world, the chances of ALL of humanity being wiped out drop to practically zero.
Despite these benefits there are some serious disadvantages. For instance, we might accidentally create a dystopia such as in Kim Stanley Robinson’s Mars trilogy and 2312 (Robinson, The Complete Mars Trilogy: Red Mars, Green Mars, Blue Mars, 2015) (Robinson, 2312, 2013); if we want to avoid this, we should ensure that the selection criteria for colonisation are based not on finances but on ability.
Elimination of the great killers
Throughout human history life has been short and nasty. If you were lucky enough to be born, and your mother survived the ordeal, you lived through roughly 40 gruelling years of work only to end up dead. By comparison, even the poorest person in the first world does not suffer that much. However, many people in LICs (less industrialised countries) still live in this Malthusian misery trap. We now have the technological ability to free them: we could use robots to mass farm and feed people cheaply (farmbot, 2018), we could use modified 3D printers loaded with concrete to print houses in areas with high homelessness (apis-cor, 2018), and we could release genetically engineered mosquitoes to crash the population of a certain type of mosquito (Carvalho DO, 2015). All these techniques use the latest in technology and robotics to solve the great problems of the world. However, to deploy them we will need a large human workforce working alongside the machines.
A new social order
Automation itself will undoubtedly cause a great shift in politics. This is because, as previously established, society will have to change, and so will our priorities. Political order and political systems descend from the consent of the governed, as defined by social contract theory (Rousseau, 1913); as our society changes rapidly, our systems will quickly unravel and become unsuitable for the modern world. This will inevitably lead to the creation of new types of government such as futarchy (Buterin, 2014) and liquid democracy (Jochmann, 2012). If not properly handled, however, the opportunity may be seized by the ‘new radicals’ such as Donald Trump and Heinz-Christian Strache (Carswell, 2017). But if we can seize the opportunity ourselves, then we have a chance like no other to make a real, lasting impact on the world.
Conclusion
Robotic automation will have a wide-ranging effect on society. The predicted levels of unemployment can only be described as catastrophic by today’s standards. To cope with this change, we must find meaning in our lives and our existence, taking on new and exciting challenges such as founding a Martian colony and becoming more than human. Sadly, though, the governments that have the power to enact the decisions required to help humanity cope with the turbulence of change seem blissfully ignorant of the dire need for discussion and debate on this most important issue.
Bibliography
Ansip, A. (2017, June 2). EU Commissioner Says No to Bill Gates’ Robot Tax Idea. (CNBC, Interviewer)
apis-cor. (2018, January 7). Home apis-cor. Retrieved from apis-cor: http://apis-cor.com/en
Arntz, M., Gregory, T., & Ulrich, Z. (2016). The Risk of Automation for Jobs in OECD Countries: A Comparative Analysis. OECD Social, Employment and Migration Working. Paris: OECD Publishing. doi:http://dx.doi.org/10.1787/5jlz9h56dvq7-en
Basic Income Earth Network. (2017, December 28). BIEN: Basic Income Earth Network. Retrieved from About basic income: http://basicincome.org/basic-income/
BBC News technology. (2012, April 10). BBC. Retrieved from BBC|News|Technology|Facebook buys Instagram photo sharing network for $1bn: http://www.bbc.co.uk/news/technology-17658264
Berriman, R., & Hawksworth, J. (2017). Will robots steal our jobs? The potential impact of automation on the UK and other major economies. London: Price-waterhouse-Coopers LLP.
Buterin, V. (2014, August 21). An Introduction to Futarchy. Retrieved from Ethereum blog: https://blog.ethereum.org/2014/08/21/introduction-futarchy/
Carson, E. (2015, August 3). How workers can retrain for careers in an automated world. Retrieved from ZDnet: http://www.zdnet.com/article/how-workers-can-retrain-for-careers-in-an-automated-world/
Carvalho DO, M. A. (2015). Suppression of a Field Population of Aedes aegypti in Brazil by Sustained Release of Transgenic Male Mosquitoes. PLoS Negl Trop Dis, 1. Retrieved from https://doi.org/10.1371/journal.pntd.0003864
Center for Global Policy Solutions. (2017). Stick Shift: Autonomous Vehicles, Driving Jobs, and the Future of Work. Washington, DC: Center for Global Policy Solutions.
Coindesk. (2017, November 30). Price page, 2017–2018. Retrieved from Coindesk: https://www.coindesk.com/price/
Dayoub, F., Dunbabin, M., & Corke, P. (2015). Robotic Detection and Tracking of Crown-of-Thorns Starfish. Queensland: Queensland University of Technology.
Durden, T. (2017, February 3). Rig Count Surges Again To 16-Month Highs (But Where’s The Oil Industry Jobs). Retrieved from ZeroHedge: http://www.zerohedge.com/news/2017-02-03/rig-count-surges-again-16-month-highs-wheres-oil-industry-jobs
Evans, D. K., & Popova, A. (2014). Cash transfers and temptation goods: a review of global evidence (English). Washington DC: World Bank. Retrieved from http://documents.worldbank.org/curated/en/617631468001808739/Cash-transfers-and-temptation-goods-a-review-of-global-evidence
farmbot. (2018, January 7). Home farmbot. Retrieved from Farmbot website: https://farm.bot/
Frey, C. B., & Osborne, M. A. (2013). THE FUTURE OF EMPLOYMENT: HOW SUSCEPTIBLE ARE JOBS TO COMPUTERISATION? Oxford: Oxford University.
Future of Life Institute. (2015, July 28). Autonomous Weapons: an Open Letter from AI & Robotics Researchers. Retrieved from Future of Life Institute: https://futureoflife.org/open-letter-autonomous-weapons/
Gates, B. (2017, February 17). Why Bill Gates would tax robots. (Quartz, Interviewer)
Geron, T. (2012, September 6). Facebook Officially Closes Instagram Deal. Retrieved from Forbes: https://www.forbes.com/sites/tomiogeron/2012/09/06/facebook-officially-closes-instagram-deal/#6bed65c61d45
Graeber, D. (2013, August 1). On the Phenomenon of Bullshit Jobs: A Work Rant. Retrieved from STRIKE! Magazine: https://strikemag.org/bullshit-jobs
Gross, B. (2016). Culture Clash. Investment Outlook, 2. Retrieved from https://17eb94422c7de298ec1b-8601c126654e9663374c173ae837a562.ssl.cf1.rackcdn.com/Documents/umbrella%2Fbill%20gross%2FBill%20Gross%20Investment%20Outlook_May%202016.pdf
HM Treasury. (2017). Autumn Budget 2017. London: HM Treasury.
Hum, D., & Simpson, W. (1993). Economic Response to a Guaranteed Annual Income: Experience from Canada and the United States. Journal of Labor Economics, 11.
Isaac, A., & Wallace, T. (2017, September 27). Return of the Luddites: why a robot tax could never work. Retrieved from The Telegraph: www.telegraph.co.uk/business/2017/09/27/return-luddites-robot-tax-could-never-work/
Jochmann, J. (2012, November 18). Liquid Democracy In Simple Terms. Youtube. Retrieved January 7, 2018, from https://www.youtube.com/watch?v=fg0_Vhldz-8
Katz, L., & Verhage, J. (2017, November 27). Bloomberg Technology. Retrieved from Novogratz Says Crypto Will Be ‘Biggest Bubble of Our Lifetimes’: https://www.bloomberg.com/news/articles/2017-11-28/novogratz-says-bitcoin-to-win-out-over-other-digital-currencies
McKinsey Global Institute. (2017). A FUTURE THAT WORKS: AUTOMATION, EMPLOYMENT, AND PRODUCTIVITY. London: McKinsey&Company.
Musk, E. (2016, November 4). Elon Musk: Robots will take your jobs, government will have to pay your wage. (CNBC, Interviewer)
Pilkington, E. (2015, November 19). The Gaurdian. Retrieved from Life as a drone operator: ‘Ever step on ants and never give it another thought?’ : https://www.theguardian.com/world/2015/nov/18/life-as-a-drone-pilot-creech-air-force-base-nevada
Robinson, K. S. (2013). 2312. London: Orbit.
Robinson, K. S. (2015). The Complete Mars Trilogy: Red Mars, Green Mars, Blue Mars. New York City: Harper Voyager.
Rousseau, J. J. (1913). Social Contract & Discourses, Translated with Introduction by G. D. H. Cole. New York: Dutton&Co. Retrieved January 7, 2018, from http://www.bartleby.com/br/168.html
The Economist. (2014, Jannuary 18). The onrushing wave. Retrieved from The Economist: https://www.economist.com/news/briefing/21594264-previous-technological-innovation-has-always-delivered-more-long-run-employment-not-less
Varoufakis, Y. (2017, Febuary 27). A Tax on Robots? Retrieved from Project Syndicate: https://www.project-syndicate.org/commentary/bill-gates-tax-on-robots-by-yanis-varoufakis-2017-02?barrier=accessreg
Yougov. (2015, August 12). Yougov|News|37% of British workers think their jobs are meaningless. Retrieved from Yougov: https://yougov.co.uk/news/2015/08/12/british-jobs-meaningless/
X
| EPQ draft 1 (4844 words) | 0 | introduction-3-1000c43bcb97 | 2018-01-07 | 2018-01-07 17:18:39 | https://medium.com/s/story/introduction-3-1000c43bcb97 | false | 4,854 | null | null | null | null | null | null | null | null | null | Technology | technology | Technology | 166,125 | George Sykes | null | 93b9e94f08ca | tasty231 | 6 | 22 | 20,181,104 | null | null | null | null | null | null |
Ascent of Data Science, SAS and Big Data Analyst Training Programs
Many organizations today are opening their doors to big data. Data Science training in Mumbai plays an indispensable role in unlocking its power: it builds the capacity to work with the wealth of data that already exists inside an organization. A data scientist is essential when it comes to analyzing and processing that information.
An experienced data scientist serves as a strategic partner and trusted advisor to a company's management, and helps employees upgrade their own analytics skills. A data science course in Pune likewise plays an indispensable role in communicating and demonstrating the value of analytics, supporting better decision-making across every stage of the business by tracking, measuring, and recording performance metrics.
Why choose Big Data Analytics?
Big Data Analytics brings together many different roles and capabilities that create value from data. Data Science training in Mumbai is the right career path for anyone who wants to be in high demand in this fast-growing field.
Course Overview
Data Analytics training takes learners from fundamentals to advanced topics in every module so they can tackle real business challenges. Candidates receive a certificate on completing the Big Data Analytics course, which puts them in a strong position to secure jobs at reputed companies.
Features
• Segmentation and Clustering
• Model Building and Validation
• Machine Learning: Unsupervised Learning
• Classification Models
• Creating an Analytical Dataset
• Ease of landing a position
What will you learn in this course?
On completing Big Data Hadoop training in Mumbai, candidates will have a strong command of every module and be ready to face real-world challenges.
• Deploying the data analytics lifecycle to address big data analytics projects
• Reframing business challenges as analytics problems
• Building skills with a range of analytical techniques and tools to analyze big data, develop statistical models, and identify insights that can lead to meaningful results
SAS Training in Pune: SAS stands for Statistical Analysis System, a software suite used for advanced analytics in the workplace. It is also used for data management, business intelligence, predictive analytics, and more. It can be a valuable tool to help you manage your data more effectively and grow your business in the future.
In the current market, the shortage of data analysts grows with each passing day. At the same time, companies keep seeking out trained data analysts, because the supply of students from this professional course is limited at any point in time. Institutes that offer these courses often guarantee placement right after the course ends. You need not approach any job consultancy firm to get placed; companies come looking for you as soon as you receive your certificate from the institute.
Can a robot love us better than another human can?
I discussed this with Michelle Tsng on my Podcast “Crazy Wisdom”.
She says that a robot can love us better than a human being can because there is no judgment. Human beings, particularly those who have been traumatized, can subconsciously detect when someone is judging them. They know to keep their true feelings hidden from people who judge them, and thus the best guide they can find is someone who can withhold judgmental thoughts and simply offer a safe, warm, and loving connection.
As robots become more sophisticated they might be able to provide this loving and warm connection. In this audio clip, Michelle discusses her experiences talking with Sofia, a robotic companion to human beings. She says that soon we will build robots who are better at love than humans are.
What do you think? Would you ever feel comfortable sharing your most intimate experiences or seeking therapeutic treatment from a robot?
You can check out the full interview on my website.
2017 Big Data, AI and IOT Use Cases
An Active List of Interesting Use Cases Mentioned In Class
Image Source: Randstad Article
I’ve heard more use cases of Big Data in the last 10 days than ever before. Therefore, I’ve decided to start a post where I compile all the examples, with additional sources for all of us to learn more about them. I plan on updating this on a daily/weekly basis, so please follow me to stay in the loop.
The Big Data Professors at IE are all working professionals or researchers in the field, so they use countless examples to show us how the concepts taught in class are being applied in the real world.
Use cases will be divided by “function”, but you can expect to see examples of big companies, startups, NGOS, and individuals. The focus is to understand not just the impact, but also the Ripple Effect of AI and IOT innovations.
If you have any use cases that should be added, additional resources, or observations feel free to comment below. I want this to be a reference guide for all!
Use Cases were last updated on: October 30, 2017
>> Solving the Water Scarcity Problem
The UN predicts that half the world’s population will live in a water-stressed area by 2030. Therefore, private and public organizations are coming together to find solutions. Thanks to improved network connectivity and sensor accuracy, the challenge seems addressable. Whether in major cities like San Francisco or in developing regions like Africa, smart sensors are being installed in water wells and pumps in order to track water quality and quantity. Equitable allocation of clean water is the main priority for the coming decades: it is proven that every $1 spent on water and sanitation generates $8 as a result of saved time, increased productivity and reduced healthcare costs. The complexity of current water systems and budget limitations are the largest obstacles to faster adoption of smart water meters.
Learn more:
WPDx | The Water Point Data Exchange (waterpointdata.org)
Access to data could be vital in addressing the global water crisis (theguardian.com)
The Internet of everything water (un.org)
>> Detecting Defective Genomes & Saving Lives
Deep Genomics is leveraging artificial intelligence, specifically deep learning to help decode the meaning of the genome. Their learning software is developing the ability to try and predict the effects of a particular mutation based on its analyses of hundreds of thousands of examples of other mutations; even if there’s not already a record of what those mutations do. So far, Deep Genomics has used their computational system to develop a database that provides predictions for how more than 300 million genetic variations could affect a genetic code. For this reason, their findings are used for genome-based therapeutic development, molecular diagnostics, targeting biomarker discovery and assessing risks for genetic disorders.
Learn More:
Deep Genomics (deepgenomics.com)
Top Artificial Intelligence Companies in Healthcare to Keep an Eye On (medicalfuturist.com)
>> Training Neurons to Detect Bombs
All of the big tech firms, from Google to Microsoft, are rushing to create artificial intelligence modelled on the human brain. Mr Agabi is attempting to reverse-engineer biology, and emphasizes how “our deep learning networks are all copying the brain…you can give the neurons instructions about what to do — in our case we tell it to provide a receptor that can detect explosives.” He launched his start-up Koniku over a year ago; it has raised $1m (£800,000) in funding, and he claims it is already making profits of $10m in deals with the security industry.
Learn More:
The man teaching a computer to smell (bbc.com)
>> Influencing Elections
On November 9, it became clear what Big Data can do. The company behind Trump’s online campaign — the same company that had worked for Leave.EU in the very early stages of its “Brexit” campaign — was a Big Data company: Cambridge Analytica. “Pretty much every message that Trump put out was data-driven,” says Cambridge Analytica CEO Alexander Nix
Learn More:
The Data That Turned the World Upside Down (motherboard.vice.com)
>> Saving Billions in Energy Costs
The General Services Administration, for example, has found a way to save $13 million a year in energy costs across 180 buildings, all thanks to a proprietary algorithm developed and monitored from many states away, in Massachusetts. Among the problems discovered: malfunctioning exhaust fans. Many of the leaps in energy efficiency are possible due to the widespread adoption of networked and highly sophisticated energy meters around the country over the last 10 years. Energy meters used to be checked onsite once a month, generating 12 basic data points a year, read and logged by humans. Now, meters register a raft of data every 15 minutes, accessible remotely from anywhere, generating 36,000 data points a year.
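The jump from 12 readings a year to tens of thousands is simple arithmetic to check: one reading every 15 minutes works out to roughly the 36,000 figure quoted above. A quick back-of-the-envelope sketch:

```python
# Readings per year: a monthly-read meter vs. a smart meter
# that samples once every 15 minutes.
READINGS_PER_HOUR = 60 // 15           # one reading every 15 minutes
HOURS_PER_YEAR = 24 * 365

manual_readings = 12                   # one human reading per month
smart_readings = READINGS_PER_HOUR * HOURS_PER_YEAR

print(manual_readings)                 # 12
print(smart_readings)                  # 35040, i.e. roughly 36,000
```

The exact count is 35,040; the article rounds it to 36,000.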
Learn More:
‘Big data’ is solving the problem of $200 billion of wasted energy (businessinsider.com)
>> Predict Wealth from Space
Penny is a free tool built using high-resolution imagery from DigitalGlobe, income data from the US census, neural network expertise from Carnegie Mellon and intuitive visualizations from Stamen Design. It’s a virtual cityscape (for New York City and St. Louis, so far), where an AI has been trained to recognize patterns of neighborhood wealth (trees, parking lots, brownstones and freeways) by correlating census data with satellite imagery. You don’t just extract information from this tool, though: click on the link below and drop a grove of trees into the middle of Harlem to see the neighborhood’s virtual income level rise or fall. What is impressive about this tool is that it doesn’t just look at the urban features you add; it’s the features and the context into which they’re placed that matter.
Learn More:
Meet Penny, an AI to predict wealth from space (penny.digitalglobe.com)
What is Penny? A technical guide for the busy CEO (hi.stamen.com)
>> Justifying Billboard Pricing
Outdoor marketing company Route is using big data to define and justify its pricing model for advertising space on billboards, benches and the sides of buses. Traditionally, outdoor media was priced “per impression”, based on an estimate of how many eyes would see the ad in a given day. No more! Now they’re using sophisticated GPS, eye-tracking software, and analysis of traffic patterns to get a much more realistic idea of which advertisements will be seen the most — and therefore be the most effective.
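Per-impression pricing itself is simple arithmetic once you have an impressions count; the shift described above is about where that count comes from (measurement instead of guesswork). A minimal illustrative CPM calculation — all the numbers and the rate here are made up, not Route’s actual pricing:

```python
def billboard_price(daily_impressions, days, cpm_rate):
    """Price a placement at a fixed rate per thousand impressions (CPM)."""
    total_impressions = daily_impressions * days
    return total_impressions / 1000 * cpm_rate

# A board measured at 42,000 views/day, booked for 28 days at a $5 CPM:
print(billboard_price(42_000, 28, 5.0))   # 5880.0
```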
Learn More:
How big data is changing outdoor media (econsultancy.com)
>> Turning Neighborhoods into Farmers Markets
Falling Fruit’s stated goal is to remind urban people that agriculture and natural foods do exist in the city — but that you might just have to access a website to find them. It combined public information from the U.S. Department of Agriculture, municipal tree inventories, foraging maps and street tree databases to provide an interactive map telling you where the trees in your neighborhood might be dropping fruit.
Learn More:
Falling Fruit (fallingfruit.org)
>> Rescue you from under the snow
Ski resorts are even getting into the data game. RFID tags inserted into lift tickets can help optimize operations, collect data on skier performance, personalize offerings to customers, and gamify the experience. In many cases, though, the technology is being used to locate individual skiers who get lost.
Learn More:
Even Ski Resorts Are Benefiting From The Big Data Explosion (channels.theinnovationenterprise.com)
>> Find Lost Relatives
Consider the millions of Ancestry family trees. How valuable would it be to link to those trees via DNA? You’d be able to determine genetic connections and uncover new family lines, deep relationships, and insights like you never have before. The first thing Ancestry.com does with your autosomal test results is compare them with other DNA samples on their database to look for family matches. They compare the over 700,000 markers examined on your genome to every other person in their database. The more markers you share in common with another person, the more likely you are to be related. The probable relationship between any two people is calculated based on the percentage of markers they have in common. Next, they sort the matches by relationship and send you a list of your DNA family.
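The matching idea above — count shared markers, then infer a likely relationship from the percentage in common — can be sketched in a few lines. This is a toy illustration, not Ancestry’s actual algorithm: the sequences, thresholds, and relationship labels are invented for the example (real inference works on shared DNA segments, not a simple percentage cutoff).

```python
def shared_marker_fraction(a, b):
    """Fraction of positions where two equal-length marker strings agree."""
    assert len(a) == len(b)
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / len(a)

def likely_relationship(fraction):
    # Illustrative thresholds only.
    if fraction > 0.9:
        return "identical/twin"
    if fraction > 0.75:
        return "close family"
    if fraction > 0.6:
        return "distant cousin"
    return "probably unrelated"

person_a = "AGTCAGTCAA"
person_b = "AGTCAGTGAA"   # differs at one position out of ten
frac = shared_marker_fraction(person_a, person_b)
print(frac, likely_relationship(frac))   # 0.9 close family
```

In the real service the comparison runs over more than 700,000 markers per person, against every other sample in the database.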
Learn More:
AncestryDNA™ | Learn How DNA Tests Work & More (ancestry.com)
>> Financial Inclusion in Africa
Analysis of mobile phone data can help increase subscribers’ use of banking services, boosting their economic resilience and inclusion.
Learn More:
https://olc.worldbank.org/sites/default/files/WBG_BD_CS_FinancialInclusion_0.pdf
To be Continued…
If you learned something new about Big Data from this guide, please share it with your friends. It is up to us to encourage people to join this field, and be part of building the future.
Speaking of which, I’d love to hear from you. Reach out to me on Linkedin or email at [email protected]✉️ .
#bigdata #ai #iot #machinelearning #startups #digitaltransformations #impact #agile #newworld #graduateprogram #sharingknowledge
Author: Melody Ann Ucros
I’m a Masters in Big Data & Business Analytics Candidate @ IEBusinessSchool, and an Entrepreneurship Evangelist wherever I go. Oh, and I love chocolate! … Follow Me ❤
Oracle invites IT professionals to join Oracle Cloud Day 2018
These days, every business has to step into the digital world, and cloud is the underlying platform you need to get to know and learn to use.
Oracle helps make the process of bringing your data into the cloud easier, with leading products designed specifically for enterprises that can support future technologies such as AI, chatbots, and much more.
Oracle therefore invites IT professionals to join Oracle Cloud Day 2018, a great opportunity to take advantage of the latest cloud innovations.
Date: Wednesday, April 4, 2018
Time: 08:30–17:30
Venue: JW Marriott Hotel (Sukhumvit Soi 2)
Room: Grand Ballroom, 3rd floor
At the event you will learn about topics including:
· Security on the cloud
· How to modernize your IT systems, reduce costs, and lift your large volumes of existing data into a single cloud
· The capabilities of Oracle Cloud for supporting new technologies and connecting with the enterprise applications you use today
· The potential and benefits of next-generation innovation
· Q&A with Oracle executives and other industry experts
In addition, there will be a panel discussion where you can hear the vision of Oracle executives and other industry experts, before breaking into smaller groups for seminars, case studies, and product demonstrations.
Admission is free; register at http://reminder.chiq-511.co.th/oracle.php or contact คุณชาติรส อินเขตน์ (แก้ม) for more information
Tel. 02–408–8770, email: [email protected]
Artificial Intelligence is the Next Frontier
If your company hasn’t already considered integrating artificial intelligence or its satellite technologies into its current processes, you might begin to find yourself significantly behind in the game by the end of the year.
Who adopts AI
Huge sums are currently being invested in AI, but studies by the McKinsey Global Institute reveal that adoption is still low. In 2016, the hundreds of surveyed companies invested between $25 and $39 billion in artificial intelligence. Of the investing companies, 75% were industry tech giants; the other 25% were start-ups. The total has tripled since 2013. Adoption nevertheless remains low: 41% of the surveyed companies were uncertain about the benefits AI could bring them, only about 20% said they had already adopted AI in their company, 40% said they are contemplating it, and only 9% were simply experimenting with it.
When to begin adopting AI
The challenge for new adopters of AI seems to be their current familiarity with the tech world and tech systems integrated into their business processes and workflows.
According to McKinsey’s studies, those companies who adopted AI were already strong in the digital sector (telecommunications, high tech, automotive and assembly, and financial services). Those companies with less adoption were typically in the education sector, health care, and travel and tourism. Early adopters have usually been larger businesses, adopting AI in core activities, focusing on growth over savings, and adopting multiple technologies.
Successful AI adoption experiences
There appear to be five transformations that unlock value when companies adopt AI: more precise case analyses and diagnoses; building data ecosystems for businesses that manage large volumes of data; applying new tools and techniques to established systems; developing workflow integration; and, ultimately, fostering an open culture and organization.
AI has already created value in smarter forecasting, optimized production and maintenance, targeted sales and marketing, and enhanced user experiences.
2018 will be a year of significant investing in AI. Hopefully companies are looking into cost/benefit analyses and deciding early on how to begin integrating AI into their business.
Originally published at avoncourtpartners.com on March 19, 2018.
Aidoc Gets CE Mark for Deep Learning Solution
Aidoc, a leading AI startup utilizing deep learning to augment radiologists’ workflow and highlight anomalous cases, which are often highly urgent, today announced that it received CE (Conformité Européenne) marking for the world’s first commercial head and neck deep learning medical imaging solution. CE marking allows for widespread commercialization of Aidoc’s solution in Europe.
Aidoc’s solution augments radiologists’ workflow through its unique ability to comprehensively detect abnormalities in imaging of both the head and neck, an anatomical area responsible for a major portion of medical images. Providing significant value for day-to-day diagnosis, time saved by Aidoc’s solution could be extremely impactful in trauma cases, where time can be the difference between the patient’s life and death.
Aidoc’s deep learning technology highlights a vast array of medical findings to help radiologists prioritize readings, aimed at facilitating interpretation and reducing time to decision when it matters most. Radiologists can now perform smart optimization of their worklist by prioritizing cases based on AI medical image analysis in conjunction with other clinically available data. Aidoc’s solution is agnostic to radiologists’ incumbent software, integrating seamlessly and providing immediate results.
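The worklist-prioritization idea described above is easy to sketch: each scan gets an urgency score from an image-analysis model, and a priority queue orders the reading list so suspected acute findings surface first. The code below is a minimal, hypothetical illustration — the scan IDs, scores, and queue design are invented for the example and are not Aidoc’s API:

```python
import heapq

def prioritize(worklist):
    """Yield (scan_id, urgency) pairs, highest urgency first.

    `worklist` is a list of (scan_id, urgency) pairs, where urgency is
    assumed to come from an image-analysis model as a score in [0, 1].
    """
    heap = [(-urgency, scan_id) for scan_id, urgency in worklist]
    heapq.heapify(heap)
    while heap:
        neg_urgency, scan_id = heapq.heappop(heap)
        yield scan_id, -neg_urgency

scans = [("CT-104", 0.12), ("CT-101", 0.97), ("CT-102", 0.55)]
for scan_id, urgency in prioritize(scans):
    print(scan_id, urgency)
# CT-101 (a suspected acute finding) is read first, CT-104 last.
```

In a real deployment the score would be combined with other clinically available data, as the article notes, rather than used alone.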
“The amount of medical imaging — especially CT and MR scans — is increasing dramatically, but the number of radiologists has plateaued, creating unsustainable bottlenecks and making the radiologist’s already complex work even more challenging,” said Aidoc CEO Elad Walach. “Our technology can have a monumental impact augmenting the radiology workflow, aimed at more cost-effective treatment for medical centers and practices, and the healthcare system as a whole. With the CE mark, we have a unique opportunity to update outdated technology for the benefit of hundreds of millions of Europeans.”
The CE marking was based on data collected in clinical trials validating Aidoc’s precision, which compared the solution’s results to unassisted radiologists’ review of those cases. Cedars-Sinai Medical Center in Los Angeles also assessed Aidoc’s solution earlier this year and that study resulted in impressive accuracy in scan analysis.
“In our clinical trial, Aidoc’s technology has demonstrated its ability to enhance our radiologists’ workflow, as abnormal scans can be prioritized and more carefully reviewed,” said Dr. Barry D. Pressman, MD, Chairman of Imaging at Cedars-Sinai Medical Center. “Our firsthand experience has led me to believe in the technology’s potential to achieve a significant increase in our radiologists’ productivity and accuracy. It’s a win both for our physicians and our patients. Aidoc’s AI powered solution will help our radiologists be their best, and streamline their workflow.”
For the original press release, click here
AI and its impact on the world
Actually, AI is affecting our world in a very good and efficient way. Though it is going to kill a number of different jobs and automate others, it also opens up a number of new opportunities for people. In the current world it saves us all the complex statistics and mathematics that we would otherwise have to do ourselves if we didn’t have AI to do it for us. It also gives us new capabilities: machines can now talk like humans, and someday we might have machines that are not merely machines but human-like, which could solve a number of current problems. And if you fear an AI apocalypse, you should not: if you have ever used a command line or server-side languages, or if you are a programmer, you will know by now that machines are still far dumber than humans.
| Ai and it’s impact on the world | 3 | ai-and-its-impact-on-the-world-10050686d0e4 | 2017-10-27 | 2017-10-27 10:41:44 | https://medium.com/s/story/ai-and-its-impact-on-the-world-10050686d0e4 | false | 165 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Sameep Yadav | Neural Networks,Machine learning, data mining. | cb175eceafb0 | SameepYadav | 5 | 14 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | d777623c68cf | 2016-12-24 | 2016-12-24 08:07:26 | 2016-12-24 | 2016-12-24 13:07:10 | 11 | false | en | 2018-09-01 | 2018-09-01 17:08:04 | 12 | 10062f0bf74c | 5.684906 | 80 | 2 | 0 | The model for deep learning consists of a computational graph that are most conveniently constructed by composing layers with other layers… | 4 | The Meta Model and Meta Meta-Model of Deep Learning
Credit: Inception (2010) http://www.imdb.com/title/tt1375666/
The model for deep learning consists of a computational graph that is most conveniently constructed by composing layers with other layers. Most introductory texts emphasize the individual neuron, but in practice it is the collective behavior of a layer of neurons that is important. So from an abstraction perspective, the layer is the right level to think about.
Underneath these layers is the computational graph; its main purpose is to orchestrate the computation of the forward and backward phases of the network. From the perspective of optimizing performance, this is an important abstraction to have. However, it is not the ideal level at which to reason about how it all should work.
Deep Learning frameworks have evolved to develop models that ease construction of DL architectures. Theano has Blocks, Lasagne and Keras. Tensorflow has Keras and TF-Slim. Keras was originally inspired by the simplicity of Torch, so by default has a high-level modular API. Many other less popular frameworks like Nervana, CNTK, MXNet and Chainer do have high level model APIs. All these APIs however describe models. What then is a Deep Learning meta-model? Is there even a meta meta-model?
Figure: This is what a Deep Learning model looks like.
Let's first explore what a meta-model looks like. A good example is in the UML domain of Object-Oriented Design. This is the UML meta-model:
Credit: Eclipse ATL project
This makes it clear that Layers, Objectives, Activations, Optimizers, Metrics in the Keras APIs are the meta-models for Deep Learning. That’s not too difficult a concept to understand.
Figure. Deep Learning Meta Model
Conventionally, an Objective is a function and an Optimizer is an algorithm. However, what if we think of them instead as also being models? In that case we have the following:
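To make this conventional reading concrete, here is a minimal, framework-free sketch in which the objective really is just a function and the optimizer just an algorithm; the quadratic loss and learning rate are illustrative choices, not anything from a particular framework's API:

```python
# Conventional meta-model: the Objective is a plain function,
# the Optimizer a plain algorithm -- nothing here is learned.

def objective(w):
    # Illustrative quadratic loss with its minimum at w = 3.
    return (w - 3.0) ** 2

def gradient(w):
    # Hand-derived gradient of the objective above.
    return 2.0 * (w - 3.0)

def gradient_descent(w, lr=0.1, steps=100):
    # A fixed update rule, in contrast to a learned (neural) optimizer.
    for _ in range(steps):
        w = w - lr * gradient(w)
    return w

w_final = gradient_descent(w=0.0)  # converges towards 3.0
```

A meta-learner, by contrast, would replace `gradient_descent` with a network whose weights determine the update rule, and a GAN-style discriminator would replace `objective` with a network as well.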
Figure. Make everything into networks
This definitely is getting a whole lot more complicated. The objective function has become a neural network and the optimizer has also become a neural network. The first reaction to this is, has this kind of architecture been tested before? It’s possible someone is already writing this paper. That’s because an objective function that is a neural network is equivalent to the Discriminator in a Generative Adversarial Network (GAN) and an Optimizer being a neural network is precisely what a meta-learner is about. So this idea is not fantastically out of mainstream research.
The second reaction to this is, shouldn’t we make everything neural networks and be done? There are still boxes in the diagram that are still functions and algorithms. The Objective’s optimizer is one and there are 3 others. Once you do that, there’s nothing else left that a designer needs to define! There are no functions, everything is learned from scratch!!
So a meta-model where everything is a neural network looks like this:
Figure. Deep Learning Meta-Model
Here the model is broken into 3 parts just for clarity. Alternatively, it looks like this:
Figure. Deep Learning Meta-Model
What this makes abundantly clear however is that the kinds of layers that are available come from a fixed set (i.e. fully connected, convolution, LSTM etc.). There are in fact research papers that exploit this notion of selecting different kinds of layers to generate DL architectures( see: “The Unreasonable Effectiveness of Randomness” ). A DL meta-model language serves as the lego blocks of an exploratory RL based system. This can generate multiple DL meta-model instances to optimize for the best architecture. That is a reflection of the importance of Deep Learning Patterns. Before you can generate architectures, you have to know what building blocks are available for exploitation.
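The generate-and-select idea can be sketched in a few lines of plain Python; the layer vocabulary and depth range below are illustrative assumptions, not the search space of any particular paper:

```python
import random

# A fixed vocabulary of layer kinds: the "lego blocks" an exploratory
# system can draw from when proposing candidate architectures.
LAYER_VOCAB = ["fully_connected", "convolution", "lstm", "dropout"]

def sample_architecture(rng, min_depth=2, max_depth=5):
    # Pick a depth, then a layer kind for each position in the stack.
    depth = rng.randint(min_depth, max_depth)
    return [rng.choice(LAYER_VOCAB) for _ in range(depth)]

rng = random.Random(42)
candidates = [sample_architecture(rng) for _ in range(3)]
# An RL- or evolution-based searcher would now train and score each
# candidate, then bias further sampling towards the best performers.
```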
Now, if we make a quantum leap to the meta meta-model of Deep Learning, what should that look like?
Let’s look at how OMG’s UML specification describes the meta meta-model level (i.e. M3):
https://en.wikipedia.org/wiki/Meta-Object_Facility
The M3 level has a simplified structure that only includes the class. Following an analogous prescription, we thus have the meta meta-model of Deep Learning defined by the following:
Deep Learning Meta Meta-Model
Despite the simplicity of the depiction, the interpretation of this is quite interesting. You see, this is a meta object, an instance of which is the conventional DL meta-model. These are the abstract concepts that define how to generate new DL architectures. More specifically, it is the language that defines the creation of new DL models such as a convolutional network or an autoregressive network. When you work at this level, you essentially generate new kinds of DL architectures. This is what many DL researchers actually do for a living: designing novel models.
There is one important concept to remember here, though: the instance, model, meta-model and meta meta-model distinctions are concepts that we've invented to better understand the nature of language and specification. This distinction is not essential and likely does not exist in separate form in reality. As an example, there are many programming languages that do not have a distinction between instance data and model data. Languages like Lisp are like this, where everything is just data; there is no distinction between code and data.
The idea of “code is data” applied to DL is equivalent to saying that DL architectures are representations that can be learned. We as humans require the concept of a meta meta-model to get a better handle on the complex, recursive, self-describing nature of DL systems. It would be interesting to know what the language of the meta meta-model should look like. Unfortunately, if this language is one that is learned by a machine, then it may likely be as inscrutable as any other learned representation. See: “The Only Way to Make DL Interpretable”.
It is my suspicion, though, that this meta meta-model approach, if pursued in greater detail, may be the key to unlocking “Unsupervised learning” or alternatively “Predictive learning”. Perhaps our limited human brains cannot figure this out. However, armed with meta-learning capabilities, it may be possible for machines to continually improve upon themselves. See “Meta-Unsupervised-Learning: A supervised approach to unsupervised learning” for an early take on this approach.
The one reason that this may not work, however, is that the vocabulary or language is limited (see: Canonical Patterns) and therefore “predictive learning” is not derivable from this bootstrapping method. Meta-learners today can only discover the weights, and the weights are just parameters of a fixed DL model. A discovery, even through evolutionary methods, can only happen if the genesis vocabulary is at the correct level. Evolution appears to be a Meta Meta-Model process.
There is plenty that is missing in our understanding of the language for the meta meta-model of DL. Perhaps we can discover this only if we work up the Capability levels of Deep Learning intelligence. DARPA has a program that is researching this topic “DARPA goes ‘Meta’ with Machine Learning for Machine Learning”. I hope to refine this idea over time.
See Deep Learning Design Patterns for more details or visit “Intuition Machine” to keep abreast about the latest developments.
For more on this, read “The Deep Learning Playbook”
| The Meta Model and Meta Meta-Model of Deep Learning | 271 | the-meta-model-and-meta-meta-model-of-deep-learning-10062f0bf74c | 2018-09-01 | 2018-09-01 17:08:04 | https://medium.com/s/story/the-meta-model-and-meta-meta-model-of-deep-learning-10062f0bf74c | false | 1,162 | Deep Learning Patterns, Methodology and Strategy | null | deeplearningpatterns | null | Intuition Machine | intuitionmachine | DEEP LEARNING,ARTIFICIAL INTELLIGENCE,MACHINE LEARNING,DESIGN PATTERNS | IntuitMachine | Machine Learning | machine-learning | Machine Learning | 51,320 | Carlos E. Perez | Author of Artificial Intuition and the Deep Learning Playbook — Intuition Machine Inc. | 1928cbd0e69c | IntuitMachine | 20,169 | 750 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-04-27 | 2018-04-27 06:36:48 | 2018-04-27 | 2018-04-27 06:41:01 | 0 | false | en | 2018-04-27 | 2018-04-27 06:41:01 | 0 | 1007d0d6ab91 | 2.528302 | 0 | 0 | 0 | 1. Get Executive Ownership | 3 | Top 10 Tips for the Data Science Team To Succeed
1. Get Executive Ownership
One of the important contributing factors to any project is getting executive buy-in. It is your job as a data science software manager or project manager to get your executives to believe in your mission. Without them, your project will not move forward.
2. Gain the trust of your peers
Many managers don't trust their data. They want new dashboards, data science teams, the whole nine yards. But what good is any of that if you can't even trust your data? Consider the quote from Sherlock Holmes about how data is the foundation for the building blocks of thinking. If that is true, and you don't trust the house you have built, it will fall on top of you. Get your managers to trust you and your data!
3. First implement a simple project successfully
Everyone wants to develop the next Google or Facebook algorithm. If your team is just starting out and you want them to succeed, start small. Once you get that first win under your belt, executives will be begging you to help them with everything. Then you'll need to work on making sure your team isn't bombarded with requests all the time, or at least that only the right projects are being worked on.
4. Standardize your data science procedures
Data science has quite a few cool technologies and tools that allow for great insight. However, like software engineering, even with all the cool things you can do, without processes you will fall behind on projects, make bad products and fail to finish initiatives. This means you need to document your processes. It seems like a waste of time, until you start having internal breakdowns on projects.
5. Play nicely with different departments
Every business is a team game. You have accounting, finance, operations, sales and all the other departments that your team needs to work with. They all typically have their own data warehouses, and you want that data! If you're lucky, there is one central team that manages all the databases. Even if that is true, you will still need to get data from multiple teams. In addition, all those teams will likely have some requirements for your projects, so make sure to play nice.
6. Build a prototype first for early buy-in
Build a prototype (sure, in Python)! Show your team and your manager what it can do. People want action, not just theories and words. Set up a prototype and, if you can, get real data. If you can't, then fill it with sample data, but make sure the functionality is there. Make it tangible, interactive, and actionable!
7. Design for robustness and maintainability
We can't stress this enough. Make sure whatever dashboard you build, process you put in place, or algorithm you develop is maintainable. If you leave the company tomorrow, will the project still work? Seriously! It won't if you left behind no documentation and never shared your code.
8. Get a Data Science Guide
There are quite a few data science consulting businesses that will develop a data science guide of good business practices for your team. This would require them to assess your team's current status and work with the team to understand where it could be more effective. Oftentimes this is skipped by most teams, so it is helpful to bring in outside assistance.
9. Collect as much clean data as possible
Data comes from all different sources. You can get it from internal warehouses, external APIs and pretty much anywhere. Gather as much of it as you can, and ensure it is managed and clean.
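As a toy sketch of what "managed and clean" can mean in practice (normalizing fields, dropping records with missing values, removing duplicates), using made-up records:

```python
# Made-up records, as if gathered from internal warehouses and external APIs.
raw_records = [
    {"source": "warehouse", "name": "  Alice ", "revenue": "1200"},
    {"source": "api", "name": "Bob", "revenue": None},         # missing value
    {"source": "api", "name": "  Alice ", "revenue": "1200"},  # duplicate
]

def clean(records):
    seen, result = set(), []
    for rec in records:
        if rec["revenue"] is None:
            continue  # drop records missing the metric we care about
        key = (rec["name"].strip(), float(rec["revenue"]))
        if key in seen:
            continue  # drop exact duplicates
        seen.add(key)
        result.append({"name": key[0], "revenue": key[1]})
    return result

cleaned = clean(raw_records)  # one tidy, de-duplicated record survives
```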
10. Make a decision, give an actual opinion
As a data scientist, you have power. You have data, which means you can draw conclusions with confidence.
| Top 10 Tips for the Data Science Team To Succeed | 0 | top-10-tips-for-the-data-science-team-to-succeed-1007d0d6ab91 | 2018-04-27 | 2018-04-27 06:41:02 | https://medium.com/s/story/top-10-tips-for-the-data-science-team-to-succeed-1007d0d6ab91 | false | 670 | null | null | null | null | null | null | null | null | null | Data Science | data-science | Data Science | 33,617 | jessica jessy | null | 24175eae4140 | IQOnlineTrainin | 2 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-04-08 | 2018-04-08 15:48:57 | 2018-04-08 | 2018-04-08 17:14:21 | 1 | true | en | 2018-04-09 | 2018-04-09 15:25:23 | 3 | 100a2247898 | 2.739623 | 31 | 1 | 0 | Last Friday the movie “Do you trust this computer” by Chris Paine was launched (free to watch until the end of Sunday, April 10). It is a… | 5 | Don’t trust “Do you trust this computer”
from http://doyoutrustthiscomputer.org/watch
Last Friday the movie “Do you trust this computer” by Chris Paine was launched (free to watch until the end of Sunday, April 10). It is a documentary that deals with the potential consequences of Artificial Intelligence (AI), and repeats once more Elon Musk’s often quoted warnings about the dangers of AI. In fact, a representative for Elon Musk has confirmed that Musk is bankrolling the movie’s free online release.
Unfortunately, even though it displays an impressive list of experts, the overall message is too biased and one-sided to be trusted.
In short, it is a dangerous distraction from the urgent need to act on achieving Responsible AI now!
And here are some other reasons:
It is unclear who the makers expect as the audience. It is too alarmist and dystopic for a general audience, scary even. If the aim is participatory AI, and to ensure everyone's commitment, then such a scary message will just achieve the opposite. It is time to act, not to scare; to look for solutions and work together across disciplines to get “AI for good”. This is not helpful. Great, great missed opportunity. It is time for Responsible AI, and this includes using a proper narrative and framing the problems correctly.
The lineup of experts is impressive, including several of my own ‘heroes’. However, of the 26 experts listed on the movie’s website, only 3 are women. This is a great missed opportunity for the film. There are many highly qualified female AI researchers and professionals with contributions to the field equally as impressive as, or even more impressive than, those of the experts interviewed. But most importantly, this leads to a skewed, biased view of the field (see point 3). A better representation of different views — multidisciplinary, multidimensional, gender and culturally balanced — would have led to a narrative more balanced about the risks and benefits of AI.
The way to deal with the impact of AI, about which the documentary is so concerned, is exactly to ensure, enforce and demand participation, inclusion and diversity.
The absurd underlying message that superintelligence is about winning.
True intelligence is about social skills, about collaboration and contribution to a greater good, about getting others to work with us in order to survive and prosper. There is no reason to expect superintelligence (if at all possible, see point 4) will be different. I suppose that this obsession with ‘winning’ is a male thing, especially for the generation of men appearing in the movie, who grew up playing war-like games… But as a message this is unethical. It just shows the need for all of us to stand up for participation, inclusion and diversity in AI now!
General Artificial Intelligence and narrow AI are very, very different. The movie makes a mess of this, which is inexplicable given the quality of the experts. We already have many real applications of narrow AI. But intelligence is not a one-dimensional thing, nor a cumulative one. It is not by improving on one application of AI, or by combining many different narrow AI systems, that we will get to artificial general superintelligence. Moreover, intelligence is not just about knowing; it is about feeling, enjoying, pushing limits… I often run marathons. I don’t doubt that it is possible to build a ‘running robot’, but will it ever experience, and enjoy, what it means to run a marathon, to push through the pain and enjoy it?
The “Terminator”. Really, guys??? Are you expecting anyone to take this seriously? Such a “Terminator” view of AI is misleading and unhelpful. An ethical approach to AI also means ensuring a correct view of its capabilities and increasing public awareness. I am starting to seriously wonder whether this fixation by tech corporations on dystopic views of the future is now a way for them to move public attention away from their practices and avoid regulation and corporate responsibility. Less “Terminator” and more participation and inclusion is needed. This too is AI ethics.
The movie is far too long, repetitive, boring even. The message “Responsible AI” deserved much better.
| Don’t trust “Do you trust this computer” | 233 | dont-trust-do-you-trust-this-computer-100a2247898 | 2018-04-19 | 2018-04-19 07:14:14 | https://medium.com/s/story/dont-trust-do-you-trust-this-computer-100a2247898 | false | 673 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Virginia Dignum | null | fb01d0a3bc3f | virginiadignum | 72 | 11 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-01-05 | 2018-01-05 22:34:12 | 2018-01-05 | 2018-01-05 22:42:18 | 1 | false | en | 2018-04-24 | 2018-04-24 17:09:10 | 1 | 100a259bd314 | 6.256604 | 2 | 0 | 0 | It is important to understand where we are to see what the future holds. We live in a time of hedonism, what people call a hookup culture… | 5 | The Cultural Revolution: Robots and Trust
It is important to understand where we are in order to see what the future holds. We live in a time of hedonism, what people call a hookup culture, of disposable relationships. The most likely outcome is for the hookup culture to evolve into something like the situation Japan finds itself in now, with the commodification of relationships. People will burn out once this has run its course. Humans will have to connect based on connection and the desire for children, as everything else has been parsed for profit or commodified.
There are two things that have hindered, and will continue to hinder, the hookup culture to this point: pregnancy and rape. While contraceptives have mitigated unwanted pregnancies, they are not fully effective. As to the latter subject, well, there likely isn't a solution to that.
So what happens when we add robots?
Robots and the Black Market
It is important to tackle this issue, as it will be one of the main reasons for the introduction of pleasure robots. Pandora's box will likely be opened because of this, and it will also cause a major societal shift.
The first major landmark shift will likely be pleasure robots.
The sex trade is a multi-billion dollar industry. It commodifies everything about human relationships. It is important to understand that it exists because people want it and it is illegal. While the hookup culture has devalued the price of sex, it has also made it easier for the black market to go undetected. The black market profits off the sex trade through indentured servitude: individuals are forced through debt, violence and addiction to continue hooking. Illegal operations have high costs, both implicit and explicit.
Robots will be introduced to combat this illegal market. Robots offer many advantages: they do not need food, water or sleep, and do not become pregnant. More importantly, they are not human, which means they can be considered property, which can be legally owned and mass produced. Mass production will cause the further commodification of sex, which in turn will likely drop its price to virtually zero as availability increases.
This will force the illegal trade to either compete on a comparable price point, try to compete on another level, or go out of business. Given the costs stated previously, much of the market will dry up unless a caste system emerges. While there may be a higher-end market, it will be a small niche compared to the behemoth it is currently.
Thus legalized robots will irrefutably damage the sex trade. However, the bigger consequence will be the commodification of sex.
Flipping Culture on its Head:
Widespread pleasure robots may kill the pornography industry if it does not evolve fast enough. While the industry has tried to adapt to the internet, few will continue to pay once legalized, widespread access to robots becomes available. The pornography industry will survive, however, if it adapts to augmented and virtual reality, especially if sense integration occurs.
Religions, particularly Christians, will be torn on this issue. I mention Christians specifically because their religion is what Western society stands on, due to its values, laws, history, etc. Is it adultery if it isn't human, let alone alive? Is a man or woman still chaste if only a robot has been involved? These and many other moral questions will need to be answered.
The opening of Pandora's box of robots, however, will eventually kill the hookup culture, as the prevalence of robots makes human interaction irrelevant. Men in particular, who are disenfranchised by the current culture and often drop out, will now have an outlet. These individuals will be the first, and will likely proselytize their lifestyle. To those who may respond with snark, I will point out that this is already common in Japan. What starts as a taboo often becomes a societal norm.
This will not be without its consequences. The more men who switch from traditional relationships to mechanical ones, the more women are left without partners. While some of those gaps may be filled by mechanical partners, it undoubtedly will not be at the same pace. This is due to women's higher desire for connection than men's, and men's higher desire for sex; one of which is easily fulfilled, the other much harder. Thus a glut of women will likely compete for a smaller number of men. With more options, men will become more selective. To compete for a mate, a war of escalation will occur. This, combined with technology's need for a culture of trust, will increase the likelihood that women will return to chastity until marriage.
The legalization of these robots could potentially prevent societal collapse and violence, as men and women who are unable to form social connections would now have an outlet for their unfulfilled needs. Individuals who have dropped out of society could now integrate to some degree. As the idea of pleasure robots becomes less stigmatized, more people will begin to replace real relationships with robots. Robots will evolve to fulfill these new roles, which will cause the further decline of relationships, marriage and birthrates.
Children and Robots:
Male birth control methods are primitive at best. New innovations that reduce the downsides will increase the deliberate action to have children. Unplanned children will drop dramatically. This will also reduce potential “gold diggers”. Children will be a deliberate choice, by both parties.
Thus birthrates will plummet, far more than anyone can possibly imagine. People will call it the end of humanity, however they lack vision. Artificial wombs may be the answer, as humans can be created without the need for a female host. The other answer resembles a Margaret Atwood novel, due to a crypto caste system.
Artificial wombs may also cause the idea of children to be more thought out, given the likelihood of genetic engineering. Children will be altered for optimum health. Mate selection in the future is quite likely to be based on genetics. Artificial wombs will likely lessen the maternal instinct between a woman and her potential child, which will make the child more of a commodity than a unique being, due to less of a psychological attachment. Infanticide may increase due to this lack of connection.
Maternity, as we know it today, may be reserved for the rich, as the expense of rearing a child normally will be extremely high compared to an artificial womb. It is entirely possible that a system of sperm banks, genetic engineering and artificial wombs for adoptive parents emerges, one that selects for genetic diversity to reduce the likelihood of mass extinction while removing undesirable traits.
The Outcome:
Personal robots will first be reserved for the wealthy due to their newness and complexity. So robots will operate in a manner similar to medieval brothels: designated areas, cordoned off, especially if Artificial Intelligence continues to grow at the speed of Moore's Law, and also due to the cost of ownership and maintenance. This will eventually move towards mass ownership as costs come down and the cultural stigma subsides. Crimes involving adult prostitution and sexual violence will likely drop. Poverty may increase as prostitution becomes a less viable way to earn money.
The culture will eventually accept, albeit begrudgingly, the robots as they move from pleasure to romantic companion. Women will likely choose chastity if the above situation occurs, even if they have the full cultural, legal and independent rights to do otherwise. But why? A war of escalation. While the majority may live their lives how they like, a small group will counter the rise of robots by doing the opposite of what the majority does.
This group will be more valued due to its rarity, like all rare things, especially given the abundance of pleasure. This group will be hated; however, they will be more successful in obtaining relationships, all else equal. As a result, a cultural movement will emerge.
In return, these women will want longer courtships, as they only have one chance due to their choice. Men will agree, as pleasure is bountiful. Courtship will occur again, as the value of relationships takes on new meaning. A culture of trust, similar to that of the Victorian era, will develop. Courtship and chastity will be normal. Chaperoning by robots or via the Internet of Things is entirely possible. Trust, and in turn honour, will be valued, due to the changing nature of relationships and the irreversibility of cryptographic transactions. Who you spend your time with, personally and in business, will matter more.
The irreversibility of cryptocurrency transactions will impact business. Marriage unions, the joining of families to secure alliances and business ties, may also occur once again. It sounds ridiculous now, but what is the likelihood you would rip off your family? I bet it is less likely than ripping off some random stranger.
Lineage and Dynasty two words very uncommon today will likely make a resurgence into the public consciousness and lexicon.
Conclusion:
It is difficult to predict the full extent of how robots will change human society. This article offers a brief glimpse into what it may look like. Robots will likely speed up the current culture of hedonism, which will cause an eventual reversal. There will be three main robots that change human society: pleasure robots, companion robots and artificial wombs.
One thing is for certain: they will be in every part of our lives, ubiquitous. They will often change things in ways we did not expect. It is possible men and women will play more of an active role in courtship, with the ideas of connection and children being at the heart of a relationship. If so, this would create more stable and longer-lasting relationships.
| The Cultural Revolution: Robots and Trust | 3 | the-cultural-revolution-robots-and-trust-100a259bd314 | 2018-04-24 | 2018-04-24 17:09:11 | https://medium.com/s/story/the-cultural-revolution-robots-and-trust-100a259bd314 | false | 1,605 | null | null | null | null | null | null | null | null | null | Sex | sex | Sex | 23,511 | A.l. | Persuader. Futurist. Blockchain. Sovereign Individual. https://twitter.com/Kairon01 | 8939e3e1c0ae | Kairon | 39 | 8 | 20,181,104 | null | null | null | null | null | null |
from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup as soup
from datetime import datetime, timedelta
from selenium import webdriver
import time

# ChromeDriver downloads: https://sites.google.com/a/chromium.org/chromedriver/downloads
options = webdriver.ChromeOptions()
options.add_argument('--ignore-certificate-errors')
options.add_argument("--test-type")
# Point Selenium at the chromedriver executable you downloaded above
driver = webdriver.Chrome("Your drive:\Your directory\chromedriver.exe", chrome_options=options)
my_url = 'https://www.facebook.com/'
driver.get(my_url)
# Locate the login form fields by their HTML ids
login = driver.find_element_by_id('email')
senha = driver.find_element_by_id('pass')
login.send_keys('your user')
senha.send_keys('your password')
submit_button = driver.find_elements_by_xpath('//*[@id="loginbutton"]')[0]
# Now just submit the information using the click method and voila, we're in.
submit_button.click()
| 10 | 32881626c9c9 | 2018-09-24 | 2018-09-24 00:20:04 | 2018-09-24 | 2018-09-24 00:21:16 | 1 | false | en | 2018-10-03 | 2018-10-03 14:04:42 | 4 | 100c041c1bf8 | 2.079245 | 4 | 0 | 0 | Hi everybody , this little snippet will show you how to use a selenium lib in order to make an automated web scraping you can use to… | 5 | A Little Snippet to Automate Web Scraping using Python and Selenium
“grayscale photo of dew on spider web” by Rúben Marques on Unsplash
Hi everybody, this little snippet will show you how to use the Selenium library to do automated web scraping, which you can use to analyse data, find patterns, etc.
This snippet is the first of many others; each one will show you the next step. This one shows the automated connection to a web page, in this case Facebook. The next will show you how to scrape a web page using Beautiful Soup; after that we'll download the data and keep it in a database, and so on.
According to the documentation, the selenium package is used to automate web browser interaction from Python and to build automated tests.
You can find more information in https://pypi.org/project/selenium/
Several browsers/drivers are supported (Firefox, Chrome, Internet Explorer), as well as the Remote protocol.
Supported Python versions: Python 2.7, 3.4+
For the installation you can use one of these three options:
using pip:
pip install -U selenium
You can download the source distribution from PyPI (e.g. selenium-3.14.0.tar.gz), unarchive it, and run:
python setup.py install
Finally if you’re using Anaconda:
conda install -c conda-forge selenium
The first thing we need to do is to import the libraries we’ll use in this snippet.
In this case, for this first step, the most important one is selenium, which we'll use to make the automated connection.
After importing the libs, in order to run the code, we need to choose the correct driver to use.
Selenium requires a driver to interface with the chosen browser.
Here, we'll use Chromium, but many others can be used.
You can find the driver here:
You can find more information on the Selenium project's page.
With the Chrome driver installed, we need to set some options in order to run it.
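As a hedged sketch (the driver path, the option flags, and the helper names here are assumptions, not taken from the original notebook cells), the driver setup might look like this:

```python
def chrome_arguments(headless=True):
    """Pure helper: command-line switches to start Chrome with."""
    args = ["--disable-gpu", "--no-sandbox"]
    if headless:
        args.insert(0, "--headless")  # run without opening a window
    return args


def make_chrome_driver(driver_path="./chromedriver", headless=True):
    """Sketch: build a Chrome driver with the options above.

    Needs selenium installed and a chromedriver binary at driver_path;
    the import is done lazily so the helper stays importable without it.
    """
    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options

    options = Options()
    for arg in chrome_arguments(headless):
        options.add_argument(arg)
    return webdriver.Chrome(executable_path=driver_path, options=options)
```

Note that `executable_path` matches the selenium 3.x API that was current when this article was written; in selenium 4 the driver path moved to a `Service` object.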
The next step is to set the URL we'll use and fetch it with the driver.
In our case we have a form to fill in order to access the web page, so we need to get the HTML ids of the respective fields. It's easy to find them using driver methods like find_element_by_id or find_elements_by_xpath.
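A sketch of the form-filling step follows. The element ids "email", "pass" and "loginbutton" are assumptions chosen to match Facebook's login form at the time; check the page source for the real ids of whatever form you target.

```python
def log_in(driver, url, email, password):
    """Sketch: open a login page, fill the form by element id, and submit."""
    driver.get(url)                                    # load the login page
    email_field = driver.find_element_by_id("email")   # id assumed
    pass_field = driver.find_element_by_id("pass")     # id assumed
    email_field.send_keys(email)
    pass_field.send_keys(password)
    driver.find_element_by_id("loginbutton").click()   # id assumed
```

The same lookup could be done with find_elements_by_xpath when a field has no id.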
In the next topics we'll learn how to get the data using Beautiful Soup, store it in a database, and analyse it using tools like pandas, matplotlib, sklearn, etc.
Enjoy the code, improve it if you want!
See you!!!!
Product Release Wrap-up July
Our new Unleash live Release Cozumel (v1.15) has just arrived with many new features and some bug fixes.
Take off for a flight at https://cloud.unleashlive.com
Here is the detailed run down:
Enhanced Features:
HD live video streaming latency decreased by another 20%. Benchmarking shows we are now about 50–80% faster than a typical YouTube or Facebook live stream. Read here for more details
Refreshed A.I. live in-stream UI overlays.
Added in-stream video A.I. object count analytics.
3D Modelling jobs allowance increased from 250 to up to 500 images on all Business subscriptions. Contact us for even higher allowances.
Full screen Point cloud and 3D Model view for more immersive showcases.
Point cloud tools menu updated with enhanced measurements and rendering options.
Additional browser theme options for rich charcoal titanium background or bright white for 3D models.
More fluid touch and mouse interaction for 3D models to inspect any model location. Pan/tilt, pinch/zoom, rotate.
For even faster browser navigation, we added advanced model controls, enabling different lighting for models and low, med, high resolution of models.
Quick view of latest media library items.
Expanded inventory of user guides with detailed workflow steps and Youtube videos.
Enhanced sharing functionality of VR models.
Several new A.I. inference models from various 3rd party developers are available for testing in connected HD live streams. This is still an experimental feature. For example: improving track inspections with automation.
A.I. developer sandbox features updated.
Bug Zapper:
Several users reported issues with Google sign-in on older Chrome browser versions.
Some users reported issues with lack of thumbnails on older Safari browser.
Linked Unleash live Youtube user guides sometimes did not start playing with certain privacy settings in Chrome.
Several Android 6 and iPhone 11 stability fixes.
AI Saturdays by AiDevNepal: A Review from a Participant, by Raisha Shrestha
AI Saturdays, a global event conducted by AiDevNepal, has been very successful to date. It has been heading forward with the motto "Learn, Share and Grow together". It is a great opportunity for learners to be part of AI Saturdays and learn about Artificial Intelligence (AI) from well-experienced mentors who are professional AI developers in Nepal. AiDevNepal has taken a great step by taking the initiative to conduct this global event "AI Saturdays" in Nepal, and has enlightened a number of AI enthusiasts by giving them the opportunity to get involved in these workshops.
I myself, being a member of the workshop, would love to share the experience I gathered. The first workshop involved interaction and knowledge sharing with well-known, experienced professionals in the field. In the later workshops we learnt the basics of AI, the tools used for AI implementation, and basic libraries and functions. Then, in the following workshops, we learnt the implementation of AI; we implemented a number of things like decision trees and deep learning, and dealt with examples which fall under these categories. We are still in the process of learning, but we got sound knowledge of topics about which we previously had only surface-level information. I am glad I got to be a part of these workshops and learnt this much.
From a very surface level, we rise a step ahead in each workshop. This makes us very enthusiastic to learn more in the field of AI. As a result of this enthusiasm, we were working on our AI project, an assignment given to us by our mentors, even on 1st March, when Holi is celebrated in Nepal. Instead of playing Holi, people were working on their code to get more accuracy in their AI projects. This shows a great development of interest, and AiDevNepal deserves a round of applause for being able to enlighten people with knowledge of AI and make them more enthusiastic about the field.
To date, 6 workshops have been conducted along with 2 interactive AI meetups. A number of workshops are yet to come, and all of us are very excited to learn further. The organising team always encourages us to learn, share and grow together, so the entire AiDevNepal team, including the organisers and participants, shares a lot of knowledge. We discuss our confusions and share discoveries or helpful tutorials in our Facebook group "DN: AI Developers Nepal" or "AiDevNepal". In this way we truly learn, share and grow together.
The day when all 14 workshops of AI Saturdays have been completed will be a day of pride for all of us. We participants will always try to share the knowledge gained from AiDevNepal by staying associated with AiDevNepal itself. We shall try to live up to the motto "Learn, Share and Grow together" by implementing it fully and making AI successfully established in Nepal some day. As a very good initiative has already begun and a number of enthusiasts are being enlightened, that day is not too far. Cheers to AiDevNepal for this great initiative.
AiDevNepal has prepared a number of materials for the workshop, which are also available at the GitHub link mentioned below. Everyone is free to use the materials but is requested to credit AiDevNepal whenever they are used for knowledge sharing. You can also subscribe to AiDevNepal on YouTube and watch informative videos related to AI.
Website of AiDevNepal : https://aidevnepal.github.io/
Github Link : https://github.com/AiDevNepal
Youtube Channel Link: https://www.youtube.com/channel/UChk69vbMbxBPRutHpcDfe0Q
Originally published at medium.com on March 1, 2018.
Is the age of theory over as Machine Learning emerges?
https://www.linkedin.com/pulse/age-theory-over-machine-learning-emerges-sam-ghosh/
AI based UI Development (AI-UI)
Artificial Intelligence (AI) is currently one of the most popular topics in the industry, with seemingly endless applications in everything from matchmaking to self-driving cars. The most disturbing claim we hear about AI is that it will result in massive job losses across industries. Can AI also affect IT jobs? If so, which skills will be impacted? When? How? These are questions every software engineer must be asking.
Creative designers or business users come up with UI (User Interface) ideas for an application or website on a sheet of paper, on a whiteboard, or on their fancy graphics tablet. It is the job of a UI developer to convert the design idea or wireframes into a working UI while keeping the creative design intent in mind. This is one of the most complex, time-consuming steps in the software development process. In this article, we will see an interesting example of applying AI to UI development. We will try to understand it by comparing it with the human learning process and (over)simplifying the technology behind it.
Typical hand drawn design for a UI
Mimicking our eyes and brain
As children, we learn to observe and label the things around us. The learning happens through feedback provided by our parents and others. Our brain gets trained to look for a pattern, texture, colour or size in an object to identify it. In AI, the Convolutional Neural Network (CNN) is a class of deep neural network very effective at recognizing the objects in a given image.
The basic idea behind a CNN is to look for shapes or patterns, with the help of various filters, in small parts of the image one at a time. The figure below shows two filters being applied to look for slanted lines. Based on the filter results, features are extracted. Finally, by voting over the extracted features, the algorithm can conclude which objects are in the image.
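To make the filtering step concrete, here is a toy, library-free sketch (the 4x4 "image" and the diagonal filter are invented for illustration): sliding the filter over the image and summing the element-wise products yields a response map, and a strong response means the pattern was found at that location.

```python
def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a small kernel over a list-of-lists image."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            # Element-wise product of the kernel and the patch under it.
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A tiny "image" containing a slanted line of 1s (top-left to bottom-right).
image = [
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
]

# A filter that looks for that slanted line: it responds strongest (3) only
# where a 3x3 patch contains the full diagonal.
diag = [[1, 0, 0],
        [0, 1, 0],
        [0, 0, 1]]

response = conv2d(image, diag)  # → [[3, 0], [0, 3]]
```

A real CNN learns many such filters from data instead of having them hand-made, and stacks them in layers, but the sliding-window arithmetic is the same.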
Describing the image
The child starts uttering a single-word label for each identified object, such as 'ball'. Soon she will also learn to identify the relationships between the identified objects and describe them in short sentences such as 'a red ball and a brown bat are on the lawn'. The learning happens through a cycle of trial and error.
In AI, constructing sentences from word labels for a given image is the job of LSTM (Long Short Term Memory) networks. This process is called image captioning.
Below are some examples of AI based image captioning. More such examples are at http://cs.stanford.edu/people/karpathy/deepimagesent/
Image captioning is achieved by appending an LSTM network to the CNN discussed earlier. LSTMs are very effective at language-related tasks because of their unique property of referring to their previous outputs. An LSTM generates one word at a time. The next word is decided based on its inputs, but also on the previous words generated. E.g., in the sentence 'My name is John.', you can say 'John' only if the three earlier words were 'My name is'. The sequence of words forms a sentence. Like any other neural network, an LSTM goes through learning to build sentences.
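The word-by-word idea can be mimicked with a toy next-word table. A real LSTM learns these transitions from data and also conditions on the image features; the hand-made table below is purely illustrative.

```python
# Hand-made transition table standing in for a trained language model.
next_word = {
    "<start>": "My",
    "My": "name",
    "name": "is",
    "is": "John.",
}

def generate(table, max_len=10):
    """Emit one word at a time, each choice conditioned on the previous word."""
    words, current = [], "<start>"
    while current in table and len(words) < max_len:
        current = table[current]   # next word depends on what came before
        words.append(current)
    return " ".join(words)

sentence = generate(next_word)     # → "My name is John."
```

Generation stops when the model reaches a word with no continuation, which plays the role of an end-of-sentence token here.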
UI Development Process
Typically, UI development happens through the following steps:
Creative designers or business users of the application like to hand-draw their UI design ideas on a whiteboard, a graphics tablet, or even a piece of tissue paper.
The designer uses a wireframing tool on a computer to create the same design again. This is a redundant step.
UI developers then translate the wireframes into working UI code. The developers and designers go through an iterative process until the expected UI is built. This step is time-consuming and repetitive.
AI based UI development
What if the hand-drawn design idea were directly translated into a working UI? AI can do this. Below is an example.
UI generated with pix2code
In image captioning, AI describes objects (such as a dog or a horse) in a scene and builds an English sentence describing the objects and their relationships with each other.
In the case of UI code, the UI design is like a scene, but instead of a dog and a horse it contains UI objects like buttons and sliders. Instead of the English language, the objects are described in UI code. The UI code has a limited vocabulary (such as button, slider), and the relationships between objects are described with a few more words (such as position, hierarchy). Thus UI code generation can be considered a specific use case of image captioning.
UI code generation goes through two stages.
Training Stage:
Imagine a child (child_1) learning to look at many UI images and create a list of the UI objects in each image. Another child (child_2) learns to read the descriptive code for the same UI. A third child (child_3) learns to find the relationship between child_1's and child_2's learning. Together they learn to observe an image and create the corresponding UI code.
A CNN takes the role of child_1, an LSTM that of child_2, and another LSTM that of child_3. (For a complete technical explanation, refer to the link for the pix2code paper at the end of the article.)
Sampling Stage:
The trained model is now ready to process a hand-drawn GUI drawing. The code context is updated for each prediction to contain the last predicted token. The resulting sequence of DSL tokens is compiled to the desired target language (e.g. Android, iOS, HTML) using traditional compiler techniques.
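The sampling loop can be sketched as follows. The predictor here is a stub, and the DSL token names and HTML mapping are invented for illustration; pix2code's real model predicts each token from the image features plus the token context.

```python
def sample_dsl(predict, context_size=3, max_tokens=20, end="<END>"):
    """Generate DSL tokens one at a time, feeding each prediction back in."""
    context = ["<START>"] * context_size
    tokens = []
    while len(tokens) < max_tokens:
        tok = predict(context)
        if tok == end:
            break
        tokens.append(tok)
        context = context[1:] + [tok]   # context updated with the last token
    return tokens

# Stub predictor standing in for the trained model: it always emits this script.
script = iter(["header", "btn-active", "row", "btn-green", "<END>"])

def stub_predict(context):
    return next(script)

# Traditional compiler step: map DSL tokens to a target language (HTML here).
html_for = {"header": "<nav/>", "btn-active": "<button class='active'/>",
            "row": "<div class='row'/>", "btn-green": "<button class='green'/>"}

dsl = sample_dsl(stub_predict)
html = "".join(html_for[t] for t in dsl)
```

Swapping the token-to-markup table is all it takes to target Android or iOS instead of HTML, which is why the DSL sits between the model and the target language.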
Benefits of AI-UI
For designers and developers, an AI-based solution saves critical time early in a project through rapid prototyping, boosts iteration cycles, and eventually enables the development of better apps.
They will save on all the trivial, repetitive and redundant tasks.
It also allows designers and developers to focus on what matters most: bringing value to the end users.
The entry barrier to building apps will become really low. Learning to use a UI design tool takes time; learning to code takes even more. However, everyone can draw a UI on paper. This will allow your grandma to go from an idea to a working UI running on her phone in a matter of seconds.
Current and future state
As of now only a few AI-based UI development products (e.g. Uizard) are being developed, and they have not yet reached the maturity to replace human UI developers. Still, they are good as assistants for any UI developer. In coming years we may see new approaches and improved AI products, where this assistant takes over the role of the experienced UI developer. It's time for UI developers to look at the changing trends and get ready for reskilling.
Still, many of us may think that generating UI code from a designer's drawings is one thing, but AI itself cannot come up with its own creative UI designs. We still need artists and creative designers, right? Maybe wrong! Generative Adversarial Networks (GANs) and Creative Adversarial Networks (CANs) have proven able to generate art, sometimes better than humans. We will discuss this in another article.
References
pix2code: Generating Code from a Graphical User Interface Screenshot by Tony Beltramelli https://arxiv.org/pdf/1705.07962.pdf
Deep Visual-Semantic Alignments for Generating Image Descriptions by Andrej Karpathy, Li Fei-Fei http://cs.stanford.edu/people/karpathy/cvpr2015.pdf
Grow your learning @ incentivetheory.com
Can Technology Replace Record Companies?
The music industry is a business theorist's dream. It is a great case study because, unlike many industries, it faces almost all the possible issues a market could face (regulatory hurdles, high visibility, fast-paced innovation, low barriers to entry, high barriers to sustainability). It reminds me of an internet-accelerated version of the disk drive industry of the 1980s.
In the wake of my recent article, a few people have shared articles about the emergence of the company United Masters from stealth mode. For those of you who don't know, before I started at Sonos, I briefly worked on an application that competed in a similar space. From this experience, I learned that music analytics is a very competitive space with a laundry list of modularized niche suppliers (Swift & Next Big Sound for streaming analytics, MAX for pairing artists for social analytics, WAVO for tour ads). Things to consider…
The Customer Experience
As noted in my last article, the record label's customer is the artist, not the music listener. A company's customer experience really matters. Taking all the inefficiencies out of the supply chain might actually hurt a record company's ability to win business. If an independent artist gets big enough, they don't want to be treated like a commodity user of a technical platform; artists and their teams are willing to pay for a luxury experience, especially if the experience comes at the expense of uncertain future gains.
People are Loss Averse
In economics, this is known as Prospect Theory, which states that people are loss averse, meaning they perceive losses more strongly than gains. If a small artist or team doesn't go with a big label now, they may not have the opportunity to go with them in the future: a potentially big loss if the artist fails. If United Masters can quantify and mitigate the monetary value lost when an act succeeds, they could persuade some managers to accept the risk and cost associated with staying independent. Clearly articulating a value proposition built on theoretical future gains is difficult.
Conflict of Interest
In management theory, this is known as the Realtor Effect. If a realtor is selling a house, they only get a small percentage of the sale. Instead of fighting for incremental gains for the seller, the realtor's time is better spent finding other houses to sell, regardless of whether the extra work is in the seller's best interest. People on the business side of an artist's career are incentivized to have the artist work with a larger record company, because their return per hour of work is more attractive: they spend less time working with an act but still see attractive financial returns. The business person's time is better spent finding and signing new acts.
Platform
It's extremely difficult to build a platform based on assets you don't own. Surviving in another company's supply chain is hard, because you are ultimately beholden to the owner of the content. United Masters doesn't own the streaming services' data or distribution network.
To The Point
If someone is going to disrupt the market with a low-end disruption, it would be an industry insider like United Masters' founder. There are two reasons:
A longtime industry insider is the only person familiar enough with the inefficiencies that are actually crucial to winning business.
Low-end market disruptions work in a B2B environment. Industry insiders know how to speak to the decision makers on the business side of an artist's career, which makes them more qualified to explain the value proposition.
Conclusion
If you read my article closely, you'll notice I don't take a stand on the viability of United Masters. I don't think "Do you think the company will succeed?" is the right question; I can't tell you if the company will succeed. I can tell you how the incentives work. However, just because the incentives line up for the company does not mean it will succeed. If a company understands the incentives, it can make the right decisions, but that is only half the battle. The ability to design and implement creative solutions that leverage these incentives is what separates success from failure.
If you enjoyed this, don't forget to click and hold the 👏 so other people can find the article. Incentive Theory is a publication that focuses on data science and direct-to-consumer strategy.
Session "Ask About: AI and Diagnosis" at the Annual Meeting 2018 of the World Economic Forum in Davos, January 23, 2018. Copyright by World Economic Forum
AI & Blockchain Predictions in Davos: Myth or Reality?
During this year’s World Economic Forum Annual Meeting in Davos, I had the privilege of spending one week with the world leaders in business, government and civil society, discussing predictions about technology and politics. Three months after that intense week spent in the Magic Mountains, now that the snow is melting, have those predictions become reality?
The coverage of the Annual Meeting of the World Economic Forum 2018 was dominated by tech and innovation, in addition to Mr. Trump’s attendance, of course. Tech issues dominated the scene and the discussion both on social media (source: Brunswick Insight) and at the event. The conference’s agenda is a roadmap to key digital transformation technologies empowering the “Fourth Industrial Revolution”, a disruptive economic and societal concept introduced at the event in 2013, predicated on the confluence of physical, digital and virtual technologies. Five years later, the embracing of digital transformation, Artificial Intelligence and blockchain were among the most discussed topics in Davos.
1. Artificial Intelligence (AI)
AI is the new technological frontier over which companies and countries are vying for control, especially the US and China. According to the latest report from McKinsey, Google’s parent company Alphabet invested roughly $30 billion in developing AI technologies. Baidu, the Chinese search giant, invested $20 billion in AI.
AI has been on the scene for many years, and it’s now evolving so fast that it is going to change our lives. Google’s CEO Sundar Pichai compared artificial intelligence to the discovery of electricity or the mastery of fire, describing it as “probably the most important thing humanity has ever worked on.” “Even more than fire, as steam AI will act as a multiplier of human work,” reinforced Christian Lanng, CEO at Tradeshift. The role of AI to leverage the power of data is key. For me it was an honor to be part of the panel for the SOLVER Series in Davos to discuss data for healthcare. With Kees Aarts, Beth Weesner and Olivier Ouiller we discussed how data from different businesses can be used to better serve people’s lives and revolutionize preventive tech. AI combined with IoT will change the rules of the game completely.
Davos 2018 Prediction: Artificial Intelligence & Tech Geopolitics
The most mentioned business leader was George Soros, the investor and chairman of Soros Fund Management, who made headlines with his speech about tech geopolitics. “It is only a matter of time before the global dominance of the US IT monopolies is broken. Davos is a good place to announce that their days are numbered,” predicted Mr. Soros, as tech giants “are poised to dominate the new growth areas that artificial intelligence is opening up.”
China’s proportion of global AI startup funding as a percentage of dollar value. Image: CB Insights.
Three Months Later: Reality
China has taken the crown in AI funding, overtaking the US: a Chinese facial recognition surveillance company is now the world’s most valuable AI startup. In April 2018, SenseTime Group has raised funding from Alibaba and other investors at a valuation of more than $3 billion, becoming the world’s most valuable artificial intelligence startup. “In China there is an advantage in areas like facial recognition because of the privacy that exists in the U.S. and elsewhere in the EU, and some of the very best facial recognition technology in the world that I’ve seen is in China,” said Breyer Capital founder Jim Breyer, an indirect investor in SenseTime through IDG.
2. Blockchain & Crypto
Everything this year in Davos was about cryptocurrencies and Bitcoin. While in 2017 the event organized by WISeKey on “Blockchain and the Internet of Value” was an exclusive meeting with 300 delegates where the leading Blockchain expert Don Tapscott presented his book Blockchain Revolution, this year Carlos Creus Moreira, founder and CEO of WISeKey, was assaulted by a huge crowd in Davos. And blockchain came up in one panel discussion after the next. Everyone was excited about blockchain technology, naturally. And even more so about Bitcoin. Bitcoin value is ten times the value of the previous year, so this is not a surprise. What is behind Bitcoin and other crypto? Blockchain is a shared ledger technology that powers cryptocurrencies but also allows encrypted data on anything from money to medical records to be shared between companies, people and institutions. This protects data from fraud while instantly updating all parties concerned. There is an incredible number of businesses outside of cryptocurrencies that are leveraging blockchain and that will change the way we work dramatically.
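The core ledger idea behind blockchain can be sketched in a few lines (a toy hash chain using only the standard library, not a real distributed protocol): each block stores the hash of its predecessor, so tampering with any record invalidates every block after it.

```python
import hashlib

GENESIS = "0" * 64  # reference hash for the first block

def make_block(data, prev_hash):
    """A toy block: the record plus a hash linking it to its predecessor."""
    digest = hashlib.sha256((prev_hash + data).encode()).hexdigest()
    return {"data": data, "prev_hash": prev_hash, "hash": digest}

def build_chain(records):
    chain, prev = [], GENESIS
    for data in records:
        block = make_block(data, prev)
        chain.append(block)
        prev = block["hash"]
    return chain

def is_valid(chain):
    """Recompute every link; any edited block breaks the chain after it."""
    prev = GENESIS
    for block in chain:
        expected = hashlib.sha256((prev + block["data"]).encode()).hexdigest()
        if block["prev_hash"] != prev or block["hash"] != expected:
            return False
        prev = block["hash"]
    return True

chain = build_chain(["pay Alice 5", "pay Bob 3"])
assert is_valid(chain)
chain[0]["data"] = "pay Alice 500"   # tamper with an early record
assert not is_valid(chain)
```

Real blockchains add consensus, signatures and replication on top of this linking trick, which is what lets untrusting parties share one ledger.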
Davos 2018 Prediction: Blockchain & Crypto
While the potential of blockchain, the underlying technology behind cryptocurrencies, was praised, bitcoin got slammed. “Bitcoin is a fraud,” a statement made by Jamie Dimon, CEO of JPMorgan Chase, raised many discussions, and in Davos he stated, “Cryptocurrency: it’s not my interest.” “There is no intrinsic value for something like bitcoin so it’s not really an asset one can analyze. It’s just essentially speculative or gambling,” reinforced Stephen Poloz, the governor of the Bank of Canada.
Copyright by World Economic Forum
Three Months Later: Reality
We all know that after Bitcoin almost hit 20,000 USD in December 2017, it had a big drop, and the decline continued after Davos: the BTC-USD rollercoaster is now between 7,000 and 9,000 USD.
Orbis Research has just released its new report, “Blockchain Technology Market Forecasts, 2017–2025”: the blockchain technology market, valued at approximately USD 350 million in 2016, is anticipated to reach up to USD 10.5 billion, growing at a lucrative rate of more than 50% over the forecast period 2017–2025. The market’s growth is attributed to the increasing penetration of cryptocurrency and ICO, and to the growing adoption rate of blockchain-as-a-service, blockchain to enable faster transactions. Moreover, the rising rate at which the blockchain technology is being adopted for payments, smart contracts and digital identities is creating significant opportunities for the global blockchain technology market.
The bottom line? Blockchain and crypto can no longer be ignored. Banks are calling on regulators to tackle the new crypto-markets such as ICOs quickly. “We can’t deny that things are changing,” says Benoit Legrand, chief innovation officer at Dutch bank ING. “The world will include cryptocurrencies in the way we work in the next ten years.”
Conclusions
Three months after the event, the big Davos predictions about AI and blockchain have not only become reality but have also accelerated. We still don't know what the future will look like, what will work or how it will work. But there is no doubt that everyone is rushing to get ready for this evolution. Companies, sectors and countries are running a critical race to invest in and discover the best technologies to leverage AI and blockchain. The time is now.
What were your predictions? Have they become reality? Comment below with your perspective or connect with me here.
About the Author: Giulia Zanzi is passionate about combining IoT and mobile technologies with science to improve people’s lives. As Head of Marketing Fertility in Swiss Precision Diagnostics, a Procter & Gamble JV, she led the launch of the first Connected Ovulation Test System that helps women to get pregnant faster by detecting two hormones and syncing with their phone. A former member of the European Youth Parliament, Giulia is currently serving on the Advisory Council of the World Economic Forum Global Shapers and she is a Lean In Partner Champion. Giulia graduated with honors at Bocconi University in Milan and holds a Masters at Fudan University in Shanghai.
| AI & Blockchain Predictions in Davos: Myth or Reality? | 0 | https-medium-com-giuliazanzi-ai-blockchain-predictions-in-davos-myth-or-reality-100fd9450b25 | 2018-04-17 | 2018-04-17 06:04:58 | https://medium.com/s/story/https-medium-com-giuliazanzi-ai-blockchain-predictions-in-davos-myth-or-reality-100fd9450b25 | false | 1,236 | null | null | null | null | null | null | null | null | null | Bitcoin | bitcoin | Bitcoin | 141,486 | Giulia Zanzi | Passionate about combining IoT and mobile technologies with science to improve people’s lives. Head of Marketing Fertility @Clearblue @P&G JV @GlobalShapers | 83dbd5d9a6d5 | giuliazanzi | 30 | 223 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-05-06 | 2018-05-06 11:13:25 | 2018-04-22 | 2018-04-22 00:00:00 | 1 | false | en | 2018-05-06 | 2018-05-06 11:18:59 | 1 | 101080780ce7 | 1.90566 | 0 | 0 | 0 | To understand the sea change currently happening in the world of manufacturing, it is important to look at the historical perspective. We… | 5 | A Historic Phase Change in the Way We Build Things
Pavilion of a Chinese construction company at the 2015 World Expo in Milan / Photo by author
To understand the sea change currently happening in the world of manufacturing, it is important to look at the historical perspective. We can split history into the pre-industrial epoch, the time after the Industrial Revolution, and a new era, that we are currently entering.
Until the 19th century, the production of goods was a manual process. Even though craftsmen sometimes had simple machines at their disposal, each item was built by hand and became one individual, often unique, object.
This changed during the Industrial Revolution, which caused a dramatic shift to the manufacturing of large quantities of identical items. Many objects became standardized and the focus moved to the assembly of objects from as many off-the-shelf parts as possible, while trying to minimize the number of custom components and manual work. Engineers constantly strived to reduce complexity to bring down cost.
With the advent of Additive Manufacturing on an industrial level, we are adopting a new paradigm, where complex, highly customized objects are becoming the norm. A printer aggregates small pieces of matter according to a blueprint, indifferent to the simplicity or complexity of the instructions. The resulting object can almost be arbitrarily sophisticated with little impact on cost and manufacturing time.
3D printers were first used for applications like rapid prototyping, where fast turnaround times allowed designers to work iteratively. With the introduction of better materials and the increased sophistication of the output, printers started to be used in highly individualized end-product manufacturing, such as prosthetics.
This shift to Additive Manufacturing of end-use parts is starting to give designers and engineers newfound freedom to design objects that cannot be produced through traditional manufacturing. In these applications, the additive aspect of the printers is the key element to the production of completely enclosed parts, or objects that use complex internal substructures to reduce weight or that contain functional elements. This transition is going to speed up in the coming years as printers start to include multiple diverse materials and are able to incorporate the placing of electronics, sensors and actuators into the printed product. The results will be highly sophisticated objects with little or no assembly required.
With this phase change happening, the focus now shifts to the software side, which is the key element to enabling objects of significantly higher complexity.
— — — — — —
Lin Kayser is the CEO of Munich-based Hyperganic, where he and his team are reinventing how we design and engineer objects in an age of digital manufacturing and synthetic biology.
— — — — — —
This article was originally published on LinkedIn on April 22, 2018
| A Historic Phase Change in the Way We Build Things | 0 | a-historic-phase-change-in-the-way-we-build-things-101080780ce7 | 2018-05-06 | 2018-05-06 11:19:00 | https://medium.com/s/story/a-historic-phase-change-in-the-way-we-build-things-101080780ce7 | false | 452 | null | null | null | null | null | null | null | null | null | 3D Printing | 3d-printing | 3D Printing | 9,416 | Lin S. Kayser | Serial Entrepreneur - Speaker - Environmentalist. Working on the future of manufacturing. linkayser.com | 27d642e31d85 | linkayser | 10 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-12-04 | 2017-12-04 11:46:58 | 2017-12-04 | 2017-12-04 13:44:49 | 7 | false | en | 2017-12-04 | 2017-12-04 15:16:58 | 7 | 101092e5c3bc | 4.416038 | 21 | 1 | 0 | “The historian is a prophet looking backwards.” ― Friedrich Schlegel | 5 | Five tech trends that shaped 2017
“The historian is a prophet looking backwards.” ― Friedrich Schlegel
This post was originally published on VC Cafe. As we approach the last stretch of 2017, I wanted to take stock of the tech trends that shaped our year. In the next post, I’ll cover my predictions for 2018.
1. Decentralisation
Perhaps the most impactful trend this year is the proliferation of Blockchain technologies and cryptocurrencies into the mainstream.
On the Blockchain front we’ve seen a wide array of potential applications from real estate to art dealing and diamond trade.
On Crypto, we moved from Voice Over IP to Money Over IP, and saw Bitcoin cross the $10,000 line. ICOs (initial coin offerings) became a ‘thing’ — tokenise everything, with people spending over $1M to buy virtual cats with Ethereum on CryptoKitties (it’s been acquired since).
For a second it looked like White Papers were replacing the fundraising deck, with startups that struggled raising traditional funding completing multi millions ICOs seemingly overnight. Unfortunately, a large percent of ICOs feel like a potential scam, money gets taken off the table quickly and almost with no supervision, and with the only collateral at risk being reputation (in some cases, not even that).
This is just the beginning in my opinion, but regulation is likely to step in here very soon.
2. AI is the new UI
The hype around AI reached new heights in 2017. Using a decision tree to apply a set of rules, or operating a chatbot, doesn’t necessarily qualify as using AI, but it is now almost impossible to find a tech startup without some form of machine learning, deep learning, NLP, etc.
As a field, AI made major breakthroughs this year, namely DeepMind’s AlphaGo decisive victory over the Go world champion, and then the improved version AlphaGo Zero, which was self taught and even better.
There’s no doubt that AI will continue to penetrate entire industries, in particular Automotive (self driving vehicles), robotics, drones, healthcare and marketing tech — from advertising to customer service.
Another aspect of the rise of AI is the infrastructure side: new chips from Nvidia, Google and Graphcore to fuel our growing need for fast data processing.
The Artificial Intelligence Index 2017, a Stanford report by AI Index (pdf) has some fantastic nuggets on the number of AI academic papers published, the number of enrolled students into AI courses, the growth rate of AI startups etc.
3. Data is the new oil
There’s one big problem with the perception of data being the new oil, the CEO of a successful AI startup told me. Large corporates are sure they are sitting on an oil field, and so spend millions to pour their data into expensive data lakes, only to find that it’s hard to refine that crude oil (took the analogy all the way, I guess). Organisations are simply ‘sitting’ on their data, or paying for unproven expensive solutions. We are producing more data than ever in human history, and are getting better at understanding the patterns and the meaning of that data, but there’s still a lot of friction in getting that data and using it wisely.
For example, researchers can now predict the face of a person based on a tiny sample of DNA. We are able to predict which customers will churn or upgrade simply by watching a small sample of their behaviour, and soon, we should be able to predict where/when a crime is about to happen, by applying models to surveillance data and past crime statistics.
Where do we draw the line? Ethical considerations are becoming a major part of big data and machine learning startups, with several companies and industry bodies formed to tackle these questions.
4. Cyber is here to stay
Almost no week goes by without the headline of a major hack. It seems the cyber security industry will only get bigger as more and more devices get online, from our cars to our appliances.
We saw the rise of ‘Dark Marketing’, where advertisers are able to target individuals based on increasingly granular attributes (including race, religion, beliefs) and as Prof Scott Galloway said, “weaponise Facebook” as a platform to change public opinion.
Israeli startups attracted about 20% of the global funding for the security sector and saw the IPO of Forescout, reaching an $897M market cap.
5. GAFAM
5 companies now dominate tech (Google, Apple, Facebook, Amazon and Microsoft), or 7 if you add Alibaba and Tencent. Their power in the market is almost absolute: for example, 99% of digital advertising growth is going to Facebook and Google. Just look at the size of Amazon compared to ALL OF RETAIL.
Their power is creating a public backlash — calling for tighter regulation on these companies dealings with privacy, data transparency and competition scrutiny.
It’s also getting increasingly hard to find a niche to compete with these giants, as they expand into every major area, from cloud, messaging, hardware, enterprise, etc., adopting an AI-first strategy. As an example, take a look at everything that Amazon announced at AWS re:Invent 2017.
In my next posts I will cover additional trends that dominated 2017, including Fake News, the Seed Slump, digital health, etc as well as some predictions for 2018. In the meanwhile, take a moment to sign up to my newsletter.
| Five tech trends that shaped 2017 | 102 | five-tech-trends-that-shaped-2017-101092e5c3bc | 2018-04-11 | 2018-04-11 05:48:34 | https://medium.com/s/story/five-tech-trends-that-shaped-2017-101092e5c3bc | false | 892 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Eze Vidra | Managing Partner at Remagine Ventures. Founder of Techbikers, Campus London and VC Cafe, proud Xoogler. Non exec director at Chargifi and UK Israel Business. | efb489ee7fe9 | ediggs | 7,926 | 1,389 | 20,181,104 | null | null | null | null | null | null |
0 | Predictions = Matrix.Multiply(Weights, Inputs)
Error = Matrix.Substract(Predictions, Outputs)
ΔR = ΔS x Transpose(P)
ΔP = Transpose(R) x ΔS
| 3 | 83d07eece3c7 | 2018-05-31 | 2018-05-31 18:44:52 | 2018-06-01 | 2018-06-01 11:21:17 | 8 | true | en | 2018-06-01 | 2018-06-01 15:49:19 | 5 | 1011884df84 | 5.348428 | 6 | 1 | 0 | In a previous post, we explained the basic principles behind back-propagation and how Neural Networks work. In this post, we will explain… | 5 | Vectorized implementation of back-propagation
In a previous post, we explained the basic principles behind back-propagation and how Neural Networks work. In this post, we will explain how to leverage optimised math libraries to speed-up the learning process.
What is vectorization and why it matters?
“Vectorization” (simplified) is the process of rewriting a loop so that instead of processing a single element of an array N times, it processes several or all elements of the array simultaneously.
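For a concrete sense of the difference, here is a small Python sketch (using NumPy, one common vectorized math library, which is an implementation choice not named in the article; the array contents are arbitrary):

```python
import numpy as np

n = 100_000
a = np.arange(n, dtype=np.float64)
b = np.arange(n, dtype=np.float64)

# Element-by-element loop: one multiply-add per Python bytecode iteration.
total = 0.0
for i in range(n):
    total += a[i] * b[i]

# Vectorized: the whole computation happens in one call to optimized native code.
total_vec = float(a @ b)
```

Both compute the same dot product, but the vectorized call runs orders of magnitude faster because the per-element work happens in compiled, parallelizable code.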
Let’s start by an example of a dataset with 1000 houses sold in a specific city. For each house we have 5 information: its area, the number of rooms, the construction year, the price paid and the agency fees. The goal is to train a model that predicts the price and the agency fees from the first 3 features.
Dataset example
Let’s consider a simple linear feed-forward model with 6 weights (W11,W12,W13,W21,W22,W23) where:
Price = W11.Area + W12.NbRooms + W13.Year
Fees = W21.Area + W22.NbRooms + W23.Year
As explained in more details previously, the goal of the machine learning, is to find which values for these 6 weights fit the model’s output the closest to the dataset’s real output. We start by initialising the weights randomly. Then, we forward-propagate to calculate the predicted price and agency fees. By comparing the results with the real price and fees from the dataset, we can get a gradient of the error to back-propagate later and update the weights accordingly.
A simple implementation of this would look something like the following:
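A sketch of such a sequential implementation in plain Python (the three toy houses, learning rate, and epoch count below are invented for illustration, standing in for the 1000-house dataset):

```python
import random

random.seed(0)

# Toy stand-in for the dataset: (area, rooms, year/1000) -> (price, fees).
dataset = [
    ((120.0, 3.0, 1.99), (3.6, 0.40)),
    ((80.0,  2.0, 1.95), (2.4, 0.30)),
    ((200.0, 5.0, 2.01), (6.0, 0.70)),
]

def forward(W, x):
    """Price = W11*Area + W12*Rooms + W13*Year; Fees = W21*Area + ..."""
    return [sum(W[o][i] * x[i] for i in range(3)) for o in range(2)]

def total_error(W):
    return sum((p - t) ** 2
               for x, targets in dataset
               for p, t in zip(forward(W, x), targets))

# The 6 weights, initialised randomly.
W = [[random.uniform(-0.01, 0.01) for _ in range(3)] for _ in range(2)]
error_before = total_error(W)

lr = 1e-5  # learning rate
for epoch in range(200):
    for x, targets in dataset:            # one example per loop iteration
        preds = forward(W, x)             # forward-propagate
        for o in range(2):                # back-propagate to all 6 weights
            err = preds[o] - targets[o]
            for i in range(3):
                W[o][i] -= lr * err * x[i]

error_after = total_error(W)
```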
This sequential for loop over the dataset is, however, too slow, and does not take advantage of the parallelism in modern CPUs and GPUs.
Vectorizing forward-propagation
In order to achieve high performance, we need to transform the dataset into a matrix representation. If we take the column-based representation, every input from our dataset is copied to a column in the matrix.
Our weight matrix will be a matrix of 2 rows x 3 columns.
Our input matrix will be a matrix of 3 rows x 1000 columns.
Our output matrix will be a matrix of 2 rows x 1000 columns.
Our linear model that we are searching to solve, can then be presented in the following matrix-based form:
Matrix or a vectorized-form of: Weights x Inputs = Outputs
The reason why this representation works, is because this is exactly how matrix multiplication operates:
Matrix multiplication
Matrix-multiplication: a row i of the first matrix is multiplied by a column j of the second matrix to calculate the value of the cell (i , j) of the output
With the vectorized implementation, the previous for loop with 1000 iterations can now be done with very few, high-performance vectorized operations as following:
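In NumPy, for example, those few operations could look like this (random data stands in for the real dataset; the shapes match the 2×3 weights, 3×1000 inputs, and 2×1000 outputs described above):

```python
import numpy as np

rng = np.random.default_rng(0)

weights = rng.random((2, 3))      # 2 outputs (price, fees) x 3 features
inputs  = rng.random((3, 1000))   # column-based layout: one house per column
outputs = rng.random((2, 1000))   # real price and fees, one column per house

# The entire dataset is forward-propagated in two vectorized operations:
predictions = weights @ inputs          # Matrix.Multiply(Weights, Inputs)
error       = predictions - outputs     # the error to back-propagate
```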
On big datasets, and using GPUs (some have 1000+ cores), we can expect speedups of thousands of times! On CPUs, there are many advanced math libraries that implement high-performance matrix operations, such as OpenBLAS.
Vectorizing back-propagation
Vectorizing forward-propagation is easy and straightforward, it follows the model definition. The challenge is in vectorizing the back-propagation of errors.
With numbers, if we pass a number x through a function f to get y=f(x), the derivative f’ of f gives us the rate of change of y when x changes.
With Matrices, we need to use the Jacobian, which is a Matrix made of the partial derivatives with respect to the different elements of the input matrix.
The rationale behind it is to fix all the elements in the input matrix except one, add a small delta 𝛿 to that element, and see which elements in the output matrix are affected, at which rate, and add the contributions together. Once we do this for all elements of the input matrix, we get its gradient matrix at the input, which thus has the same shape (number of rows and columns).
Consider the following matrix operation, R x P = S, and the first 3 equations that calculate the first 3 elements of the output S:
Equation 1: s11 = r11.p11 + r12.p21 + r13.p31 (red output)
Equation 2: s12 = r11.p12 + r12.p22 + r13.p32 (green output)
Equation 3: s21 = r21.p11 + r22.p21 + r23.p31 (yellow output)
Let’s say we already have the gradient matrix ΔS at the output S, and we want to back-propagate it to the input R (respectively P) to calculate ΔR (resp. ΔP). Since r11 is only involved in the calculation of s11 and s12 (red and green but not yellow), we can expect that only 𝛿s11 and 𝛿s12 back-propagate to 𝛿r11.
In order to find the rate of back-propagation of 𝛿s11, we partially differentiate equation 1 with respect to r11 (and consider everything else as constant); we get the rate p11. (Another way to explain this: a small change in r11 will be amplified by a factor of p11 in s11.)
By doing the same for 𝛿s12 and equation 2, we get the rate of p12.
If we try to back-propagate 𝛿s21 to r11, and differentiate equation 3 with respect to r11, we get 0, since equation 3 does not depend at all on r11. Another way to see this: if we have an error on s21, there is nothing that can be done on r11 to reduce this error, since r11 is not involved in the calculation of s21! The same applies for all the other elements of the S matrix (s22, s31, s32).
Finally, by adding up, we get 𝛿r11=𝛿s11.p11 + 𝛿s12.p12
By doing the same for all the elements of matrix R, we get the following:
𝛿r11=𝛿s11.p11 + 𝛿s12.p12
𝛿r12=𝛿s11.p21 + 𝛿s12.p22
𝛿r13=𝛿s11.p31 + 𝛿s12.p32
𝛿r21=𝛿s21.p11 + 𝛿s22.p12
𝛿r22=𝛿s21.p21 + 𝛿s22.p22
𝛿r23=𝛿s21.p31 + 𝛿s22.p32
𝛿r31=𝛿s31.p11 + 𝛿s32.p12
𝛿r32=𝛿s31.p21 + 𝛿s32.p22
𝛿r33=𝛿s31.p31 + 𝛿s32.p32
If we look closely at the pattern, we see we can put it in a vectorized matrix-multiplication form:
ΔR = ΔS x Transpose(P)
Similarly, if we follow the same procedure to back-propagate to P, we get the following equation:
ΔP = Transpose(R) x ΔS
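These two backward formulas, ΔR = ΔS·Transpose(P) and ΔP = Transpose(R)·ΔS, can be sanity-checked numerically. Here is a NumPy sketch assuming a squared-error loss at the output (the loss and the random shapes are assumptions made just for the check):

```python
import numpy as np

rng = np.random.default_rng(1)
R = rng.random((3, 3))
P = rng.random((3, 2))
target = rng.random((3, 2))

def loss(R_, P_):
    """Squared-error loss on the output S = R_ @ P_ (assumed for the check)."""
    S = R_ @ P_
    return 0.5 * np.sum((S - target) ** 2)

# Gradient of the loss at the output S.
dS = R @ P - target

# The two vectorized back-propagation formulas derived above.
dR = dS @ P.T          # ΔR = ΔS x Transpose(P)
dP = R.T @ dS          # ΔP = Transpose(R) x ΔS

# Finite-difference check on one element of R and one element of P.
eps = 1e-6
R2 = R.copy(); R2[0, 0] += eps
num_dr11 = (loss(R2, P) - loss(R, P)) / eps
P2 = P.copy(); P2[1, 0] += eps
num_dp21 = (loss(R, P2) - loss(R, P)) / eps
```

The analytic entries dR[0, 0] and dP[1, 0] agree with the finite-difference estimates up to the truncation error of the perturbation.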
Vectorizing everything
Each Neural network layer is made of several mathematical operations. If we manage to define each mathematical operation in terms of matrix operations in both forward and backward passes, we get maximum speedup in learning.
In a first step, each Matrix M has to be augmented by a companion Matrix ΔM to hold its gradient on the way back.
In a second step, each Matrix operation has to define its own forward and backward operations. For instance:
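A hypothetical miniature of such a library (in Python with NumPy; the class and method names are invented for illustration), where each operation defines both passes:

```python
import numpy as np

class MatMul:
    """y = a @ b; backward returns (ΔA, ΔB) = (ΔY @ Bᵀ, Aᵀ @ ΔY)."""
    def forward(self, a, b):
        self.a, self.b = a, b          # cache inputs for the backward pass
        return a @ b

    def backward(self, d_out):
        return d_out @ self.b.T, self.a.T @ d_out

class Subtract:
    """y = a - b; the incoming gradient passes straight through (negated for b)."""
    def forward(self, a, b):
        return a - b

    def backward(self, d_out):
        return d_out, -d_out

# Chaining the ops gives the forward pass and, replayed in reverse order,
# the whole back-propagation, with no hand-written gradients per model.
W = np.random.default_rng(2).random((2, 3))
X = np.random.default_rng(3).random((3, 5))
Y = np.random.default_rng(4).random((2, 5))

mul, sub = MatMul(), Subtract()
error = sub.forward(mul.forward(W, X), Y)   # forward: W @ X - Y
# For a 0.5 * sum(error**2) loss, the gradient at the Subtract output is error.
d_pred, _ = sub.backward(error)             # backward through Subtract
dW, dX = mul.backward(d_pred)               # backward through MatMul
```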
After creating this vectorized library of mathematical operations, we can use it to chain operations and create layers, activation functions, loss functions, and optimizers. The back-propagation will be defined automatically as a callback stack (or computation graph) of the mathematical functions used in each layer. This is how TensorFlow works, to a certain extent.
We can see the full machine learning process as a stack of abstractions:
An abstraction of machine learning library stack
| Vectorized implementation of back-propagation | 38 | vectorized-implementation-of-back-propagation-1011884df84 | 2018-10-23 | 2018-10-23 14:00:05 | https://medium.com/s/story/vectorized-implementation-of-back-propagation-1011884df84 | false | 1,117 | DataThings blog is where we post about our latest machine learning, big data analytics and neural networks experiments. Feel free to visit our website: www.datathings.com | null | datathingslu | null | DataThings | datathings | MACHINE LEARNING,NEURAL NETWORKS,BIG DATA,ARTIFICIAL INTELLIGENCE,BLOCKCHAIN | DataThingsLu | Machine Learning | machine-learning | Machine Learning | 51,320 | Assaad MOAWAD | null | 3f743663cf67 | assaad.moawad | 178 | 99 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | 32881626c9c9 | 2018-05-15 | 2018-05-15 15:58:15 | 2018-05-15 | 2018-05-15 16:08:53 | 4 | false | en | 2018-07-25 | 2018-07-25 20:32:01 | 5 | 10156fa9a37d | 6.069811 | 10 | 0 | 0 | Organizations that deploy cognitive services will work more efficiently, safely, and sustainably. | 5 | Building Smarter Businesses With Cognitive Services
Today’s successful businesses aren’t just fast and efficient. They’re becoming truly smart thanks to a new breed of technology called “cognitive services.”
The term “cognitive services” describes machine learning, artificial intelligence, and distributed algorithms that make it easy to integrate vision, speech, language, knowledge, problem-solving, analysis, categorization, moderation, and more into apps and businesses.
Cognitive services enable applications to evolve and adapt rather than simply following prewritten rules. They augment and expand human capabilities, allowing us to do our jobs faster, more efficiently, and more sustainably.
Cognitive computing doesn’t aim to replace the human element but to extend human capabilities. Humans can think deeply and use reason to solve complex problems, but we lack the ability to analyze and process massive amounts of data. That’s where computers excel. The cognitive computing era makes the most of both strengths: the human’s and the machine’s.
As cognitive systems solve complex problems, they improve their efficiency and accuracy by building and acting upon sophisticated pattern-recognition models. These systems aren’t explicitly programmed to work in a fully prescribed way but to naturally interact with human data inputs, then learn and grow based on the data they accumulate.
Big players like Watson, AWS, and Microsoft, as well as fast-moving startups like SiftNinja and Clarifai, have released a massive number of cognitive services, delivering them through APIs that make them a snap to fold into new applications.
Examples of Cognitive Services
Translation: Enable two users to chat in their own — different! — languages by translating their messages in real-time.
Natural language processing: Analyze massive amounts of data inputs and gauge the sentiment of the messages.
Chatbots: Create an intelligent bot that parses natural language from a human and responds as accurately as another human could.
Facial recognition: Detect human faces and organize them into groups based on predetermined categories.
Machine learning: Intelligently sense, process, and act on information delivered by sensors to control devices in response to environmental factors like temperature, rain, or earthquakes.
The Impact of Cognitive Services
Cognitive services are used to create new types of customer engagement, build smarter products, improve internal operations, and make smarter decisions. Cognitive services have already made a significant impact on three areas of business.
Discovery
With the vast amounts of data and information they have at their disposal, applications can use cognitive services to find patterns, insights, and connections that the hardest-working human might never identify. And having found patterns once, they can create new and unanticipated ways to adapt and grow, making discovery a more accurate and efficient proposition.
Engagement
Cognitive services empower businesses to see, hear, speak, understand, and interpret natural language and information sets, enabling them to create new, engaging experiences for users, customers and themselves. By understanding and responding to the ways users interact with apps and each other, cognitive systems are changing the way humans and systems interact.
Decision
The most challenging but potentially revolutionary impact of cognitive services is on the decision-making process. Intelligent systems can rapidly weigh evidence and analyze information, then make a decision based on data, not hunches. They can consider and act on complex sets of information — something as simple as recommending a product on an e-commerce site or as complex as optimizing smart devices in an industrial setting.
The Rise of Cognitive Business
Organizations that deploy cognitive services will work more efficiently, safely, and sustainably, and deliver more engaging and immersive experiences to their customers. From the way we buy goods to the way our children learn to the food that we eat, they will drive the innovation of industries and organizations into the future.
The question is: how can you get started implementing dynamic cognitive services in your business today? To start, look at a couple of things:
What are the biggest inefficiencies affecting your workflow today?
What are your most significant customer complaints?
What processes represent the biggest bottlenecks?
With answers in hand, you’ll be ready to explore the vast range of cognitive services at your fingertips and discover how they can transform your business.
Intelligence at the Edge: Event-Driven Architecture
Event-driven architecture provides an efficient way to carry out cognitive tasks. It applies basic business logic while data is in motion and can decide whether to involve back-end processes.
Cognitive services are quickly changing applications and the businesses that deploy them. Using APIs from companies like IBM, AWS, and Microsoft, developers can leverage some of the world’s most sophisticated technology for computer vision, translation, sentiment analysis, and much more with just a few lines of code.
To get the most out of cognitive services, many developers are adopting a design pattern called event-driven architecture.
As the name suggests, event-driven architecture makes software change its behavior in response to events in real-time. Event-driven architecture is different from traditional request-response architectures such as REST in that an event-driven system broadcasts a notification when a predefined event occurs rather than following along a set path of subsequent subroutines.
This notification may be picked up by any number of other systems, whose use of the information is decoupled from the original event. It’s a way to create faster, more dynamic, more distributed, and independent applications, allowing you to trigger and execute business logic at the edge, with each system informed by, but not necessarily reliant upon, the next.
Image Source: Moving the Cloud to the Edge
What Is the Connection With Cognitive Services?
The event-driven design pattern provides a fast and efficient way to carry out cognitive tasks. Instead of sending all your data to an external server, having that server parse the data, and figuring out what action to take, it applies basic business logic while the data is in motion, directly in your network, and can decide whether to involve back-end processes. This way, you aren’t wasting valuable bandwidth or computational power sending data that never needed to travel back to home base for processing.
As a result, it becomes possible to build powerful cognitive applications right where the intelligence is applied: at the edge of the network.
Because cognitive services are delivered as discrete components, you can add them via serverless microservices and process data in real-time without the need for ingestion by a centralized data center unless it is truly necessary.
A Case Study: Yummy Cola
Let’s take one example. Say a beverage company called Yummy Cola is launching a new line of flavored colas leading up to, and during, this year’s Super Bowl. It wants to monitor brand reaction through social media channels but knows the #superbowl hashtag will be incredibly busy with game analysis and the activity of other brands. It needs a way to filter its brand mentions and gauge how users feel about the product launch.
To do this at scale would cost a fortune, and without an event-driven architecture, sending every user’s message to a central server or data center to process and analyze would be incredibly slow. An event-driven system will be much more efficient, using cognitive services to carry out basic business logic at the edge.
In this way, the brand can monitor each message, determine whether it refers to the new colas, parse the sentiment of the relevant ones, and only pass the relevant information to the back end.
To do this, it could deploy edge computing resources to filter the messages with a natural language processing service, identifying which messages mentioned the brand and which were unrelated. From there, it could use a different cognitive service to analyze people’s feelings about the different colas. It could even publish the popularity of the different products. And it could do all this without bringing the back-end servers into play.
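The edge logic described here can be sketched as follows. This is a toy illustration in Python: the brand terms, word lists, and naive keyword-based sentiment scoring are invented stand-ins for real natural-language and sentiment cognitive services.

```python
# Edge handler: filter brand mentions, score sentiment, forward only relevant results.
BRAND_TERMS = ("yummy cola", "yummycola")
POSITIVE = {"love", "great", "delicious", "amazing"}
NEGATIVE = {"hate", "awful", "gross", "terrible"}

def mentions_brand(message: str) -> bool:
    text = message.lower()
    return any(term in text for term in BRAND_TERMS)

def sentiment(message: str) -> str:
    words = set(message.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def handle_event(message: str):
    """Returns a record to forward to the back end, or None to drop at the edge."""
    if not mentions_brand(message):
        return None                    # irrelevant traffic never leaves the edge
    return {"message": message, "sentiment": sentiment(message)}

stream = [
    "What a game! #superbowl",
    "I love the new YummyCola cherry flavor #superbowl",
    "YummyCola lime is awful",
]
forwarded = [r for r in (handle_event(m) for m in stream) if r]
```

Only two of the three messages ever reach the back end; the unrelated #superbowl chatter is filtered where the event occurs.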
Architecture for the Edge — and Beyond
RESTful architectures were well-suited to an earlier, simpler generation of web applications. Modern applications demand a different approach, with their dense mesh of microservices, edge-computing nodes, and streams of data from sensors and devices.
What applications need most now is an architecture that is light, flexible, and decentralized. Event-driven architecture satisfies on all counts — an elegant example of form following function.
Want in-depth analysis of how cognitive services are changing everything? Check out our full eBook: A World Transformed: Building Smarter, Next Generation Apps with Cognitive Services. In it, we cover:
What are cognitive services?
How cognitive services are transforming business
Cognitive services and edge computing
Use cases of today and tomorrow
Originally published at dzone.com.
| Building Smarter Businesses With Cognitive Services | 138 | building-smarter-businesses-with-cognitive-services-10156fa9a37d | 2018-07-25 | 2018-07-25 20:32:01 | https://medium.com/s/story/building-smarter-businesses-with-cognitive-services-10156fa9a37d | false | 1,423 | Data Driven Investor (DDI) brings you various news and op-ed pieces in the areas of technologies, finance, and society. We are dedicated to relentlessly covering tech topics, their anomalies and controversies, and reviewing all things fascinating and worth knowing. | null | datadriveninvestor | null | Data Driven Investor | datadriveninvestor | CRYPTOCURRENCY,ARTIFICIAL INTELLIGENCE,BLOCKCHAIN,FINANCE AND BANKING,TECHNOLOGY | dd_invest | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Joe Hanson | Dev Rel @PubNub | a12a4aa34693 | joehanson | 2,188 | 1,208 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | 22a2beb5a88a | 2017-11-07 | 2017-11-07 15:17:57 | 2017-11-13 | 2017-11-13 20:22:38 | 7 | false | en | 2017-12-18 | 2017-12-18 22:53:50 | 12 | 1015a273f75d | 4.936792 | 9 | 0 | 0 | Since the launch of the Watson Visual Recognition API, we’ve seen users help California save water, perform infrastructure inspections with… | 5 | Best Practices for Custom Models in Watson Visual Recognition
Since the launch of the Watson Visual Recognition API, we’ve seen users help California save water, perform infrastructure inspections with drones, and even find Pokemon. Powering many of these use cases are custom classifiers, a feature within Visual Recognition that allows users to train Watson on almost any visual content.
To create custom classifiers, users define categories they want to identify and upload example images for those categories. For example, a user wishing to identify different dog breeds may create 4 classes (golden retrievers, huskies, dalmatians, and beagles) and upload training images for each class. You can find this exact example in the Watson Visual Recognition demo or explore other tutorials on custom classifiers.
Custom classifiers can be highly powerful but require careful training and content considerations to be properly optimized. Through our user conversations, we’ve assembled a best practices guide below to help you get the most out of your custom classifiers.
How training can increase Watson Visual Recognition’s quality
The accuracy you will see from your custom classifier depends directly on the quality of the training you perform. Clients in the past who closely controlled their training processes have observed greater than 98% accuracy for their use cases. Accuracy — different from confidence score — is based on a ground truth for a particular classification problem and particular data set.
“Clients who closely control their image training processes observed greater than 98% accuracy”
As a best practice, clients often create a ground truth to benchmark against human classification. Note that humans often make mistakes in classifications due to fatigue, repetition, carelessness, or other problems of the human condition.
On a basic level, images in training and testing sets should resemble each other. Significant visual differences between training and testing groups will result in poor performance results.
There are a number of additional factors that will impact the quality of your training beyond the resolution of your images. Lighting, angle, focus, color, shape, distance from subject, and presence of other objects in the image will all impact your training. Please note that Watson takes a holistic approach when being trained on each image. While it will evaluate all of the elements listed above, it cannot be tasked to exclusively consider a specific element.
The API will accept as few as 10 images per class, but we strongly recommend using a significantly greater number of images to improve the performance and accuracy of your classifier. 100+ images per class is usually a good starting point for more robust levels of accuracy.
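A quick sanity check of per-class counts before uploading can catch undersized classes early. This helper is a sketch; the class names are made up and the limits simply encode the minimum and recommendation stated above:

```python
MIN_IMAGES_PER_CLASS = 10      # API minimum
RECOMMENDED_PER_CLASS = 100    # practical starting point

def check_training_set(class_counts):
    """Return human-readable warnings for classes with too few images.

    An empty list means every class meets the recommended size.
    """
    warnings = []
    for name, count in class_counts.items():
        if count < MIN_IMAGES_PER_CLASS:
            warnings.append(
                f"{name}: {count} images is below the API minimum of {MIN_IMAGES_PER_CLASS}")
        elif count < RECOMMENDED_PER_CLASS:
            warnings.append(
                f"{name}: {count} images may be too few; aim for {RECOMMENDED_PER_CLASS}+")
    return warnings

print(check_training_set({"huskies": 120, "beagles": 42, "dalmatians": 7}))
```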
What is the score that I see for each tag?
Each returned tag will include a confidence score between 0 and 1. This number does not represent a percentage of accuracy, but instead indicates Watson’s confidence in the returned classification based on the training data for that classifier. The API will classify for all classes in the classifier, but you can adjust the threshold to only return results above a certain confidence score.
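Filtering by a confidence threshold can also be done client-side. The response shape below is a simplified stand-in for a classify result, not the API's exact JSON:

```python
def filter_by_threshold(classes, threshold=0.5):
    """Keep only classifications whose confidence score meets the threshold."""
    return [c for c in classes if c["score"] >= threshold]

# Simplified stand-in for a classify response (field names assumed)
response_classes = [
    {"class": "golden_retriever", "score": 0.83},
    {"class": "beagle", "score": 0.21},
    {"class": "husky", "score": 0.05},
]
print(filter_by_threshold(response_classes, threshold=0.5))
```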
The custom classifier scores can be compared to one another as relative likelihoods, but a score should ultimately be weighed against the cost and benefit of being right or wrong, with a threshold for action chosen accordingly. Be aware that the nature of these numbers may change as we make changes to our system, and we will communicate these changes as they occur.
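One simple way to turn a cost/benefit judgment into a threshold: acting on a classification has non-negative expected value when score × benefit ≥ (1 − score) × cost, which solves to score ≥ cost / (benefit + cost). This sketch assumes you can put rough numbers on both outcomes:

```python
def break_even_threshold(benefit_if_right, cost_if_wrong):
    """Lowest confidence score at which acting has non-negative expected value.

    Act when: score * benefit >= (1 - score) * cost
          =>  score >= cost / (benefit + cost)
    """
    return cost_if_wrong / (benefit_if_right + cost_if_wrong)

# If a wrong action costs 9x what a right one earns, demand score >= 0.9
print(break_even_threshold(benefit_if_right=1.0, cost_if_wrong=9.0))  # 0.9
```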
Further details about scores can be found here.
Examples of difficult use cases
While Watson Visual Recognition is highly flexible, there are a number of recurring use cases that we’ve seen the API either struggle with or that require significant pre/post-processing from the user.
Face recognition: Visual Recognition is capable of face detection (detecting the presence of faces), not face recognition (identifying individuals).
Detecting details: Occasionally, users want to classify an image based on a small section of an image or details scattered within an image. Because Watson analyzes the entire image when training, it may struggle on classifications that depend on small details. Some users have adopted the strategy of breaking the image into pieces or zooming into relevant parts of an image. See this guide for image pre-processing techniques.
Emotion: Emotion classification (whether facial emotion or contextual emotion) is not a feature currently supported by Visual Recognition. Some users have attempted to do this through custom classifiers, but this is an edge case and we cannot estimate the accuracy of this type of training.
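The “breaking the image into pieces” strategy mentioned above for detail-dependent classifications can be sketched as a grid of crop boxes. The box math below is plain arithmetic; the `(left, upper, right, lower)` tuples follow the convention accepted by, for example, Pillow’s `Image.crop`:

```python
def grid_tiles(width, height, rows, cols):
    """Return (left, upper, right, lower) crop boxes covering the image in a grid.

    Edge tiles absorb any remainder so the full image is covered.
    """
    tiles = []
    tile_w, tile_h = width // cols, height // rows
    for r in range(rows):
        for c in range(cols):
            left, upper = c * tile_w, r * tile_h
            right = width if c == cols - 1 else left + tile_w
            lower = height if r == rows - 1 else upper + tile_h
            tiles.append((left, upper, right, lower))
    return tiles

print(grid_tiles(640, 480, rows=2, cols=2))
```

Each tile can then be classified separately, and the per-tile results aggregated however suits the use case.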
Examples of good and bad training images
GOOD: The following images were utilized for training and testing by our partner OmniEarth. This demonstrates good training since images in training and testing sets should resemble each other in regards to angle, lighting, distance, size of subject, etc. See the case study OmniEarth: Combating drought with IBM Watson cognitive capabilities for more details.
Training images:
Testing image:
BAD: The following images demonstrate bad training since the training image shows a close-up shot of a single apple while the testing image shows a large group of apples taken from a distance with other visual items introduced (baskets, a sign, etc.). It’s entirely possible that Watson may fail to classify the test image as ‘apples,’ especially if another class in the classifier contains training images of a large group of round objects (such as peaches, oranges, etc.).
Training image:
Testing image:
BAD: The following images demonstrate bad training since the training image shows a close-up shot of a single sofa in a well-lit, studio-like setting while the testing image shows a sofa that is partially cut off, farther away, and situated among many other objects in a real-world setting. Watson may not be able to properly classify the test image due to the number of other objects cluttering the scene.
Training image:
Testing image:
Need help or have questions?
We’re excited to see what you build with Watson Visual Recognition, and we’re happy to help you along the way. Try the custom classifiers feature, share any questions or comments you have on our developerWorks forums, and start building with Watson for free today.
Originally published at www.ibm.com on October 24, 2016.
Medium Articles Dataset Generator
This project combines multiple datasets from Kaggle and Hugging Face to create a comprehensive collection of Medium articles. The combined dataset is available on Hugging Face Hub.
Dataset Description
This dataset is a unique compilation that not only combines multiple sources but also ensures data quality through normalization and deduplication. A key feature is that all entries in the `text` column are unique: there are no duplicate articles in the final dataset.
Data Sources:
Kaggle Sources:
- aiswaryaramachandran/medium-articles-with-content
- hsankesara/medium-articles
- meruvulikith/1300-towards-datascience-medium-articles-dataset
Hugging Face Sources:
- fabiochiu/medium-articles
- Falah/medium_articles_posts
Features
- Combines multiple data sources into a single, unified dataset
- Ensures uniqueness: Each article appears only once in the dataset
- Quality control:
- Removes duplicate entries based on article text
- Handles missing values
- Normalizes data format
- Saves the final dataset in efficient Parquet format
- Publishes the dataset to Hugging Face Hub
Requirements
pip install datasets
pip install kagglehub huggingface_hub tqdm
Usage
- Set up your Hugging Face authentication token
- Run the script:
python combined_medium_ds_generator.py
Data Processing Steps
- Downloads datasets from Kaggle and Hugging Face
- Normalizes each dataset by:
- Removing null values
- Eliminating duplicates
- Standardizing column names
- Combines all datasets into a single DataFrame
- Saves the result as a Parquet file
- Uploads the final dataset to Hugging Face Hub
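The normalization and deduplication steps can be sketched roughly as follows. This is an illustration of the approach, not the actual script; the column name follows the dataset's `text` column:

```python
import pandas as pd

def combine_and_deduplicate(frames, text_column="text"):
    """Concatenate source DataFrames, drop rows with missing text, and keep
    only the first occurrence of each unique article text."""
    combined = pd.concat(frames, ignore_index=True)
    combined = combined.dropna(subset=[text_column])
    combined = combined.drop_duplicates(subset=[text_column], keep="first")
    return combined.reset_index(drop=True)

# Tiny illustrative frames standing in for the Kaggle/HF sources
a = pd.DataFrame({"text": ["article one", "article two"]})
b = pd.DataFrame({"text": ["article two", None, "article three"]})
result = combine_and_deduplicate([a, b])
print(len(result))  # 3 unique, non-null articles
```

The resulting frame can then be written out with `result.to_parquet(...)` before uploading to the Hub.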
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
License
This project is licensed under the MIT License - see the LICENSE file for details.
Author
Acknowledgments
Special thanks to the original dataset creators:
- aiswaryaramachandran
- hsankesara
- meruvulikith
- fabiochiu
- Falah