audioVersionDurationSec
float64 0
3.27k
⌀ | codeBlock
stringlengths 3
77.5k
⌀ | codeBlockCount
float64 0
389
⌀ | collectionId
stringlengths 9
12
⌀ | createdDate
stringclasses 741
values | createdDatetime
stringlengths 19
19
⌀ | firstPublishedDate
stringclasses 610
values | firstPublishedDatetime
stringlengths 19
19
⌀ | imageCount
float64 0
263
⌀ | isSubscriptionLocked
bool 2
classes | language
stringclasses 52
values | latestPublishedDate
stringclasses 577
values | latestPublishedDatetime
stringlengths 19
19
⌀ | linksCount
float64 0
1.18k
⌀ | postId
stringlengths 8
12
⌀ | readingTime
float64 0
99.6
⌀ | recommends
float64 0
42.3k
⌀ | responsesCreatedCount
float64 0
3.08k
⌀ | socialRecommendsCount
float64 0
3
⌀ | subTitle
stringlengths 1
141
⌀ | tagsCount
float64 1
6
⌀ | text
stringlengths 1
145k
| title
stringlengths 1
200
⌀ | totalClapCount
float64 0
292k
⌀ | uniqueSlug
stringlengths 12
119
⌀ | updatedDate
stringclasses 431
values | updatedDatetime
stringlengths 19
19
⌀ | url
stringlengths 32
829
⌀ | vote
bool 2
classes | wordCount
float64 0
25k
⌀ | publicationdescription
stringlengths 1
280
⌀ | publicationdomain
stringlengths 6
35
⌀ | publicationfacebookPageName
stringlengths 2
46
⌀ | publicationfollowerCount
float64 | publicationname
stringlengths 4
139
⌀ | publicationpublicEmail
stringlengths 8
47
⌀ | publicationslug
stringlengths 3
50
⌀ | publicationtags
stringlengths 2
116
⌀ | publicationtwitterUsername
stringlengths 1
15
⌀ | tag_name
stringlengths 1
25
⌀ | slug
stringlengths 1
25
⌀ | name
stringlengths 1
25
⌀ | postCount
float64 0
332k
⌀ | author
stringlengths 1
50
⌀ | bio
stringlengths 1
185
⌀ | userId
stringlengths 8
12
⌀ | userName
stringlengths 2
30
⌀ | usersFollowedByCount
float64 0
334k
⌀ | usersFollowedCount
float64 0
85.9k
⌀ | scrappedDate
float64 20.2M
20.2M
⌀ | claps
stringclasses 163
values | reading_time
float64 2
31
⌀ | link
stringclasses 230
values | authors
stringlengths 2
392
⌀ | timestamp
stringlengths 19
32
⌀ | tags
stringlengths 6
263
⌀ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | null | 0 | null | 2018-01-03 | 2018-01-03 14:49:20 | 2018-02-07 | 2018-02-07 12:31:01 | 1 | false | en | 2018-02-12 | 2018-02-12 17:02:40 | 1 | 105fef831b39 | 0.70566 | 0 | 0 | 0 | Born out of 5 years of Red Ninja research, supported by the UK government, in collaboration with the NHS, Future Cities Catapult, Transport… | 5 | The LiFE Project
Born out of 5 years of Red Ninja research, supported by the UK government, in collaboration with the NHS, Future Cities Catapult, Transport Systems Catapult, Dynniq and Siemens, the LiFE project seeks to develop an innovative application for an intelligent transport system. It operates in real-time to enable ambulances to reach life-threatening emergency cases quicker by integrating ambulance route finder applications with traffic management systems.
We believe that cooperation between smart technologies, real-world knowledge and human experience can save people’s lives.
Using Big Data principles, the LiFE algorithm integrates real-time city congestion data and ambulance location data to clear congestion through traffic light control, getting paramedics to life threatening calls as quickly and smoothly as possible.
To read about the reinforcement learning research behind LiFE click here
| The LiFE Project | 0 | the-life-project-105fef831b39 | 2018-05-24 | 2018-05-24 17:53:44 | https://medium.com/s/story/the-life-project-105fef831b39 | false | 134 | null | null | null | null | null | null | null | null | null | Smart Cities | smart-cities | Smart Cities | 5,072 | The LiFE Project | Helping ambulances to reach their 7 minute target through machine learning. A Red Ninja venture. | 1a373d363156 | thelifeproject | 85 | 35 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-02-28 | 2018-02-28 14:58:12 | 2018-02-03 | 2018-02-03 14:11:32 | 1 | false | en | 2018-03-03 | 2018-03-03 07:47:24 | 2 | 10616c894580 | 1.50566 | 0 | 0 | 0 | With the recent update to its Flights app, Google has redressed the situation of the weary travelers get to grips with the next trip to the… | 5 | Keeping Air Travel Vexations At Bay The ML Way
With the recent update to its Flights app, Google has redressed the situation of the weary travelers get to grips with the next trip to the airport. Trying to play the clairvoyant, this new feature in its latest update will try to predict when your flight will be delayed before the delay is even announced!
This exciting new feature uses machine learning to predict upcoming flight delays, while an equally helpful second feature gives more insight to what different airlines mean by “basic economy”.
According to a blog post from Google, the predictions won’t solely be pulling in information from the airlines, rather it will also comb through historical data of flight delays and look for common patterns in late departures and feed this data to its machine learning algorithms which will role out these predictions. It will also provide reasons for the delays, like weather or an aircraft arriving late. Google will only show you the prediction if the computer is at least 80 percent certain, but the company still recommends getting to the airport on time.
The second feature is aimed at new fare types such as “Basic Economy” which are cheaper alternatives as compared to other classes but also limit your access to some basic conveniences and occasionally restrict your ability to make any changes to your flight. Sometimes, airlines artfully belie to the naive travellers what these “cheaper” alternatives exactly include and exclude in their ticket prices. Google wants to help these guileless customers by providing them what exactly is restricted with their purchase for extra transparency.
These changes come only a month after Google Flights added price tracking and deals to Google Flights as well as hotel search features for web searchers.
These features are a testament to Google’s machine learning and big data prowess, especially in the case of predicting flight delays. There’s no telling what Google’s next venture would be, but one thing is guaranteed that we will benefit from it.
Related
Originally published at cyanotech.in on February 3, 2018.
| Keeping Air Travel Vexations At Bay The ML Way | 0 | keeping-air-travel-vexations-at-bay-the-ml-way-10616c894580 | 2018-03-03 | 2018-03-03 07:47:25 | https://medium.com/s/story/keeping-air-travel-vexations-at-bay-the-ml-way-10616c894580 | false | 346 | null | null | null | null | null | null | null | null | null | Travel | travel | Travel | 236,578 | harsh jain | null | 97e4f24f7f67 | harshrjain98 | 2 | 3 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-01-24 | 2018-01-24 08:41:18 | 2018-01-24 | 2018-01-24 08:57:43 | 1 | false | en | 2018-01-24 | 2018-01-24 09:02:35 | 4 | 10619cf2140d | 0.864151 | 3 | 0 | 0 | What is Big Data? | 5 | Overview of Big Data Tools !
Big Data
What is Big Data?
Big data is the growth in the volume of structured and unstructured data, the speed at which it is created and collected, and the scope of how many data points are covered.
Big Data refers to very large data sets or volumes of data coming at a high velocity either in a structured or an unstructured format. Big data often comes from multiple sources, and arrives in multiple formats.
Read More
Why Big Data?
Every organization generates large volumes of data from their business.Everywhere you go today, people are looking down at a mobile device. They are online, browsing, collaborating, shopping for goods and services and transacting business. And not just consumers are using them. Mobile devices are also being used extensively in business-to-business transactions. Big Data is stored in a separate data store such as HDFS (Hadoop Distributed File System), NoSQL databases etc.
Read More
Questions to be Answered
How to make the leap to Big Data?
Big Data Processing Tools?
Read More
| Overview of Big Data Tools ! | 21 | overview-of-big-data-tools-10619cf2140d | 2018-04-08 | 2018-04-08 09:32:18 | https://medium.com/s/story/overview-of-big-data-tools-10619cf2140d | false | 176 | null | null | null | null | null | null | null | null | null | Big Data | big-data | Big Data | 24,602 | Tech Hunt | #Technology #Freak - BigData, Digital Transformation, Cyber Security, Cloud Computing, Internet of Things | 33f515b691ce | techhunt2195 | 388 | 740 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 85c57d1507e7 | 2018-04-30 | 2018-04-30 20:11:55 | 2018-05-18 | 2018-05-18 14:46:08 | 1 | false | en | 2018-05-18 | 2018-05-18 14:46:08 | 0 | 10634f17b633 | 3.298113 | 4 | 0 | 0 | Algorithm-based sentiment analysis is a game changer for customer experience initiatives. | 5 | Your Customers Are Talking. Are You Listening?
Algorithm-based sentiment analysis is a game changer for customer experience initiatives.
By John Godwin
We see it with our clients and potential clients all the time: customer experience (CX) initiatives fall short or even fail to launch because their sponsors can’t build a case for change within the organization. What they need is a compelling storyline. Hunches and guesses, after all, are not enough to earn buy-in, secure funding and effect change. And when budgets are tight, skepticism flourishes. So where can you find compelling storylines to jumpstart your CX initiative? The answer is sentiment analysis.
Sentiment analysis is the collection and analysis of opinions about a product, service or brand. (Although it’s not exactly the same thing as emotion artificial intelligence — emotion AI — or data mining, people tend to use these phrases interchangeably and the subtle differences aren’t worth dwelling on here.) Traditional sentiment analysis can produce what feels like anecdotal evidence. It’s a time-consuming, monotonous task that’s prone to error. But machine learning’s ability to analyze large amounts of consumer opinion to reveal patterns radically transforms this process.
The more business interactions shift into the hands of the consumer — quite literally — and the more consumers use social media posts, product reviews and other expressions of opinion as their personal megaphone, the more important this kind of “super” sentiment analysis becomes. It is more accurate, faster and scalable.
More Accurate
Brandseye, a small firm based in Capetown that tracks real-time social media sentiment for the likes of Uber and Pizza Hut, used machine learning and sentiment analysis to do what polls and pundits couldn’t: predict Britain’s exit from the European Union and forecast Trump’s victory over Clinton. How did they do it? By using AI to scrape 37 million public social media conversations (read: consumer sentiment). To improve its accuracy, Brandseye crowdsourced human analysis of individual messages. (AI is limited when it comes to understanding context or detecting tone — sarcasm, hope, etc. — but there are, as this example illustrates, ways to compensate for this.)
Faster
In 2014, machine learning quickly enabled Expedia Canada to see that more than half of the people commenting on its “Escape Winter: Fear” commercial, which had performed well in focus groups, hated the violin soundtrack. Meant to be slightly annoying, it became insufferable for those who saw it play in “bonus” spots over and over during the World Junior Hockey Championships.
When the Twittersphere demanded revenge, Expedia complied.
The company created three new videos in response, including one where an angry citizen who had Tweeted that someone should smash the shrill violin got to do just that. “We’re listening,” the voiceover declared. “And hopefully that’s music to your ears.” Expedia’s ability to pivot quickly — and to do so with a sense of humor — was a win for the brand.
Scalable
Did I mention that Brandseye scraped 37 million public social media conversations? That’s six zeroes, folks.
Social media conversations — about both you and your competition — are a gold mine of consumer sentiment.
But what else can companies study with algorithm-based sentiment analysis?
Product reviews
Survey responses
Web pages and forums
Call center transcripts
To get started, you have to choose the best model and toolkit for your purposes.
Here are a few of the questions you’ll need to answer, probably with the help of a customer experience expert:
Which is the best algorithm to capture the info you need?
How do you identify the correct phrases to analyze?
How do you convert the insights you gather into better products, services and experiences for your customer?
Sentiment analysis can help you identify:
Areas of strength (so you know what’s working) and opportunities (so you know what isn’t). The latter are your opportunities, some of which can be surprisingly low-hanging fruit.
Training opportunities for call center and help desk employees or other agents of the company. When working with a Fortune 100 property and casualty insurer, sentiment analysis led us to suggest customer service reps address callers by name and ask the caller whether they are in a safe location before starting a conversation. Sometimes a simple tweak on the CX frontline is all you need.
Themes for new sales and marketing campaigns. Because sometimes you have to smash the violin, no matter how well it performed in focus groups.
Keep in mind that though I’ve talked about sentiment analysis as a great way to jumpstart a CX campaign, this should be an ongoing effort. Continuous sentiment analysis fuels an iterative test-and-learn strategy, the agile approach we should all be taking.
No matter what your goals, informed business decisions are better decisions. And the availability of algorithm-based sentiment analysis means there are no more excuses for not knowing how your customers feel about you. Start listening: it’s music to their ears.
| Your Customers Are Talking. Are You Listening? | 4 | your-customers-are-talking-are-you-listening-10634f17b633 | 2018-05-21 | 2018-05-21 17:33:03 | https://medium.com/s/story/your-customers-are-talking-are-you-listening-10634f17b633 | false | 821 | We are a technology consulting company. Our mission-oriented approach to consulting is driven by the belief that there's a smarter way to do business and a better way to serve customers. | null | SingleStoneConsulting | null | SingleStone | singlestone | CLOUD,DEVOPS,TECH,WEBSITE DEVELOPMENT,SOFTWARE DEVELOPMENT | SingleStoneTech | Machine Learning | machine-learning | Machine Learning | 51,320 | John Godwin | Customer Service Solution Lead at SingleStone | 6459e0baad2f | john.godwin | 3 | 2 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-03-24 | 2018-03-24 08:15:03 | 2018-03-24 | 2018-03-24 08:30:50 | 2 | true | en | 2018-03-24 | 2018-03-24 08:42:56 | 1 | 10664231f61c | 0.688994 | 0 | 0 | 0 | Artificial Intelligence is promising a lot of things, including great advances in medicine. This holy-chip moment remind us that AI systems… | 5 | The cure for all diseases — holy-chip #1
Artificial Intelligence is promising a lot of things, including great advances in medicine. This holy-chip moment remind us that AI systems can also screw up other things while trying to fix a particular problem.
Have a smart laugh!
Next
The holy-chip series is a narrative between 2 Artificial Intelligence characters. They do not have names. They are black and white. The date and place posted on the header are absolutely real.
| The cure for all diseases — holy-chip #1 | 0 | the-cure-for-all-diseases-holy-chip-1-10664231f61c | 2018-03-29 | 2018-03-29 20:36:01 | https://medium.com/s/story/the-cure-for-all-diseases-holy-chip-1-10664231f61c | false | 81 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Ricardo Mello | null | 6df18de4f6fd | rm_90700 | 18 | 19 | 20,181,104 | null | null | null | null | null | null |
0 | %matplotlib inline
import os
import pandas as pd
import numpy as np
from fbprophet import Prophet
path = os.path.dirname(os.path.dirname(os.getcwd())) + '/data/manning.csv'
data = pd.read_csv(path)
data['ds'] = pd.to_datetime(data['ds'])
data.head()
data.set_index('ds').plot(figsize=(12, 9))
data['y'] = np.log(data['y'])
data.set_index('ds').plot(figsize=(12, 9))
m = Prophet()
m.fit(data)
m.params
{u'beta': array([[ 0. , -0.03001147, 0.04819977, 0.00999481, -0.00228437,
0.01252909, 0.01559136, 0.00950633, 0.00075704, 0.00391209,
-0.00586589, 0.0075454 , -0.00524287, 0.00208091, -0.00477578,
-0.00410379, -0.0077744 , -0.00081338, 0.00125811, 0.00187115,
0.0069828 , -0.01233829, -0.01057246, 0.00938595, 0.00847051,
0.00088024, -0.00352237]]),
u'delta': array([[ 1.62507395e-07, 1.29092081e-08, 3.48169254e-01,
4.57815903e-01, 1.61826714e-07, -5.66144938e-04,
-2.34969389e-01, -2.46905754e-01, 9.96595883e-08,
-1.82605683e-07, 6.12381739e-08, 2.78653912e-01,
2.30631082e-01, 2.83118248e-03, 1.55276178e-03,
-8.61134360e-01, -3.14239669e-07, 5.54456073e-09,
4.91423429e-07, 4.71475093e-01, 7.93935609e-03,
1.36547372e-07, -3.38274613e-01, -3.20008088e-07,
1.16410210e-07]]),
u'gamma': array([[ -5.37486490e-09, -8.40863029e-10, -3.59567303e-02,
-6.19588853e-02, -2.69802216e-08, 1.12158987e-04,
5.44799089e-02, 6.53304459e-02, -2.95648930e-08,
6.03344459e-08, -2.21556944e-08, -1.09561865e-01,
-9.78411305e-02, -1.28994139e-03, -7.57253043e-04,
4.47568989e-01, 1.73293155e-07, -3.23167613e-09,
-3.01853068e-07, -3.04398195e-01, -5.37507537e-03,
-9.67767399e-08, 2.50366597e-01, 2.46999155e-07,
-9.35053320e-08]]),
u'k': array([[-0.35578215]]),
u'm': array([[ 0.62604285]]),
u'sigma_obs': array([[ 0.03759107]])}
future_data = m.make_future_dataframe(periods=365)
future_data.tail()
forecast = m.predict(future_data)
forecast.columns
Index([u'ds', u't', u'trend', u'seasonal_lower', u'seasonal_upper',
u'trend_lower', u'trend_upper', u'yhat_lower', u'yhat_upper', u'weekly',
u'weekly_lower', u'weekly_upper', u'yearly', u'yearly_lower',
u'yearly_upper', u'seasonal', u'yhat'],
dtype='object')
forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()
m.plot(forecast);
m.plot_components(forecast);
playoffs = pd.DataFrame({
'holiday': 'playoff',
'ds': pd.to_datetime(['2008-01-13', '2009-01-03', '2010-01-16',
'2010-01-24', '2010-02-07', '2011-01-08',
'2013-01-12', '2014-01-12', '2014-01-19',
'2014-02-02', '2015-01-11', '2016-01-17',
'2016-01-24', '2016-02-07']),
'lower_window': 0,
'upper_window': 1,
})
superbowls = pd.DataFrame({
'holiday': 'superbowl',
'ds': pd.to_datetime(['2010-02-07', '2014-02-02', '2016-02-07']),
'lower_window': 0,
'upper_window': 1,
})
holidays = pd.concat((playoffs, superbowls))
m = Prophet(holidays=holidays)
forecast = m.fit(data).predict(future_data)
m.plot_components(forecast);
from sklearn.metrics import mean_absolute_error
data = m.predict(data)
mean_absolute_error(np.exp(data['y']), np.exp(data['yhat']))
2436.9620410194648
| 19 | null | 2017-09-02 | 2017-09-02 02:09:17 | 2017-09-02 | 2017-09-02 02:14:29 | 5 | false | en | 2017-09-02 | 2017-09-02 02:14:29 | 6 | 1067e9f8169 | 7.289937 | 20 | 1 | 0 | This content originally appeared on Curious Insight | 2 | Time Series Forecasting With Prophet
This content originally appeared on Curious Insight
Prophet is an open source forecasting tool built by Facebook. It can be used for time series modeling and forecasting trends into the future. Prophet is interesting because it’s both sophisticated and quite easy to use, so it’s possible to generate very good forecasts with relatively little effort or domain knowledge in time series analysis.
There are a few requirements you’ll need to meet in order to use the library. It uses PyStan to do all of its inference, so PyStan has to be installed. PyStan has its own dependencies, including a C++ compiler. Python 3 also appears to be a requirement. Full installation instructions are here.
Let’s take a quick tour through Prophet’s capabilities. We can start by reading in some sample time series data. In this case we’re using Wikipedia page hits for Peyton Manning, which is the data set that Facebook collected for the library’s example code.
(Note: Medium can’t render tables — the full example is here)
There are only two columns in the data, a date and a value. The naming convention of using ‘ds’ for the date and ‘y’ for the value is apparently a requirement to use Prophet; it’s expecting those exact names and will not work otherwise!
Let’s examine the data by plotting it using pandas’ built-in plotting function.
The data is highly volatile with order-of-magnitude differences between a typical day and a high-traffic day. This will be hard to model directly. Let’s try applying a log transform to see if that helps.
Much better! Not only is it stationary, but we’ve also revealed what looks like some cyclical patterns in the data. We can now instantiate a Prophet model and fit it to our data.
That was easy! This is one of the most attractive features of Prophet. It essentially does all of the model selection work for you and gives you a result that works well without much user input required. In this case we didn’t have to specify anything at all, just give it some data and we get a model.
We’ll explore below what the model looks like but it’s worth spending a moment first to explain what’s going on here. Unlike typical time-series methods like ARIMA (which are considered generative models), Prophet uses something called an additive regression model. This is essentially a sophisticated curve-fitting model. I haven’t dug into any of the math, but based on the description in their introductory blog post, Prophet builds separate components for the trend, yearly seasonality, and weekly seasonality in the time series (with holidays as an optional fourth component). We can witness this directly by looking at one of the undocumented properties on the model object that shows the fitted parameters.
I think the beta, delta, and gamma arrays correspond to the distributions for the three different components. The way I think about this is we’re saying we have three different regression models with some unknown set of parameters, and we want to find the combination of those models that best explains the data. We can attempt to do this using maximum a-priori (MAP) estimation, where our priors are the equations for the regression components (piecewise linear for the trend, Fourier series for the seasonal component, and so on). This appears to be what Prophet is doing. I can’t say I’ve looked at it in any great detail so part of that explanation could be wrong, but I think it’s broadly correct.
Now that we have a model, let’s see what we can do with it. The obvious place to start is to forecast what we think our value will be for some future dates. Prophet makes this easy with a helper function.
(Note: Medium can’t render tables — the full example is here)
That gives us a data frame with dates going one year forward from where our data ends. We can then use the “predict” function to populate this data frame with forecast information.
The point estimate forecasts are in the “yhat” column, but note how many columns got added. In addition to the forecast itself we also have point estimates for each of the components, as well as upper and lower bounds for each of these projections. That’s a lot of detail provided out-of-the-box just by calling a single function!
Let’s see an example.
(Note: Medium can’t render tables — the full example is here)
Prophet also supplies several useful plotting functions. The first one is just called “plot”, which displays the actual values along with the estimates. For the forecast period it only displays the projections since we don’t have actual values for this period.
I found this to be a bit confusing because the data frame we passed in only contained the “forecast” date range, so where did the rest of it come from? I think the model object is storing the data it was trained on and using it as part of this function, so it looks like it will plot the whole date range regardless.
We can use another built-in plot to show each of the individual components. This is quite useful to visually inspect what the model is capturing from the data. In this case there are a few clear takeaways such as higher activity during football season or increased activity on Sunday & Monday.
In addition to the above components, Prophet can also incorporate possible effects from holidays. Holidays and dates for each holiday have to be manually specified over the entire range of the data set (including the forecast period). The way holidays get defined and incorporated into the model is fairly simple. Below are some holiday definitions for our current data set that include Peyton Manning’s playoff and Superbowl appearances (taken from the example code).
Once we have holidays defined in a data frame, using them in the model is just a matter of passing in the data frame as a parameter when we define the model.
Our component plot now includes a holidays component with spikes indicating the magnitude of influence those holidays have on the value.
While the Prophet library itself is very powerful, there are some useful features that we’d typically want when doing time series modeling that it currently doesn’t provide. One very simple and obvious thing that’s needed is a way to evaluate the forecasts. We can do this ourselves using scikit-learn’s metrics (you could also calculate it yourself). Note that since we took the natural log of the series earlier we need to reverse that to get a meaningful number.
That works fine as a very simple example, but for real applications we’d probably want something more robust like cross-validation over sliding windows of the data set. Currently in order to accomplish this we’d have to implement it ourselves.
Another limitation is the lack of ability to incorporate additional information into the model. One can imagine variables that could be used along with the time series to further improve the forecast (for example, a variable indicating if Peyton Manning had just won a game, or had a particularly good performance, or appeared in some news articles). We can’t do anything like this with Prophet directly. However, one idea I’ve experimented with in the past that may get around this limitation is building a two-stage model. The first stage is the Prophet model, and we use that to generate predictions. The second stage is a normal regression model that includes the additional signals as independent variables. The wrinkle is that instead of predicting the target directly, we predict the error from the time series model. When you put the two together, this may result in an even better overall forecast.
All things considered, Prophet is a great addition to the toolbox for time series problems. There are a number of knobs and dials that one can tweak that I didn’t get into because I still haven’t tried them out, but they provide options for advanced users to improve their forecasts even further. It’s worth cautioning that this software is fairly immature so proceed carefully if using it for any serious tasks. That said, the authors claim Facebook uses it extensively so take that for what it’s worth.
Follow me on twitter to get new post updates
| Time Series Forecasting With Prophet | 131 | time-series-forecasting-with-prophet-1067e9f8169 | 2018-05-24 | 2018-05-24 16:04:38 | https://medium.com/s/story/time-series-forecasting-with-prophet-1067e9f8169 | false | 1,711 | null | null | null | null | null | null | null | null | null | Data Science | data-science | Data Science | 33,617 | John Wittenauer | Data scientist, engineer, author, investor, entrepreneur | 8e0612f4b153 | jdwittenauer | 155 | 102 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 5d1fa6653fc1 | 2018-01-19 | 2018-01-19 15:08:31 | 2018-01-22 | 2018-01-22 13:15:32 | 3 | false | en | 2018-01-22 | 2018-01-22 13:43:00 | 10 | 1067f51b04dd | 2.734906 | 4 | 0 | 0 | Building a FinTech startup is like riding a carriage on a dirt road. Sure it’s exciting to follow the path less traveled, but say hello to… | 5 | PreSeries joins FinTech Sandbox
Building a FinTech startup is like riding a carriage on a dirt road. Sure it’s exciting to follow the path less traveled, but say hello to the bumpiest ride of your life. In this analogy, let’s imagine that PreSeries, our machine-learning platform for startup investors, is a FinTech carriage that needs to find its way through the “data potholes”. With practice, navigating through the uncharted territory of startup data becomes a second nature, but the dream of a road paved with better data remains strong.
The 4 steps of working with startup data
But why is working with startup data such a challenge? At PreSeries, we are building an automated platform to scout and assess startups from around the globe in few clicks. It goes without saying that startup data is our lifeblood but is … well … scarce, often outdated, expensive to source, and you encounter missing data as often as the word “disrupt” at a tech conference. That’s the nature of working with early-stage private companies, they’re not really open books. But hey, hate the game not the players, right?
This is why we are very happy to announce that PreSeries is joining the FinTech Sandbox program. FinTech Sandbox is a Boston-based nonprofit that drives global FinTech innovation and collaboration. Their 6-month program provides access to data feeds and APIs from industry leading data partners, top quality cloud hosting from infrastructure partners, and much more. FinTech Sandbox is a thriving community of 2,200+ members, 70+ startups, and 40+ partners. We are thrilled to join this growing digital family!
This is an important step for us!
Being part of such an amazing community of FinTech passionate experts makes us really proud. If you are amazed by the team running FinTech Sandbox (jean donnelly, David Jegen, Sarah Biller or Mona M. Vernon to name just some), or the data partners (ThomsonReuters, S&P Global, Dun&Bradstreet or Edgar to name a few), you would also like to check the startup alumni section: Quantopian, CircleUp or Nutonian among others.
Access to new premium data streams will help us increase the quality of our machine learning models. We want to develop the right models and tools so that our users are later on able to access and customize depending on their preferences.
Lastly, we are excited to work with the FinTech Sandbox data partners and explore ways to develop long-standing relationships with them. We are advocates for more data to find and assess startups and are excited to open a whole new market in terms of data consumption with the venture capital community.
The PreSeries Dashboard
Our mission is to build the long-awaited crawling & machine-learning infrastructure needed for better startup scouting and analysis, so startup investors don’t have to! For venture capitalists, our SaaS platform is eliminating the time and cost of building their own machine-learning solution by democratizing access to predictive technologies. We are saving investors an estimated 2 to 5 years of development and between $6 to $10 million a year in development and maintenance cost (infrastructure, data providers, engineers and analysts salaries, etc.).
On a last note, I want to stress the fact that PreSeries is growing and looking for passionate people to join the team. If you want to help us make venture capital a more data-driven practice, fill out our application form! We’re looking for data engineers, data scientists, designers, front-end developers, as well as sales & marketing people. Looking forward to your application!
| PreSeries joins FinTech Sandbox | 10 | preseries-joins-fintech-sandbox-1067f51b04dd | 2018-05-11 | 2018-05-11 05:01:22 | https://medium.com/s/story/preseries-joins-fintech-sandbox-1067f51b04dd | false | 579 | PreSeries is a Machine-Learning-as-a-Service (MLaaS) company that generates automated real-time insights and scores about startups and their industries in order to guide investment decisions for investors big and small. 400k+ startups analyzed with A.I. | null | preseriestech | null | PreSeries | preseries | VENTURE CAPITAL,ARTIFICIAL INTELLIGENCE,MACHINE LEARNING,STARTUP,INVESTING | preseries | Fintech | fintech | Fintech | 38,568 | Fabien Durand | Business Swiss Army knife @preseries. Also, team @bigmlcom & @papisdotio. Co-organizer of #AIStartupBattle | d0b869bffd83 | TheFabienDurand | 97 | 414 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2017-12-09 | 2017-12-09 23:27:45 | 2017-12-10 | 2017-12-10 00:24:40 | 11 | false | en | 2017-12-10 | 2017-12-10 00:25:05 | 11 | 106872efeb82 | 3.00566 | 4 | 0 | 0 | Day 4 was rich for think-provoking talks instead of some short-term-useful papers. | 3 | NIPS 2017, Day 4 (orals + symposium)
Day 4 was rich for think-provoking talks instead of some short-term-useful papers.
First part of the day I was at Neuroscience track. It was quite refreshing in that sense, that Machine learning there is used for modeling, instead of solving some practical tasks. “Better statistics”, if you want.
“Model based …” talk presented a problem of “how to know, which brain neuron connected to which”, and solution which includes laser stimulation, measuring fluorescence as outcome and processing the results with quite complicated model. May be after finishing a PhD I consider such application of my ML-skills instead of building computer vision algorithms.
Model-based Bayesian inference of neural activity and connectivity from all-optical interrogation of a neural circuit
“Shape and Material from Sound” and “Scene Physics Acquisition via Visual De-animation” both from Josh Tenenbaum group presented nice way of teaching CNN some intuition about physics of world
Then I went to Symposium, the most vague and abstract topic: “Kinds of intelligence”.
Lucia Jacobs talked about types of navigation systems, which animals use: scentic, the most ancient, and visual. From other side: detection-based (I feel prey there) vs. prediction based (Rabbit will jump there) and their implications. She argued a lot of brain is about navigation in some environment structure and studying evolution of natural navigation systems could help in AI-creation work.
Alison Gopnik spoke about child brain and that is much more relevant for ML than adult brain. Slides are here
Alison Gopnik quotes Turing
Frequent terms in children cognitive science are much closer to ML, than in adult cogscience
Several take-aways:
More intelligent creature is — the longer childhood it haves. It is somehow necessary for future life.
Children are on “exploration” side of exploration-exploitation trade-off. “Bugs” of children behavior are features for learning.
3. Children generate more complex and non-trivial hypothesis about underlying process than adults are.
Children do “high-temperature search”
Key take-away: give your children safe and loving childhood :)
Demis Hassabis presented DeepMind philosophy and AlphaZero algorithm.
Principles are: learning, generic, grounded, general, active
“Grounded-vs-logic-based” is the most interesting and surprising to me dichotomy. Never thought about it.
AlphaZero takeways are: resulting system is long-term based, no concept of materiality, flexible and patience. Good advices for life, actually.
Next talk was from Gary Marcus and he devoted the whole talk to pointing why AlphaZero is not Zero, e.g. MonteCarlo tree search is quite improtant hand-crafted concept, and why we are far from solving AI. Also recommended Jerry Fodor books
Cognition is function of alknowledge, experience, algorithms
The rest of talks I half-missed. The most important take away from them is:
Beliefs and values can be infered from actions person take. Children are quite good at it, although it is possible for machine learning as well.
That is roughly all for me :)
Day 1 is here, days 2–3 TBD
| NIPS 2017, Day 4 (orals + symposium) | 70 | nips-2017-day-4-orals-symposium-106872efeb82 | 2018-02-12 | 2018-02-12 15:37:36 | https://medium.com/s/story/nips-2017-day-4-orals-symposium-106872efeb82 | false | 452 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Dmytro Mishkin | Computer Vision PhD student in Prague | 4785aab5d5ac | ducha.aiki | 92 | 23 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-07-22 | 2018-07-22 02:41:35 | 2018-07-22 | 2018-07-22 04:22:20 | 18 | false | en | 2018-07-22 | 2018-07-22 04:22:20 | 5 | 1068a7abc09b | 4.751887 | 0 | 0 | 0 | This blog post a continuation to Oh My Guudness….. Machine Learning. | 1 | Oh My Guudness….. Machine Learning II
This blog post a continuation to Oh My Guudness….. Machine Learning.
Please note that this post is for my future self to review the materials on this book without reading it all over again.
There are many statistical tools that help us to achieve machine learning goal of solving a task not only on training data but also to generalize. Estimators, bias and variance are foundational concepts useful to characterize notions of generalization, underfitting and overfitting.
Point estimation is the relation between input and target variables. We want to estimate a parameter. The true value of the parameter : 𝜽 (fixed and unknown). The estimated value of parameter : 𝜽 (a function of the data, random variable). Let { 𝑥 1 , … , 𝑥 𝑚 } be a set of 𝑚 independent and identically distributed (i.i.d.) data points . A point estimator or statistic is any function of the data: 𝜃 𝑚 = 𝑔(𝑥(1) , … , 𝑥(𝑚) ). Function estimation trys to predict a variable y given an input vector x. It describes the realtionship between y and x.
Bias of a model is the difference between the expected value of the estimator and the actual value. The difference between the expectation of the means of the samples we get from a population with mean θ and that population parameter, θ, itself is zero, because the sample means will be all distributed around the population mean. None of them will be the population mean exactly, but the mean of all the sample means will be exactly the population mean.
This is not the case for other parameters, such as the variance, for which the variance observed in the sample tends to be too small in comparison to the true variance. So if we want to estimate the population variance from the sample we divide by n−1, instead of n (Bassel’s correction) to correct the bias of the sample variance as an estimator of the population variance:
Variance and standard error is another important property of an estimator.
If we resampled the dataset and repeat the learning process, what is the variance in each of our learned parameters? A high variance implies that the parameters we learn are highly dependent on the dataset, which may indicate overfitting. A low variance implies that the parameters don’t change much when the dataset they train on is different, which implies that there’s less overfitting. We often compute the generalization error by takin the sample mean of the error on the test set.
Bias-Variance is a tradeoff between a model’s ability to minimize bias and variance. It is essential to understand how different sources of error that lead to bias and variance, which helps us to improve the data fitting process resulting in more accurate models. Bias-variance are actually side effects of one factor: the complexity of our model.
Consistency is the number of data points m in the dataset increase, the point estimates converge to the true value of the corresponding parameters. It ensures that the bias induced by the estimator diminishes as the number of data examples grows.
Maximum likelihood estimation is a method in statistics for estimating parameter(s) of a model for given data. The basic intuition behind MLE is that the estimate which explains the data best, will be the best estimator.
The main advantage of MLE is that it has asymptotic property. It means that when the size of the data increases, the estimate converges faster towards the population parameter. We use MLE for many techniques in statistics to estimate parameters.
Another interpretation of the MLE is that it minimizes teh ference between 2 probability distributions: one which si given by your learned parameters θ and the other which is the true underlying data generating distribution. The KL divergence is a measure of difference between two probability distributions, and is closely related to the cross entropy loss. Minimizing this is equivalent to maximizing the likelihood.
Conditional Log-Likelihood and MSE
Supervised Learning Algorithms are learning algorithms that learn to associate some input with some output, given an training set of examples of inputs x and outputs y.
Unsupervised Learning Algorithms s cluster analysis, which is used for exploratory data analysis to find hidden patterns or grouping in data.
Stochastic Gradient Descent is an iterative optimization algorithm that can be applied to functions that are a linear combination of differentiable functions. Main idea is that computing the true gradient may be inefficient or impractical, especially on a very large deep learning dataset, since we need to sum across all of the training examples, and there could be millions.
Conclusion
This wraps up Chapter 5. If any errors are found, please email me at [email protected]. Meanwhile follow me on my twitter here. The log for journey can be found here.
Reference
Ian Goodfellow, Yoshua, Bengio, and Aaron Courvlle. Deep Learning MIT Press, 2016. http://www.deeplearningbook.org
| Oh My Guudness….. Machine Learning II | 0 | oh-my-guudness-machine-learning-ii-1068a7abc09b | 2018-07-25 | 2018-07-25 11:56:13 | https://medium.com/s/story/oh-my-guudness-machine-learning-ii-1068a7abc09b | false | 822 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Willam Green | null | 1ed8f83521bf | dskswu | 25 | 46 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-06-28 | 2018-06-28 14:15:38 | 2018-06-28 | 2018-06-28 14:20:26 | 4 | false | en | 2018-06-28 | 2018-06-28 14:20:26 | 1 | 106c05d9b72c | 3.650943 | 1 | 0 | 0 | I first moved to NYC in 2006 to attend college at the age of 18. I was very privileged in that my parents paid for everything. After… | 5 | All the Money I Spent in NYC in 2011–12 (And Why I’ll Never Live There Again)
I first moved to NYC in 2006 to attend college at the age of 18. I was very privileged in that my parents paid for everything. After college, I set out to become an artist and the first thing I did was moved from the Upper West Side to Flatbush, Brooklyn.
I worked as a babysitter and I paid for all of my expenses. No more help from mom and dad. I had a one-bedroom apartment that I shared with a friend who lived in the living room. I also rented a studio space to paint in. I didn’t make enough money to survive so I had to rely on other (sporadic) forms of income and being extremely cheap.
My exit strategy was to go to graduate school. I spent the entire summer and fall preparing my applications to graduate schools. In the winter and spring of 2012, I was struggling financially and looking forward to moving out of Brooklyn. After getting accepted into graduate school, I finally left Brooklyn in July of 2012. I lived in Brooklyn for just over a year.
So here is the data… I recorded every single penny that I spent while I lived in Brooklyn from October 2011 — July 12, although the time I actually lived there was May ’11 — July ’12. This data focuses only on expenses, not earnings, because I earned some of this money through very shady means, which I am not proud of, because one of my jobs was suddenly cut because of layoffs. Nevertheless, the good things that came out of this period are that I learned how to manage my own finances on a very tight budget and I learned the NYC hustle.
Rent was my biggest expense at $500 a month. I shared a one-bedroom apartment with a friend. It cost $1000 total in Sunset Park, Brooklyn in 2011. I lived in the bedroom and my friend lived in the living room. We split the rent evenly because my friend was generous. But it was not the most comfortable living arrangement. The second biggest expense was food, including cost of groceries, restaurants/eating out, and snacks/food on the go. This accounted for between 10–30% of my monthly expenses. Next, a monthly unlimited metro ticket was $104 and sometimes I had to spend more if I lost it. Finally, I spent a good amount of my income on both my art studio and art supplies, sometimes up to 16% of my monthly expenses but not any more than that.
Monthly Expenses
Monthly Expenses by Percentage of Total
Over the course of 10 months, the two biggest anomalies occurred during Christmas holidays and during my move out of Brooklyn in July. In December, I spent extra money on a plane ticket home, gifts and mailing gifts. Similarly, in July, I spent money on an airplane ticket, mailing all of my belongings (about 30 boxes) via USPS to my new home, and hotel costs.
Total Expenses Over Time
Overall, I was able to consistently keep my monthly expenses below $1800. But there was an upward trend to spend more as I lived in Brooklyn longer excluding the month of July when I moved. This was accounted for by a change in my living situation. I got a new roommate and I elected to pay more for rent because I lived in the private bedroom. Perhaps I also got better at tracking my expenses too.
Granted, if I stayed in Brooklyn, I could have found a better job to live more securely and earn more income. But this was in 2012. Cost of living has sky rocketed since then. For a single artist with no debt, living so cheaply in NYC is possible but, let’s be honest, living in purely survival mode is no way to live.
It was beautiful to live in an artistic epicenter like Brooklyn. I learned a lot about myself and about making a living. But I would not choose to live there again because of the financial struggles. First, the cost of living is exorbitantly high. Second, the quality of life is poor — read: smelly, loud, dangerous and stressful. Third, I was far from my family. Fourth, I didn’t have reliable income. Fifth, the weather sucked.
Again, I love Brooklyn but I would never live here again — not even if I were making boat loads of money. Why? Because I can live on a similar budget very comfortably in many different places. The costs of living in NYC are just too many.
Monthly Expenses with Expense Categories
https://dataqueery.wordpress.com/2018/06/28/all-the-money-i-spent-in-nyc-in-2011-12-and-why-ill-never-live-there-again/
| All the Money I Spent in NYC in 2011–12 (And Why I’ll Never Live There Again) | 1 | all-the-money-i-spent-in-nyc-in-2011-12-and-why-ill-never-live-there-again-106c05d9b72c | 2018-07-04 | 2018-07-04 16:27:46 | https://medium.com/s/story/all-the-money-i-spent-in-nyc-in-2011-12-and-why-ill-never-live-there-again-106c05d9b72c | false | 782 | null | null | null | null | null | null | null | null | null | Money | money | Money | 35,618 | Jesse Ruiz | Artist, Queer, San Antonio, TX | c9d02c6dbd3a | jjr8888 | 1 | 4 | 20,181,104 | null | null | null | null | null | null |
0 | resource "aws_vpc" "main" {
cidr_block = "172.19.0.0/16"
enable_dns_hostnames = true
enable_dns_support = true
tags { Name = "VPC_name" }
}
resource "aws_subnet" "public-subnet" {
vpc_id = "${aws_vpc.main.id}"
cidr_block = "172.19.0.0/21"
availability_zone = "eu-central-1a"
tags { Name = "example_public_subnet" }
}
resource "aws_internet_gateway" "gateway" {
vpc_id = "${aws_vpc.main.id}"
tags { Name = "gateway_name" }
}
resource "aws_route_table" "public-routing-table" {
vpc_id = "${aws_vpc.main.id}"
route {
cidr_block = "0.0.0.0/0"
gateway_id = "${aws_internet_gateway.gateway.id}"
}
tags { Name = "gateway_name" }
}
resource "aws_route_table_association" "public-route-association" {
subnet_id = "${aws_subnet.public-subnet.id}"
route_table_id = "${aws_route_table.public-routing-table.id}"
}
resource "aws_iam_role" "spark_cluster_iam_emr_service_role" {
name = "spark_cluster_emr_service_role"
assume_role_policy = <<EOF
{
"Version": "2008-10-17",
"Statement": [ {
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "elasticmapreduce.amazonaws.com"
},
"Action": "sts:AssumeRole"
} ]
} EOF
}
resource "aws_iam_role_policy_attachment" "emr-service-policy-attach" {
role = "${aws_iam_role.spark_cluster_iam_emr_service_role.id}"
policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonElasticMapReduceRole"
}
resource "aws_iam_role" "spark_cluster_iam_emr_profile_role" {
name = "spark_cluster_emr_profile_role"
assume_role_policy = <<EOF
{
"Version": "2008-10-17",
"Statement": [ {
"Sid": "",
"Effect": "Allow",
"Principal": { "Service": "ec2.amazonaws.com" },
"Action": "sts:AssumeRole"
} ]
} EOF
}
resource "aws_iam_role_policy_attachment" "profile-policy-attach" {
role = "${aws_iam_role.spark_cluster_iam_emr_profile_role.id}"
policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonElasticMapReduceforEC2Role"
}
resource "aws_iam_instance_profile" "emr_profile" {
name = "spark_cluster_emr_profile"
role = "${aws_iam_role.spark_cluster_iam_emr_profile_role.name}" }
ssh-keygen -t rsa
ssh-keygen -f cluster-key -e -m pem
resource "aws_key_pair" "emr_key_pair" {
key_name = "emr-key"
public_key = "${file("/.ssh/cluster-key.pub")}"
}
resource "aws_s3_bucket" "logging_bucket" {
bucket = "emr-logging-bucket"
region = "eu-central-1" versioning { enabled = "enabled" }
}
resource "aws_security_group" "master_security_group" {
name = "master_security_group"
description = "Allow inbound traffic from VPN"
vpc_id = "${aws_vpc.main.id}" # Avoid circular dependencies stopping the destruction of the cluster
revoke_rules_on_delete = true # Allow communication between nodes in the VPC
ingress {
from_port = "0"
to_port = "0"
protocol = "-1"
self = true
}
ingress {
from_port = "8443"
to_port = "8443"
protocol = "TCP"
}
egress {
from_port = "0"
to_port = "0"
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
} # Allow SSH traffic from VPN
ingress {
from_port = 22
to_port = 22
protocol = "TCP"
cidr_blocks = ["123.123.0.0/16"]
} #### Expose web interfaces to VPN # Yarn
ingress {
from_port = 8088
to_port = 8088
protocol = "TCP"
cidr_blocks = ["123.123.0.0/16"]
} # Spark History
ingress {
from_port = 18080
to_port = 18080
protocol = "TCP"
cidr_blocks = ["123.123.0.0/16"]
} # Zeppelin
ingress {
from_port = 8890
to_port = 8890
protocol = "TCP"
cidr_blocks = ["123.123.0.0/16"]
} # Spark UI
ingress {
from_port = 4040
to_port = 4040
protocol = "TCP"
cidr_blocks = ["123.123.0.0/16"]
} # Ganglia
ingress {
from_port = 80
to_port = 80
protocol = "TCP"
cidr_blocks = ["123.123.0.0/16"]
} # Hue
ingress {
from_port = 8888
to_port = 8888
protocol = "TCP"
cidr_blocks = ["123.123.0.0/16"]
}
lifecycle {
ignore_changes = ["ingress", "egress"]
}
tags { name = "emr_test" }
}
resource "aws_security_group" "slave_security_group" {
name = "slave_security_group"
description = "Allow all internal traffic"
vpc_id = "${aws_vpc.main.id}"
revoke_rules_on_delete = true # Allow communication between nodes in the VPC
ingress {
from_port = "0"
to_port = "0"
protocol = "-1"
self = true
}
ingress {
from_port = "8443"
to_port = "8443"
protocol = "TCP"
}
egress {
from_port = "0"
to_port = "0"
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
} # Allow SSH traffic from VPN
ingress {
from_port = 22
to_port = 22
protocol = "TCP"
cidr_blocks = ["123.123.0.0/16"]
}
lifecycle {
ignore_changes = ["ingress", "egress"]
}
tags { name = "emr_test" }
}
provider "aws" { region = "eu-central-1" }
resource "aws_emr_cluster" "emr-spark-cluster" {
name = "EMR-cluster-example"
release_label = "emr-5.9.0"
applications = ["Ganglia", "Spark", "Zeppelin", "Hive", "Hue"]
ec2_attributes {
instance_profile = "${aws_iam_instance_profile.emr_profile.arn}"
key_name = "${aws_key_pair.emr_key_pair.key_name}"
subnet_id = "${aws_vpc.main.id}"
emr_managed_master_security_group = "${aws_security_group.master_security_group.id}"
emr_managed_slave_security_group = "${aws_security_group.slave_security_group.id}" }
master_instance_type = "m3.xlarge"
core_instance_type = "m2.xlarge"
core_instance_count = 2
log_uri = "${aws_s3_bucket.logging_bucket.uri}"
tags { name = "EMR-cluster" role = "EMR_DefaultRole" }
service_role = "${aws_iam_role.spark_cluster_iam_emr_service_role.arn}" }
resource "aws_emr_instance_group" "task_group" {
cluster_id = "${aws_emr_cluster.emr-spark-cluster.id}"
instance_count = 4
instance_type = "m3.xlarge"
name = "instance_group"
}
terraform plan
terraform apply
terraform destroy
terraform {
backend "s3" {
bucket = "terraform-bucket-name"
region = "eu-central-1"
}
}
| 23 | null | 2018-09-18 | 2018-09-18 19:03:20 | 2017-11-19 | 2017-11-19 16:47:00 | 0 | false | en | 2018-09-18 | 2018-09-18 19:25:53 | 5 | 106c7893f84a | 5.483019 | 0 | 0 | 0 | This post is about setting up the infrastructure to run yor spark jobs on a cluster hosted on Amazon. | 5 | Terraforming a Spark cluster on Amazon
This post is about setting up the infrastructure to run yor spark jobs on a cluster hosted on Amazon.
Before we start, here is some terminology that you will need to know:
Amazon EMR — The Amazon service that provides a managed Hadoop framework
Terraform — A tool for setting up infrastructure using code
At the end of this post you should have an EMR 5.9.0 cluster that is set up in the Frankfurt region with the following tools:
Hadoop 2.7.3
Spark 2.2.0
Zeppelin 0.7.2
Ganglia 3.7.2
Hive 2.3.0
Hue 4.0.1
Oozie 4.3.0
By default EMR Spark clusters come with Apache Yarn installed as the resource manager.
We will need to set up an S3 bucket, a network, some roles , a key pair and the cluster itself. Let’s get started.
VPC setup
A VPC (Virtual private cloud) is a virtual network to which the cluster can be assigned. All nodes in the cluster will become part of a subnet within this network.
To set up a VPC in terraform fist create a VPC resource:
Then we can create a public subnet. The availability zone is generally optional, but for this exercise you should have it as some of the settings that we choose are only compatible with eu-central-1a (such as the types of instances that we use).
We then create a gateway for the public subnet.
A routing table is then needed to allow traffic to go through the gateway.
Lastly, the routing table must be assigned to the to the subnet to allow the traffic in and out from it.
Roles
Next we need to set up some roles for the EMR cluster.
First a service role is necessary. This role defines what the cluster is allowed to do within the EMR environment.
Note that EOF tags imply content with a structure. These need to have no trailing spaces, which leads to strange indentation.
This service role needs a policy attached. In this example we will simply used the default EMR role.
Next we need a role for the EMR profile.
This role is assigned the EC2 default role, which defines what the cluster is allowed to do in the EC2 environment.
Lastly the instance profile, which is used to pass the role’s details to the EC2 instances.
Key setup
Next you will need ssh keys that will allow you to ssh into the master node.
To create the ssh key and .pem file run the following command:
Enter a key name, such as cluster-key, and enter no password. Then create a pem file from the private key.
Lastly create a key pair in terraform, linking to the key that you have created
S3
Next we need an s3 bucket. You may need more that one depending on your project requirements. In this example we will simply create one for the cluster logs.
Security groups
Next we need a security group for the master node. This security group should allow the nodes to communicate with the master node, but also to be accessed via certain ports from your personal VPN.
You can find your public IP address by simply going to this site.
Let’s assume that your public address is 123.123.123.123 with subnet /16.
We also need a security group for the rest of the nodes. These nodes should only communicate internally.
Note that when you create 2 security groups ircular dependencies are created. When destroying the terraformed infrastructure in such a case, you need to delete the associations of the security groups before deleting the groups themselves. The revoke_rules_on_delete option takes care of this automatically.
Cluster
Finally, now that we have all the components, we can set up the cluster.
First add the provider
Then we add the cluster itself
You can add task nodes as follows
Saving the file
Save the file as your prefered name with the extention `.tf`
Creating the cluster
To run the terraform script ensure the following:
Run the following to make sure that your setup is valid:
If there are no errors, you can run the following to create the cluster:
Destroying the cluster
To take down all the terraformed infrastructure run the following:
You can add the following to you file if you want the terraform state file to be saved to an S3 bucket. This file allows terraform to know the last state of terraforming your infrastructure (what has been created or destroyed)
Originally published at intothedepthsofdataengineering.wordpress.com on November 19, 2017.
| Terraforming a Spark cluster on Amazon | 0 | terraforming-a-spark-cluster-on-amazon-106c7893f84a | 2018-09-18 | 2018-09-18 19:25:53 | https://medium.com/s/story/terraforming-a-spark-cluster-on-amazon-106c7893f84a | false | 1,453 | null | null | null | null | null | null | null | null | null | Terraform | terraform | Terraform | 656 | Kristina Georgieva | Data scientist, software engineer, writer, blogger, ammature painter | 966717487dbc | kristina.s.georgieva | 5 | 8 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | cc02b7244ed9 | 2018-03-02 | 2018-03-02 07:02:03 | 2018-03-02 | 2018-03-02 07:04:35 | 0 | false | en | 2018-03-02 | 2018-03-02 07:04:35 | 11 | 106cc3429254 | 1.981132 | 0 | 0 | 0 | PRODUCTS & SERVICES | 5 | Tech & Telecom news — Mar 2, 2018
PRODUCTS & SERVICES
Video / Entertainment
Sky, currently in the middle of an exciting battle between two giants (Disney and Comcast) trying to acquire the company, just announced a new European partnership with Netflix, that now will be bundled with Sky TV subscription packages. This looks more consistent with Comcast’s strategy than with Disney’s (Story)
In their IPO prospectus, Spotify implicitly positions as a “new Netflix”, claiming they’re an agent contributing to massive change in the way people consume entertainment, and to modernise a previously “sclerotic” business. They also admit that average revenue per user fell -20% in the last 2 years, to €5.24/month (Story)
Digital ID
The top 4 US mobile carriers are deploying a new mobile authentication platform, potentially useful as a safe way for users to identify across different apps. This is based on a specification by the GSMA’s Mobile Authentication Task Force, and ensures interoperability with the association’s Mobile Connect technology (Story)
Advertising
Signs emerging of a potential “tectonic shift” in app monetisation. Procter & Gamble just announced a reduction of digital advertising budget by $200m last year, claiming it had become “wasteful”. The company has been a leading critic of lack of transparency, potential fraud and brand safety issues on YouTube or Facebook (Story)
Enterprise
VMWare, currently under evaluation by parent company Dell (which could reportedly be considering a “reverse takeover”) just published 4Q17 results, beating analysts expectations with +14% revenue growth (to $2.31bn/Q), a sign of strength amid their current moves to adapt to customers’ migration to the cloud (Story)
Regulation
EU regulators increasing pressure on internet companies, including Google, Facebook and Twitter, to deal with illegal content at their sites / apps. New published guidelines ask for the apps to remove illegal or terrorist-related content within one hour of it being flagged by local law enforcement by the EU’s police agency (Story)
HARDWARE ENABLERS
Networks
With the MWC just closed, first analysis are being published, and a common theme is that focus has shifted this year from consumer gadgets (an increasingly mature and commoditised space) to 5G networks, which now seem “closest to becoming a reality”, and the enterprise digitisation opportunities that they create (Story)
The US regulator FCC has unveiled a new plan to advance 5G deployments, focused on updating current wireless infrastructure regulations, “written for previous technology generations”, which “threaten US 5G leadership”. This could streamline procedures to deploy wireless sites and reduce regulatory financial burdens (Story)
SOFTWARE ENABLERS
Microsoft is working hard not to be left behind in the race of hyper-scale cloud providers to incorporate Artificial Intelligence features that make their services more attractive for developers. The company’s “Cognitive Services” unit just announced a new collection of cloud-hosted APIs for different AI capabilities (Story)
VENTURE CAPITAL
SoftBank’s Vision Fund keeps making big deals, with apps linking physical and digital worlds (including taxi hiring) as a key area of focus. Now they just backed the notion that food delivery apps have a bright future, by investing $535m in DoorDash, a startup which just turned into a unicorn ($1.4bn implicit valuation) (Story)
Subscribe at https://www.getrevue.co/profile/winwood66
| Tech & Telecom news — Mar 2, 2018 | 0 | tech-telecom-news-mar-2-2018-106cc3429254 | 2018-03-02 | 2018-03-02 07:04:37 | https://medium.com/s/story/tech-telecom-news-mar-2-2018-106cc3429254 | false | 525 | The most interesting news in technology and telecoms, every day | null | null | null | Tech / Telecom News | tech-telecom-news | TECHNOLOGY,TELECOM,VIDEO,CLOUD,ARTIFICIAL INTELLIGENCE | winwood66 | Spotify | spotify | Spotify | 5,401 | C Gavilanes | food, football and tech / [email protected] | a1bb7d576c0f | winwood66 | 605 | 92 | 20,181,104 | null | null | null | null | null | null |
|
0 | import numpy as np
from sklearn import preprocessing, cross_validation
import pandas as pd
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/breast-cancer-wisconsin.data')
df.columns = ['id','clump_thickness','unif_cell_size','unif_cell_shape','marg_adhesion','single_epith_size','bare_nuclei','bland_chrom','norm_nucleoli','mitoses','class']
df.drop(['id'], inplace=True, axis=1)
df.replace('?', -99999, inplace=True)
df['class'] = df['class'].map(lambda x: 1 if x == 4 else 0)
X = np.array(df.drop(['class'], axis=1))
y = np.array(df['class'])
scaler = preprocessing.MinMaxScaler()
X = scaler.fit_transform(X)
X_train, X_test, y_train, y_test = cross_validation.train_test_split(
X, y, test_size=0.2)
from __future__ import print_function
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Activation
import tensorflow as tf
model = Sequential()
model.add(Dense(9, activation='sigmoid', input_shape=(9,)))
model.add(Dense(27, activation='sigmoid'))
model.add(Dropout(0.25))
model.add(Dense(54, activation='sigmoid'))
model.add(Dropout(0.25))
model.add(Dense(27, activation='sigmoid'))
model.add(Dropout(0.25))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer=keras.optimizers.Adam(), loss=keras.losses.mean_squared_logarithmic_error)
model.fit(X_train, y_train, batch_size=30, epochs=2000, verbose=1, validation_data=(X_test, y_test))
Output:
Epoch 2000/2000
558/558 [==============================] - 0s 320us/step - loss: 0.0104 - val_loss: 0.0182
loss = model.evaluate(X_test, y_test, verbose=1, batch_size=30)
print("Final result is {}".format(100 - loss*100))
Output:
Final result is 98.18395614690546
| 19 | null | 2018-09-01 | 2018-09-01 10:24:56 | 2018-09-01 | 2018-09-01 10:53:27 | 2 | false | en | 2018-09-01 | 2018-09-01 10:53:27 | 3 | 106cf846cac0 | 1.564465 | 1 | 2 | 0 | The dataset I’m using in this project is | 5 | Classifying Breast Cancer %98.18 accurate with KERAS
Neural Network
The dataset I’m using in this project is
Breast Cancer Wisconsin (Original) Data Set
by
Dr. WIlliam H. Wolberg (physician)
University of Wisconsin Hospitals
Madison, Wisconsin, USA
Creating, reshaping, scaling and splitting the data
Link to the data can be found in here.
Imports
Reading the data
Reshaping
Adding feature colums to the dataframe
Dropping id column because is has no correlation with the class
Replacing empty data with -99999 to be a outlier
Mapping class values to binary, it is 2 and 4 in our data. (2 for benign, 4 for malignant)
Final dataframe
Final dataframe
Scaling the data
Creating X(features) and y(classes)
Creating scaler instance
Finally scaling the data
Splitting the data
Creating the model and training
Usual imports
Creating the model
Creating the model instance
Adding Layers to the model
Compiling the model
I’m using Adam as optimizer and mean squared logarithmic error as loss function.
Traning the model
Evaluating results
Final result is %98.18 accuracy.
All the code can be found in the notebook on colab.
| Classifying Breast Cancer %98.18 accurate with KERAS | 1 | classifying-breast-cancer-98-18-accurate-with-keras-106cf846cac0 | 2018-09-01 | 2018-09-01 10:53:27 | https://medium.com/s/story/classifying-breast-cancer-98-18-accurate-with-keras-106cf846cac0 | false | 313 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Tayyip Gören | SOFTWARE ENGINEER, DEVELOPER, COMPUTER ENTHUSIAST, SPORTSMAN | 625ac4ca06ec | tayyipgoren | 9 | 5 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-07-03 | 2018-07-03 04:54:18 | 2018-07-03 | 2018-07-03 04:55:25 | 0 | false | en | 2018-07-03 | 2018-07-03 04:55:25 | 0 | 106d1fc4999c | 0.520755 | 0 | 0 | 0 | In the editing side of photography, it’s the data that counts. Raw data will be equally accessible by photographers and AI system. Since AI… | 1 | Photo editing
In the editing side of photography, it’s the data that counts. Raw data will be equally accessible by photographers and AI system. Since AI is all about using algorithm and data processing, editing tasks can be seamlessly achieved by manipulating certain parameters. To know how AI computer vision is transforming photography, you need to look at some of the recent advancements in portrait mode. Deep learning algorithms on compact devices have made it possible to enhance picture quality by giving them an appeal that was never achieved in the past.
AI can accurately append metadata and keywords to the image for making them easily discover able. AI systems analyze keywords to perform image lookup and perform editing to satisfy the client’s needs. The resulting preview can be readied for client’s approval within minutes if not seconds.
| Photo editing | 0 | photo-editing-106d1fc4999c | 2018-07-03 | 2018-07-03 04:55:25 | https://medium.com/s/story/photo-editing-106d1fc4999c | false | 138 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Raja S | null | c9f788600f07 | rajaagnes9 | 0 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 54d8b5a09b93 | 2017-09-26 | 2017-09-26 15:50:08 | 2017-09-26 | 2017-09-26 15:56:57 | 2 | false | en | 2017-10-19 | 2017-10-19 06:31:23 | 13 | 106f2e50bd9f | 5.892767 | 5 | 0 | 0 | Grand Prize ($6K) winners at OpenEd.ai Hackathon 2017 | 5 | ReadEx: an intelligent reading assistant
Grand Prize @ OpenEd.ai Hackathon 2017
Before we begin describing the application, a quick demo of the intelligent reader ReadEx is presented below, and full source code can be found on GitHub here:
Motivation
People are generally faced with short reading & attention spans. We often find people trying to read for long hours and yet gain little apart from false feeling of gratitude.
Have you ever felt that you were too dizzy while reading a chapter only to later realize that haven’t actually understood the crux of it?
Have you cherished easily getting away with reading assignments with no questionnaire to follow?
Have you encountered a question right from the text and yet haven’t been able to answer it?
If your answer to any one or more of the above questions was yes, then you are in dire need of our app! With this simple mobile application, we aim to revolutionize the education sector by providing immense help in improving one’s reading habits.
With the advent of the social age, attention spans of individuals have significantly gone down. It has been observed in various studies that when people try to read for long hours, they seldom are able to grasp concepts. But having spent some time reading gives them a false sense of understanding it.
Our personal hardships and non-availability of a universal assisted reading platform led to making of ReadEx. We have often undergone intense reading exercises at IITD, and are faced with attention loss after a short while. ReadEx, an intelligent assistive reading application, tackles this problem at its core, while also providing a one-stop solution for all reading hardships an avid reader has faced.
Our solution: ReadEx!
App Screenshots
ReadEx helps users continuously gauge their understanding of the material by challenging them with questions generated in real time.
While you read through the material, Merlin (our intelligent chatbot) figures out your progress and crafts intelligent questions which pop up as counters at the side of your screen.
Not only this, ReadEx also tries to be your intelligent encyclopedia. You can scan part of any interesting article and it suggests you a few topics that are closely related to it. You can go ahead and tap the suggestion box to read the same.
It also comes along with a simple search engine (waiting to get sophisticated in future), which when given with a search query fetches an appropriate article for the topic.
Questions you fail to answer automatically move to the flashcards section which you can later revisit to revise concepts
How does ReadEx do it all under the hood?
The flowchart above serves as a layman explanation of the architecture of the comprehensive reading application. The current version of ReadEx requires the article, that is being read, to be in a format easily indexable by an application for text. ReadEx majorly exposes the following three fronts of reading documents or articles for people:
Search using keywords for an article on the web
Open a locally stored document (PDF, text etc)
Scan an interesting piece of article to read similar articles
Flowchart
All the above three methods of opening an article start at different ends of the pipeline and eventually coalesce to form a single workflow pipeline towards the end as indicated in the flowchart.
Reading an article
Searching the web
For supporting a keyword search query, we do a simple lookup on Wikipedia using their APIs and retrieve an article for the user.
Following is a code snippet explaining how you can retrieve an article from Wikipedia based on a short query string in your app. We write the snippet in Node which is what we had used for development:
Opening a local document
In the second method of opening an article, which essentially involves opening a locally held document, the application first uploads it to the server if the text is not so simply indexable like in a PDF. At the server end, we use OCR to process it and create a simple application-indexable text/html file that is displayed on the application end.
Scanning a document
The “scan an article” method of opening a new article uses photo OCR to extract text out of the scanned article which is then processed through the use of IBM Watson Natural Language APIs to detect concepts in it, which are then searched using an interface similar to that for the first part. A short handy snippet for concept extraction from the text in Node is shown below:
Question Generation
The question
Opening an article is followed by the phase of question generation. When a reader scrolls through the text file, we use the position of the scroll to estimate the approximate line the user is likely to be reading. The application then takes up a piece of text (of around a paragraph) above that point and send it to a rest API on the server-end, which replies with questions pertaining to the topic. We used the wiki trivia question generation code for quick prototyping. This is a simple python application which uses a rule-based mechanism to generate a blank in a given statement or a paragraph. The blanked statement then serves as our simple question.
A snippet for determining the content just read by the user in WebView is shown below:
Generating possible answers
To generate the set of possible answers to the above-generated question, we make use of the WordNet which is a large lexical database of English in layman terms. It groups nouns, verbs, adjectives and adverbs into sets of cognitive synonyms, each expressing a distinct concept. The web interface to the lexical database can be found here. The blanked word is searched over wordnet using the NLTK library and hyponyms of the hypernym of that words act as the set of all possible answers. This basically serves the purpose to find words of similar contextual meaning that fit right and complete the sentence.
Android App
The front end of this project is an Android app available here. The app is responsible for rendering all data received from the server and also generating and storing flashcards. Any document selected by the user is rendered in a WebView. This allows us to maintain the font styles of Wiki and PDF documents. It also allowed a simple way of enhancing text presentated to the user, by using HTML and CSS.
Flashcards
The flashcards are created in the app for every question answered by the user. The questions and their correct answers are stored in a local SQLite database when the user interacts with the chatbot. This database is used to populate the flashcards. We use the cardslib library to render the flashcards. The following is the code snippet with the SQL query that saves questions and answers to the local SQLite database.
Chatbot
Questions that are sent to the application from the server end and are then presented to the user via a chatbot interface. This has been primarily done to gamify and make fun the activity of reading so as to keep the interest amongst our readers alive. Number of questions available at any time appear to user as a number beside the action button on the application. When a user opens the chat, Merlin (the chatbot), asks questions as chat messages, answers to which can be given as the user chat messages.
Question incorrectly answered end up in the flash card section of our application. The flash card section has been made from the point of view of revising answers to questions user has previously made mistake in.
Where do we head with ReadEx?
We believe that ReadEx can find widespread applications in the education sector. We predict if education in countries like India is to scale to the masses, personalized and intelligent apps like ReadEx will play a major role. ReadEx can be be useful to quality question generation for instructors, helping people to gauge their own understanding of material, preparing for exams, and much more.
In the near future, we plan to make improvements in the question generation algorithm and come up with an extremely user-friendly prototype that we can test with real users to validate our assumptions. The initial code written for ReadEx was a part of OpenEd AI Hackathon, 2017 and will remain open-sourced for the community.
Thank you note
We are thankful to the OpenEd AI team for providing us with this wonderful opportunity. We are also thankful to Mr. Anshul Bhagi and Miss Nikhila Ravi, Co-Directors of OpenEd.ai, for being supportive and encouraging at all times.
About us
We are a set of four undergraduates pursuing Computer Science at IIT Delhi.
Mayank Rajoria
Harsh Arya
Prakhar Gupta
Prakhar Agrawal
| ReadEx: an intelligent reading assistant | 101 | readex-an-intelligent-reading-assistant-106f2e50bd9f | 2018-04-24 | 2018-04-24 17:08:39 | https://medium.com/s/story/readex-an-intelligent-reading-assistant-106f2e50bd9f | false | 1,460 | Open-source AI for Education Initiative | null | null | null | OpenEd.ai | opened-ai | AI,EDUCATION,EDUCATION TECHNOLOGY,MACHINE LEARNING | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Prakhar Agrawal | null | b1fa313aaf9a | prakhar0409 | 54 | 69 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-08-08 | 2018-08-08 09:31:17 | 2018-08-08 | 2018-08-08 09:40:11 | 1 | false | ru | 2018-08-08 | 2018-08-08 09:40:11 | 5 | 106fb9cfdcce | 1.550943 | 8 | 0 | 0 | Приятно, что в «Гардиан» читают наш канал 😉. Только недавно я рассказывал про угрозу наступления новой «зимы ИИ». И вот эстафету темы… | 1 | Самопровозглашенные “властители дум” в области ИИ накликивают его новую «зиму»
“Преувеличенные утверждения в прессе об интеллекте компьютеров не уникальны для нашего времени. Иллюстрация: Сара Роббинс. Источник https://goo.gl/JfmBLE
Приятно, что в «Гардиан» читают наш канал 😉. Только недавно я рассказывал про угрозу наступления новой «зимы ИИ». И вот эстафету темы подхватила «Гардиан».
В результате тема, как говорится, взлетела. Статья Оскара Швартца ‘The discourse is unhinged’: how the media gets AI alarmingly wrong заняла высшее место в топе ИИ новостей aiweekly.co за прошлую неделю.
Хорошая статья. Всем рекомендую.
Оттолкнувшись от того же, что и я (спекулятивный оптимизм и страшилки про ИИ ведут нас к новой «зиме ИИ»), Оскар Швартц рассмотрел роль социальных медиа в этом процессе.
Швартц показал, что главная опасность не в очевидной и понятной погоне за сенсациями технически неграмотных журналистов.
«Социальные медиа позволили самопровозглашенным “властителям дум” в области ИИ зарабатывать на шумихе вокруг этой темы, не делая ничего, кроме создания низкокачественных статей, перефразируя Илона Маска».
В результате чего тематика ИИ из «интересных исследований» превращается в «сенсационную хрень». Но это не ново. Так уже было и во 2й половине ХХ века.
Главная же опасность заключается в многократно возросшей мощи социальных медиа, которые уже не просто влияют на акценты дискурсов, а запускают настоящие «эпидемии дезинформации».
В результате мы имеет настоящую «эпидемию дезинформации об ИИ», которая, как всякая эпидемия, распространяется сама по себе, не требуя никаких дополнительных усилий и заинтересованных лиц.
Итогом этого становится тотальное смещение дискурса с реально важных вопросов для исследований и разработок ИИ:
✔️ на «крайне левый» — потребительско-оптимистический полюс (машинное обучение решит все проблемы ИИ, новые ИИ-гаджеты сделают жизнь прекрасной и т.п.)
- и на «крайне правый» — алармистский полюс (ИИ выходит из под контроля и несет смертельную угрозу человечеству).
Такое смещение дискурса сбивает фокус интересов и частного бизнеса, и государств.
Получается очередной «топор под компасом», ведущий в тупик большинство ИИ-разработок по всему миру.
А это путь к наступлению новой «зимы ИИ».
Таково резюме. А полный текст соображений на эту тему читайте здесь.
• Статья ‘The discourse is unhinged’: how the media gets AI alarmingly wrong
Мой рассказ про новую «зиму ИИ»
_________________________
Хотите читать подобные публикации? Подписывайтесь на мой канал в Телеграме, Medium, Яндекс-Дзене
Считаете, что это стоит прочесть и другим? Дайте им об этом знать, кликнув на иконку “понравилось”
| Самопровозглашенные “властители дум” в области ИИ накликивают его новую «зиму» | 13 | самопровозглашенные-властители-дум-в-области-ии-накликивают-его-новую-зиму-106fb9cfdcce | 2018-08-08 | 2018-08-08 09:40:11 | https://medium.com/s/story/самопровозглашенные-властители-дум-в-области-ии-накликивают-его-новую-зиму-106fb9cfdcce | false | 358 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Сергей Карелов | Малоизвестное интересное на стыке науки, технологий, бизнеса и общества - содержательные рассказы, анализ и аннотации | 4fa09a8333b2 | sergey_57776 | 2,156 | 86 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 5da80eb1d347 | 2018-06-22 | 2018-06-22 19:07:20 | 2018-06-22 | 2018-06-22 19:26:58 | 1 | false | en | 2018-06-26 | 2018-06-26 16:15:47 | 2 | 106fbbaf45fe | 0.898113 | 1 | 0 | 0 | Welcome to the FinTech Studios Apollo Blog. I will be posting tips, suggestions, use cases, and other articles related to the FinTech… | 4 | FinTech Studios Apollo Blog
Welcome to the FinTech Studios Apollo Blog. I will be posting tips, suggestions, use cases, and other articles related to the FinTech Studios Apollo application.
Apollo is an AI enriched intelligent search and analytics platform for Wall Street.
Leveraging artificial intelligence and natural language processing, Apollo delivers unparalleled market insights, analytics and information. Apollo uses millions of sources to gather, index and tag over a million news, research, blog, and industry documents every day focusing on Public Companies, Private Companies, People, Topics, Industries, and Regions
Apollo News Page
Apollo delivers Relevant News based on your behavior, your likes, and dislikes, along with our proprietary Relevancy Score, giving you very focused results. Once you have your results, you can filter, click on tags, analyze a company, get a quote, and look at analytics, and go very deep in your analysis.
Apollo also offers the ability to create sophisticated Channels that give you one click access to specific filtered News, and Dashboards that offer the ability to aggregate analytics, channels, and other information in one place.
Go to Apollo.fintechstudios.com to sign up for a free trial.
| FinTech Studios Apollo Blog | 23 | fintech-studios-apollo-blog-106fbbaf45fe | 2018-07-05 | 2018-07-05 21:15:59 | https://medium.com/s/story/fintech-studios-apollo-blog-106fbbaf45fe | false | 185 | Blog for all thing relating to Apollo | null | null | null | FTS Apollo Blog | fts-apollo-blog | FINTECH,TECHNOLOGY,FINANCIAL SERVICES | FinTechStudios | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Rich Taylor | null | 230a45f3e58 | rich_38190 | 1 | 2 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-08-30 | 2018-08-30 09:01:03 | 2018-08-30 | 2018-08-30 09:01:12 | 0 | false | en | 2018-08-30 | 2018-08-30 09:01:12 | 1 | 106fef865d0c | 1.192453 | 0 | 0 | 0 | Download ebook Natural Language Understanding Download EBOOK EPUB KINDLE By James F. Allen | 1 | Read Human Development: A Life-Span View By Robert V. Kail EPUB PDF #Mobi
Download ebook Natural Language Understanding Download EBOOK EPUB KINDLE By James F. Allen
Read Online : https://downloadebooks.us/?q=Natural+Language+Understanding
From a leading authority in artificial intelligence, this book delivers a synthesis of the major modern techniques and the most current research in natural language processing. The approach is unique in its coverage of semantic interpretation and discourse alongside the foundational material in syntactic processing.
.
.
.
.
.
.
.
.
.
.
.
.
.
Natural Language Understanding PDF Online, Natural Language Understanding Books Online, Natural Language Understanding Ebook , Natural Language Understanding Book , Natural Language Understanding Full Popular PDF, PDF Natural Language Understanding Read Book PDF Natural Language Understanding, Read online PDF Natural Language Understanding, PDF Natural Language Understanding Popular, PDF Natural Language Understanding , PDF Natural Language Understanding Ebook, Best Book Natural Language Understanding, PDF Natural Language Understanding Collection, PDF Natural Language Understanding Full Online, epub Natural Language Understanding, ebook Natural Language Understanding, ebook Natural Language Understanding, epub Natural Language Understanding, full book Natural Language Understanding, online Natural Language Understanding, online Natural Language Understanding, online pdf Natural Language Understanding, pdf Natural Language Understanding, Natural Language Understanding Book, Online Natural Language Understanding Book, PDF Natural Language Understanding, PDF Natural Language Understanding Online, pdf Natural Language Understanding, read online Natural Language Understanding, Natural Language Understanding James F. Allen pdf, by James F. Allen Natural Language Understanding, book pdf Natural Language Understanding, by James F. Allen pdf Natural Language Understanding, James F. Allen epub Natural Language Understanding, pdf James F. Allen Natural Language Understanding, the book Natural Language Understanding, James F. Allen ebook Natural Language Understanding, Natural Language Understanding E-Books, Online Natural Language Understanding Book, pdf Natural Language Understanding, Natural Language Understanding E-Books, Natural Language Understanding Online , Read Best Book Online Natural Language Understanding
#readOnline #pdfdownload #online #TXT #ReviewOnline
| Read Human Development: A Life-Span View By Robert V. Kail EPUB PDF #Mobi | 0 | read-human-development-a-life-span-view-by-robert-v-kail-epub-pdf-mobi-106fef865d0c | 2018-08-30 | 2018-08-30 09:01:12 | https://medium.com/s/story/read-human-development-a-life-span-view-by-robert-v-kail-epub-pdf-mobi-106fef865d0c | false | 316 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | vendula | null | a40cfb68604b | vendula_96265 | 0 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | d1c0fb17ab3 | 2017-11-15 | 2017-11-15 19:25:00 | 2017-11-21 | 2017-11-21 14:29:32 | 2 | false | en | 2018-04-09 | 2018-04-09 13:47:41 | 14 | 1072d34b9821 | 2.288994 | 6 | 0 | 0 | Leading cities and countries are transforming with IoT and AI | 5 | The incredible ways governments use the Internet of Things
Leading cities and countries are transforming with IoT and AI
Pawel Nolbert / Unsplash
Government and innovation? Yes: two recent surveys highlight how governments are embracing technology to tackle issues from public safety to social services.
The 2017 Digital Cities Survey ranks cities on how they leverage technology to better serve their citizens.[1] An important takeaway: city size and budget aren’t as important as visionary leadership who recognizes technology’s potential to transform communities and attract new citizens.
“Techies, startups and recent college graduates seek to live in places that create the best citizen services for them — cities that reduce their traffic, save them time finding parking, allow them to pay taxes online, and are on the leading edge of innovation,” said Marquis Cabrera, IBM’s Global Leader of Digital Government Transformation. “Cities are planting fertilizer for greener economic pastures; they’re setting the conditions for new jobs, new business creation, and new buzz about the city.”
So what technology excites the leading digital cities?
81% are actively considering the potential of the Internet of Things.[2] Imagine if your city’s infrastructure, roads, traffic lights, buildings had a tiny sensor capturing data. Connecting and learning from that influx of new information would help governments more quickly respond to challenges such as reducing energy consumption or helping first responders safely assess dangerous situations.
At the country level, top-ranked governments optimize public services delivery to be more effective, accessible, and responsive to people’s needs. The UK, Australia, Korea, Singapore, and Finland — the top five in the UN E-Government Survey — have doubled-down on digital technology.[3]
The stakes are high: government digitization could generate over $1 trillion annually worldwide.[4]
The Angle:
The future is closer than we think: 20.4 billion IoT devices are forecast by 2020.[5]
Practical uses for IoT in government are everywhere:
Management of traffic and reduction of automobile pollution with data coming from sensors in roads, weather information, public transportation, and car GPS devices
Detection of flooding and earthquakes, giving citizens earlier warnings
Tracking first responders’ location, movement, respiration, and exposure to heat and hazardous materials
When combined with AI, can help monitor the movements of the elderly or transform accessibility for the 650 million people with disabilities worldwide [6]
Fully embracing this new technology may seem like a radical shift for some governments, but the rewards will far outweigh the headaches.
“My recommendation to cities is to take a risk and try to digitally transform citizen services using technology, so more people are inclined to learn about your city, move to your city, and contribute to your city,” Cabrera said.
Learn more about how IoT can help improve the lives of your citizens.
[1] http://www.govtech.com/dc/digital-cities/Digital-Cities-Survey-2017-Winners-Announced.html
[2] http://www.govtech.com/dc/digital-cities/Digital-Cities-Survey-2017-Winners-Announced.html
[3] https://publicadministration.un.org/egovkb/en-us/Reports/UN-E-Government-Survey-2016
[4] https://www.mckinsey.com/business-functions/digital-mckinsey/our-insights/public-sector-digitization-the-trillion-dollar-challenge
[5] http://www.zdnet.com/article/iot-devices-will-outnumber-the-worlds-population-this-year-for-the-first-time/
[6] https://www.ibm.com/watson/advantage-reports/ai-social-good-social-services.html
| The incredible ways governments use the Internet of Things | 29 | the-incredible-ways-governments-use-the-internet-of-things-1072d34b9821 | 2018-04-09 | 2018-04-09 13:47:43 | https://medium.com/s/story/the-incredible-ways-governments-use-the-internet-of-things-1072d34b9821 | false | 505 | Today’s industry news. Tomorrow’s reality. | null | IBMIndustries | null | IBMIndustrious | ibmindustrious | IBM,AI,IOT,DATA,TECHNOLOGY | IBMIndustries | United Nations | united-nations | United Nations | 4,219 | IBM Industries | We transform businesses. | 4a54e3dc7a5f | IBMindustries | 240 | 122 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-05-28 | 2018-05-28 16:30:56 | 2018-05-29 | 2018-05-29 12:16:24 | 1 | false | ru | 2018-06-08 | 2018-06-08 19:36:29 | 5 | 1073440e7609 | 1.562264 | 1 | 0 | 0 | «Если вы знаете, как справляться с этим моментом, вы знаете, как работать с вечностью», - сказал великий Садхгуру. Но в общем случае… | 4 | Жизнь в прошлом
«Вчерашние дни – они в нашей памяти, а завтра – это просто наше представление, наше воображение. Если вы знаете, как справляться с этим моментом, вы знаете, как работать с вечностью, вам не нужно справляться с завтрашним днем», – говорил Садхгуру, известный гуру йоги. В 2017 году он консультировал Сбербанк и Грефа, в 2018-м – выступил на ПМЭФ.
Мы часто слышим, что умение жить настоящим – ключ к счастью.
Жить настоящим невозможно
Оказалось, что великий гуру в общем случае неправ.
Например, камера нашего робота дает задержку в полсекунды! И вообще, у любого сенсора есть время запаздывания. То есть робот всегда живет в прошлом. Его, роботное, настоящее отстает от настоящего настоящего. Сильно отстает.
Полсекунды достаточно, чтобы робот впилился в стену и превратился в груду дымящегося пластика.
Классический reinforcement learning худо-бедно справляется с ситуацией, когда отложено вознаграждение (delayed reward). Но на базовом уровне подразумевается, что ответ среды, то есть ощущения агента, приходят в ответ на действия. Жизнь агента – это такой пинг-понг: [анализ +] действие → ответ среды + вознаграждение → [анализ +] действие → ответ среды + вознаграждение.
С вознаграждением по нашему концепту все просто: оно внутреннее и вычисляется как количество новых состояний, доступных из текущего состояния. Услугами среды для расчета вознаграждения мы не пользуемся.
А вот к задержке с восприятием среды мы были не готовы.
Робот всегда живет прошлым.
Настоящего настоящего не существует
Но если на минуту забыть, что есть настоящее настоящее, все очень упрощается.
Мы просто считаем, что реальность робота – это то, что он получает прямо сейчас от камер и датчиков. Да, действия, которые совершает агент в таких условиях, покажутся продиктованными нечеловеческой логикой. Но мы и не претендовали :).
Интуиция подсказывает, что есть предел, после которого отставание будет приводить к невозможности осмысленных действий. Особенно если среда динамично меняется не только под влиянием действий агента, но и сама по себе.
Производители беспилотных авто сокращают задержку восприятия с помощью более быстрых сенсоров. Вместо камер они ставят инфракрасные и ультразвуковые датчики, лазеры, радары.
Если вдруг окажется, что предел для нашего робота – те самые полсекунды, будем искать супербыстрые камеры.
Скоро узнаем.
В настоящее настоящее не попасть, можно лишь сократить задержку восприятия.
Подписывайтесь на обновления:
Facebook;
VK;
Telegram.
| Жизнь в прошлом | 50 | жизнь-в-прошлом-1073440e7609 | 2018-06-08 | 2018-06-08 19:36:30 | https://medium.com/s/story/жизнь-в-прошлом-1073440e7609 | false | 361 | null | null | null | null | null | null | null | null | null | Роботы | роботы | Роботы | 145 | It Just Lives | Создаем робота, который живет как хочет | 8f28972d7f19 | it.just.lives | 7 | 2 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | aabcc6ef0d98 | 2018-07-13 | 2018-07-13 10:19:26 | 2018-07-13 | 2018-07-13 10:20:42 | 2 | false | en | 2018-07-13 | 2018-07-13 10:34:16 | 12 | 10735e995fa | 4.130503 | 0 | 0 | 0 | null | 5 | One of the pressing questions that arise with artificial intelligence is how to account for the actions of machines that make decisions by themselves. Image credit — ITU Pictures, licensed under CC BY 2.0
Digital age ‘desperately’ needs ethical and legal guidelines
Technologies such as artificial intelligence and robotics raise new problems for society.
by Joanna Roberts
Digital technologies such as artificial intelligence and robotics, ‘desperately’ need an institutional framework and system of values to help regulate the industry, an ethics expert has told leading scientists and policymakers.
Jeroen van den Hoven, professor of ethics and technology at Delft University of Technology in the Netherlands, was speaking at a session on ethics in science and technology at the EuroScience Open Forum (ESOF) 2018, which is being held in Toulouse, France, from 9–14 July.
‘People are becoming aware that this digital age is not neutral…, it is presented to us mainly by big corporations who want to make some profit,’ he said.
He called for a Europe-wide network of institutions that can provide a set of values, based on the EU’s Charter of Fundamental Rights, which the technology industry could operate within.
‘We have to set up, as we’ve done for food, for aviation and for traffic, … an elaborate system of institutions that will look (at) this field of artificial intelligence.
‘We need to think about governance, inspection, monitoring, testing, certification, classification, standardisation, education, all of these things. They are not there. We need to desperately, and very quickly, help ourselves to it.’
Prof. van den Hoven is a member of the European Group on Ethics in Science and New Technologies (EGE), an independent advisory body for the European Commission, which organised the session he was speaking at.
In March, the EGE published a statement on artificial intelligence (AI), robotics and autonomous systems, which criticised the current ‘patchwork of disparate initiatives’ in Europe that try to tackle the social, legal and ethical questions that AI has generated. In the statement, the EGE called for the establishment of a structured framework.
The European Commission announced on 14 June that they have tasked a high-level group of 52 people from academia, society and industry with the job of developing guidelines on the EU’s AI-related policy, including ethical issues such as fairness, safety, transparency and the upholding of fundamental rights.
The expert group, which includes representatives from industry leaders in AI such as Google, BMW and Santander, are due to present their guidelines to the European Commission at the beginning of 2019.
‘People are becoming aware that this digital age is not neutral…, it is presented to us mainly by big corporations who want to make some profit.’
- Professor Jeroen van den Hoven, Delft University of Technology, Netherlands
Bias
Ethical issues surrounding AI – such as bias in machine learning algorithms and how to oversee the decision-making of autonomous machines — also attracted widespread discussion at the ESOF 2018 conference.
One major concern emerging with the fast-paced development of machine learning, is the question of how to account for the actions of a machine. This is a particular issue when using AI based on neural networks, a complex system set up to mimic the human brain that enables it to learn from large sets of data. This often results in algorithm becoming what is known as a ‘black box’, where it’s possible to see what goes in and what comes out, but not how the outcome was arrived at.
Maaike Harbers, a research professor at the Rotterdam University of Applied Sciences in the Netherlands, said that this was an important issue in the military, where weaponised drones are used to carry out actions.
‘In the military domain, a very important concept is meaningful human control,’ she said. ‘We can only control or direct autonomous machines if we understand what is going on.’
Prof. Harbers added that good design of the interface between humans and machines can help ensure humans exercise control at three important stages — data input, processing and reasoning, and the output or action.
Even in technologies that use AI for purposes that seem to be overwhelmingly positive, such as companion social robots for children, raise some tricky ethical issues. The conference audience heard that researchers working in this area are grappling with the effect these technologies can have on family relationships, for example, or whether they could create inequalities in society, or if they might create social isolation.
In the field of automated transport, researchers are also looking at the impact self-driving cars might have on wider issues such as justice and equality. They are investigating questions ranging from how to ensure equal access to new forms of transport to who should benefit from any cost-savings associated with automated transport.
However, the values we instil in AI may be a key factor in public acceptance of new technologies.
One of the most well-known moral dilemmas involving self-driving cars, for example, is the so-called trolley problem. This poses the question of whether an autonomous vehicle heading towards an accident involving a group people should avoid it by swerving onto a path that would hit just one person.
Dr Ebru Burcu Dogan from the Vedecom Institute in France, said research shows that while people were in favour of a utilitarian solution to the dilemma — for example, killing the driver rather than five pedestrians — they personally wouldn’t want to buy or ride in a vehicle that was programmed in such a way.
‘We all want to benefit from the implementation of a technology, but we don’t necessarily want to change our behaviour, or adopt a necessary behaviour to get there.’
If you liked this article, please share it.
See also
We want to end the de-industrialisation of Europe — Prof. Jürgen Rüttgers
‘Earworm melodies with strange aspects’ — what happens when AI makes music
Computers learning to read, watch and understand
Dreaming robots and creative computation — the future of AI takes shape
Creative computation and the What-If Machine
More info
European Group on Ethics in Science and New Technologies
ESOF
Originally published at horizon-magazine.eu.
| Digital age 'desperately' needs ethical and legal guidelines | 0 | digital-age-039-desperately-039-needs-ethical-and-legal-guidelines-10735e995fa | 2018-07-13 | 2018-07-13 10:34:16 | https://medium.com/s/story/digital-age-039-desperately-039-needs-ethical-and-legal-guidelines-10735e995fa | false | 993 | The EU research & innovation magazine | null | horizon.magazine.eu | null | Horizon | horizon-magazine | EUROPEAN UNION,SCIENCE,RESEARCH,SCIENCE POLICY,INNOVATION | HorizonMagEU | Robotics | robotics | Robotics | 9,103 | Horizon | The EU research & innovation magazine | 985aeff41355 | HorizonMagEU | 71 | 31 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-08-16 | 2018-08-16 17:02:16 | 2018-08-14 | 2018-08-14 00:00:00 | 2 | false | en | 2018-08-16 | 2018-08-16 17:04:39 | 4 | 107574480b7b | 2.364465 | 0 | 0 | 0 | BASINGSTOKE — JCC Bowers, the solutions provider of Connected, Intelligent, and Autonomous transportation technology, announced today that… | 5 |
JCC Develops Technology to Predict Accidents
BASINGSTOKE — JCC Bowers, the solutions provider of Connected, Intelligent, and Autonomous transportation technology, announced today that they have developed an algorithm to predict, and eventually prevent, traffic accidents.
Once deployed, the new AI-powered technology will make it easier for drivers, municipalities, and insurance companies to assess risk. Not only will it empower vehicle operators to avoid accident-prone hotspots, it will also give smart cities the up-to-the-minute information they need to efficiently make infrastructure improvements.
The new algorithm will create a series of heatmaps to help municipalities and drivers predict accident hotspots.
The model is, ahem, driven by artificial intelligence and is already off and running to integrate all sorts of additional datasets. These include a variety of inputs such as road type, speed limit, weather and ambient light conditions, whether urban or rural, etc. Additionally, the model will pull insight from live traffic feeds and other real-time sources, giving all transportation stakeholders an unprecedented level of insight into the roads we all drive.
Roughly 180,000 people are injured or killed on UK roads every year with about 6% of all traffic accidents caused by problems with the roadway itself.
“This is a game changer,” said JCC Bowers CEO John Bowers. “This is going to have a huge impact on the safety of our streets and the ways cash-strapped governments choose to invest in infrastructure improvements. We’re talking about easily finding and fixing the most dangerous sections of the road, knowing which highways to avoid when it snows, and ultimately, saving lives.”
Unlike other applications that alert users strictly to real-time traffic accidents, the JCC Bowers model takes into account historical crash data in conjunction with precise road conditions to predictively determine danger not just in one spot, but in similar locations throughout a neighborhood, city or even a country.
As a quite simple example, if the model determines that certain east-west aligned intersections are difficult to navigate at dusk when the sun strikes drivers’ eyes, our system will identify other east-west intersections that could be accident-prone in the future during that time period. This would include intersections in new neighborhoods that had yet to be driven on.
This algorithm is advantageous in any number of contexts, from helping municipalities plan driving safety campaigns to guiding housing developers’ choices for street planning to equipping insurance companies with customized options for various customers, such as shorter, higher risk routes appropriate for excellent drivers versus longer, lower risk routes for teenage drivers.
The AI team at JCC Bowers is working every day to improve transportation. Our experts are busy developing technologies that integrate image processing, natural language processing, predictive maintenance, and pattern recognition to bring safety, ease, and automation to the ways people move throughout the world.
About JCC Bowers
JCC Bowers is a UK-based technology solutions provider for Connected, Intelligent, and Autonomous systems for a variety for transportation industries, including automotive, aerospace, nautical, agriculture, and defense. The company’s suite of products enables major existing industry participants to bridge legacy technological shortfalls and address the coming digital convergence through previously unimaginable Connected, Intelligent and Autonomous capabilities. | https://jccbowers.com
Originally published at jccbowers.com on August 14, 2018.
| JCC Develops Technology to Predict Accidents | 0 | jcc-develops-technology-to-predict-accidents-107574480b7b | 2018-08-16 | 2018-08-16 17:04:40 | https://medium.com/s/story/jcc-develops-technology-to-predict-accidents-107574480b7b | false | 525 | null | null | null | null | null | null | null | null | null | Self Driving Cars | self-driving-cars | Self Driving Cars | 13,349 | JCC Bowers | Solutions providers of Connected, Intelligent, and Autonomous vehicle technology. | www.jccbowers.com | 4750d129f758 | JCCBowers | 1 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | a72e580f87ae | 2018-04-13 | 2018-04-13 22:54:10 | 2018-04-13 | 2018-04-13 23:08:11 | 5 | false | en | 2018-04-13 | 2018-04-13 23:08:11 | 4 | 1077873116d1 | 1.28239 | 0 | 0 | 0 | HyperQuant has established a partnership with Zeus Capital quantitative cryptocurrency hedge-fund. | 5 | The partnership of HyperQuant and Zeus Capital
HyperQuant has established a partnership with Zeus Capital quantitative cryptocurrency hedge-fund.
As you know, we are building the platform for top-notch quant traders providing an efficient framework and tools for crypto asset management.
We’ve thoroughly investigated the trading approach of Zeus Capital. They have a proven reputation, that we have confirmed using our developments in blockchain technology.
HyperQuant provides our new partners with risk management software, while Zeus Capital is responsible for back-testing and maintaining trading algorithms. The transactions are stored on blockchain for the transparency and the safety of investors.
In the future we’ll announce the partnerships with other crypto traders and blockchain industry leaders.
HyperQuant Social Media
| The partnership of HyperQuant and Zeus Capital | 0 | the-partnership-of-hyperquant-and-zeus-capital-1077873116d1 | 2018-04-13 | 2018-04-13 23:08:23 | https://medium.com/s/story/the-partnership-of-hyperquant-and-zeus-capital-1077873116d1 | false | 119 | Automatic Trading Revolution: https://hyperquant.net/ | null | hyperquant.net | null | hyperquant | hyperquant | null | HyperQuant_net | Blockchain | blockchain | Blockchain | 265,164 | HyperQuant | Automatic Trading Revolution https://hyperquant.net/ | da4c15da74be | hyperquant | 185 | 87 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-06-18 | 2018-06-18 19:41:08 | 2018-06-18 | 2018-06-18 19:42:34 | 0 | false | en | 2018-06-18 | 2018-06-18 19:42:34 | 0 | 10783741fe0b | 0.181132 | 0 | 0 | 0 | I am starting a new journey of bridging both my love of Python and Artificial Intelligence. This will be an ongoing record of my daily… | 2 | Pupil of Artificial Intelligence
I am starting a new journey of bridging both my love of Python and Artificial Intelligence. This will be an ongoing record of my daily trials and tribulations in what I face with my lessons and the products that I build up. Stay Tuned….
| Pupil of Artificial Intelligence | 0 | pupil-of-artificial-intelligence-10783741fe0b | 2018-06-18 | 2018-06-18 19:42:35 | https://medium.com/s/story/pupil-of-artificial-intelligence-10783741fe0b | false | 48 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Pognat | The beginning stages in the world of Artificial Intelligence | 20e96a21ce43 | tanujachenna | 29 | 30 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 1c1378e1d03a | 2018-04-23 | 2018-04-23 16:49:32 | 2018-04-23 | 2018-04-23 16:55:14 | 2 | false | en | 2018-04-23 | 2018-04-23 16:55:54 | 6 | 1079a8e86934 | 4.024843 | 0 | 0 | 0 | Integrated with Analysis Workspace, this function provides insight into data changes | 5 | Anomaly Detection: A feature that makes anomalous data visible in Adobe Analytics
Integrated with Analysis Workspace, this function provides insight into data changes
Optimized data visualization is an ever-present subject, not only in Digital Marketing, but also in any company that seeks to improve its performance. However, it is often necessary to go beyond the data presented and include trends and anomalies in the data range.
These patterns of alterations are often referred to as anomalies, exceptions or discordant observations, depending on the context. Generally speaking, detecting anomalies is a manual process, which uses statistical models and a coherent range of data, where any data outside the predefined interval is considered anomalous.
Concept and use
An anomaly is data that escapes a defined pattern. One of the statistical examples is the data that is outside a 95% confidence interval in a normal distribution. In addition, this value is considered anomalous when it reaches a predefined percentage above or below the mean value of the analyzed range.
According to Adobe itself, the detection of anomalies allows us to separate the “true signals” from the “noise” and to identify possible factors that contribute to the signals or anomalies. It can be defined as the problem of finding patterns in data that does not conform to expected behavior.
Among the examples of anomalies that can be investigated are drastic drops in the average order value, peaks in low-value orders, peaks or drops in valuation records and declines in home page views.
In Anomaly Detection: A Survey, Varun Chandola defines the detection of anomalies as a search to identify data characterized as anomalous, differentiating it from data that is said to be normal within a given context. According to Vic Barnett and Toby Lewis in the book Outliers in Statistical Data, an anomalous observation is one that seems to deviate sharply from the rest of the data sample in which it occurs.
In many cases, the concept of anomaly is a divergence from a pattern established in a range of data. For example, in an article published in the health sector, data with a frequency of less than 4% was considered anomalous in the analysis performed.
Inside Anomaly Detection
Among the tools available in Adobe Analytics, Analysis Workspace has an important feature: Anomaly Detection. Previously it was also used in Report & Analytics, but in the latest version of the platform, it can only be viewed internally in Workspace projects.
This feature provides a statistical method that determines how a metric has changed in relation to previous data. The detection of anomalies involves identifying data whose behavior is different to what was expected. Among the various statistical methodologies presented by Adobe, this function is used to present a larger confidence interval, closer to 100%.
Learn more about Anomaly Detection
In the case of Adobe, the algorithm also takes into account the seasonality of the data, such as for North American and world holidays. According to the company itself, the application of these holidays in the algorithm improves its performance.
Anomaly Detection offers a statistical method to determine an anomaly, considered the limiting margin, among the metrics visualized. This comparison is made in relation to a period before the visualization. For example, in a weekly granularity, it uses a retrospective of 15 periods, including the data range of the selected report (15 weeks) and a corresponding data range from the previous year, for training.
Learn more about Analytics Workspace
Visualization in Workspace
You can view the detected anomaly both in table and graph format (line graph only).
In tables, you see a warning, such as an alert signal on the line that matches the anomalous data. In the case of the line graph, you see the point of the graph that represents the anomaly, which also gives the percentage of the anomaly.
Example Scenario
An example of how Anomaly Detection can readily assist in validating and viewing changes on your site is at the time of a change in data collection. For example, when there is an update on site data collection and some pages are no longer being collected, there may be a decrease in the volume of page views. When you view this metric related to URLs, you can see which page scope has changed the most during this time.
In this scenario, you can see which pages are no longer being collected and seek a solution to improve collection. Updating the data collection and including these pages may readjust the volume of views.
Internal support
In addition to Anomaly Detection, another feature that can assist in this analysis is Contribution Analysis. This is a machine learning process that discovers the data that is directly contributing to the anomaly. The objective of this function is to assist in the resolution of the diagnosis already presented by Anomaly Detection.
Consequent actions
Anomaly detection appears as a warning regarding data being viewed in Analysis Workspace. This model is used to understand changes in the behavior of the metrics and subsequent analysis of the cause of this change.
For example, detecting an increase in visits to a page on a site and trying to discover the reason for most of the data volume may provide a suggestion for further analysis of the scenario already identified by Anomaly Detection.
After the diagnosis, a more detailed analysis can be done in relation to the data that is directly related to the change, whether it is in some specific scope variable or not. With this diagnosis, the reasons for the alteration can be made available and changes made to normalize the data if necessary.
Images:
https://marketing.adobe.com/resources/help/en_US/analytics/analysis-workspace/index.html?f=view-anomalies
Profile of the author: Rafael Rojas | Journalist with a dash of technological know-how, seeking to integrate more and more data and analysis.
| Anomaly Detection: A feature that makes anomalous data visible in Adobe Analytics | 0 | anomaly-detection-a-feature-that-makes-anomalous-data-visible-in-adobe-analytics-1079a8e86934 | 2018-04-23 | 2018-04-23 16:55:55 | https://medium.com/s/story/anomaly-detection-a-feature-that-makes-anomalous-data-visible-in-adobe-analytics-1079a8e86934 | false | 965 | Blog Posts made by the Digital Marketing Consultancy DP6 | null | null | null | DP6 US Blog | dp6-us-blog | ANALYTICS,DIGITAL MARKETING,MOBILE MARKETING | null | Data Science | data-science | Data Science | 33,617 | Blog DP6 | null | 3794c7b4a281 | dp6blog | 84 | 7 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-02-02 | 2018-02-02 21:45:32 | 2018-02-02 | 2018-02-02 21:45:50 | 0 | false | en | 2018-02-02 | 2018-02-02 21:45:50 | 0 | 107aa818cb4c | 2.260377 | 0 | 0 | 0 | Hello everyone and welcome to the first post! | 1 | Talking about AI
Hello everyone and welcome to the first post!
Today, I will be talking about artificial intelligence. AI has long been depicted as devil that dominate over humans. These years, Inside and outside of the technology industry, artificial intelligence was the most heated topic. Within the industry, my roommate, who had his internship at Intel working on wearable devices, got his whole line of business of wearable technology cancelled to make way for new AI departments in Intel. For the public, the computer once more conquered the game of Go, which was believed to be impossible for computer to master. Unlike Deep Blue that spared Kasparov 1 victory, AlphaGo dominated the game and leave Ke Jie zero chance of winning.
While Deep Blue and AlphaGo seem like similar machines that master human games, they are, as a matter of fact, vastly different. For chess games, it is possible for computer to search all the possibilities and then decide the best strategy. For the Go game, there are too many possibilities in the game even for the computer to calculate; therefore, AlphaGo used the value network to decide which dot is superior to other. In other words, the computer is trying to make choices that computer believe have the best chance of winning instead of a certain choice that the computer believe is absolutely the best. For humans, we make these choices instinctively that we believe can never be achieved by computers. The success of AlphaGo means that the collection of human intelligence over thousands of years on the game Go means nothing to the program that evolves only two years, and the choices that we believe are “instinctive” and “artistic” in the game are no more than the collection of data.
As the programs are going smarter, now do we need to worry about AI rebellion? The answer for now is No. Currently, no AI systems are even close to pass the Turing Test, a test trying to distinguish computer from human beings. In other words, everyone can easily tell whether it is a computer program or a person judging by their answers to certain questions. Or, we can arbitrarily say that AI systems are nowhere close to human intelligence now, except for some games, unless they discover some sort of intelligence on their own. However, as we can relax from AI ruling the world, there will be problems as the uprising of AI.
The first bad news would be for my roommate as his department got cancelled meaning no return offer for him at Intel. Similarly, many occupations may be overtaken by AI systems. Secretaries, drivers, accountants, translators, or even computer programmers might vanish in the few decades. These might create an unprecedented unemployment, and sadly, as a student of computer science, I am included in the tide. Also, autonomous weaponry may cause even greater problems. Imagine AI decides to bomb some country that happens to have nuclear weapon. Who knows what people would do to save their land. In the end, humans are unpredictable.
Currently, AlphaGo is trying to master Starcraft, the blizzard game that people believe is impossible for programs to beat human. Unlike chess and Go, games like Starcraft involves deception and conspiracy, and the program will not be able to see the whole map. I believe we can still overpower AI systems since humans have so many ways for warfare and we would willingly fool our enemy to gain victory. However, when the day comes that AI are better at everything, I do not know if it is the heaven or doomsday.
| Talking about AI | 0 | talking-about-ai-107aa818cb4c | 2018-02-02 | 2018-02-02 21:45:50 | https://medium.com/s/story/talking-about-ai-107aa818cb4c | false | 599 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Zhehao Cai | null | ce3c1391c052 | zhehaoca | 0 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-05-31 | 2018-05-31 05:31:20 | 2018-05-31 | 2018-05-31 07:30:22 | 2 | false | th | 2018-05-31 | 2018-05-31 07:36:09 | 3 | 107b2ea9b3f6 | 1.070126 | 7 | 0 | 0 | ถ้าพูดถึง AI — Artificial Intelligence หรือ ปัญญาประดิษฐ์ ที่กำลังได้รับความนิยมมากในขณะนี้ 90% มักจะพูดถึง Machine Learning… | 4 | AI ที่ไม่ใช่แค่ Machine Learning — บทนำ
ถ้าพูดถึง AI — Artificial Intelligence หรือ ปัญญาประดิษฐ์ ที่กำลังได้รับความนิยมมากในขณะนี้ 90% มักจะพูดถึง Machine Learning ที่นำข้อมูลจำนวนมากมาจำแนกโดยโมเดลทางคณิตศาสตร์ เพื่อใช้ทำการทำนายหรือคาดการณ์ จึงมีบริษัทและStartup หลายแห่งคิดอยากนำมาใช้ แต่หลายครั้งต้องตันเพราะไม่ได้มีข้อมูลมากพอเหมือนบริษัทใหญ่ๆอย่าง Google และ Facebook เพราะความแม่นยำของ machine learning ขึ้นกับปริมาณข้อมูลที่มากพอ
แต่จริงๆแล้ว AI ยังมีวิธีการอื่นๆอีกมากมาย ในบทความนี้จึงอยากพูดถึง AI ที่ไม่ใช่แค่ machine learning บ้าง เพราะมีหลายกรณีที่การแก้ปัญหาบางอย่างเราไม่ได้มีข้อมูลจำนวนมากพอที่จะใช้ machine learning และ machine learningเองก็ไม่ได้เหมาะกับปัญหาในทุกแบบ
Semantic Reasoning คำว่า Reasoning คือการทำให้คอมพิวเตอร์ใช้เหตุผลเพื่อสรุปข้อมูลบางอย่างออกมาให้ ส่วนใหญ่มักจะใช้ในการจำแนกข้อมูล (Classification) โดยการกำหนดกฎเกณฑ์ทางตรรกศาสตร์ (Logical axiom) ขึ้นมาเพื่อใช้จำแนกข้อมูล ยกตัวอย่างเช่น
ถ้าพ่อของพ่อ คือ ปู่
A เป็นพ่อของ B และ B เป็นพ่อของ C
เพราะฉะนั้น A เป็นปู่ของ C
วิธีการแบบนี้เป็นวิธีที่ Social Network อย่าง Facebook ใช้เพื่อแนะนำเพื่อน โดยหาจากเพื่อนของเพื่อนของผู้ใช้ หรือ เพื่อนในกลุ่มที่ผู้ใช้เป็นสมาชิกอยู่ หรือ เพื่อนของเพื่อนในกลุ่มที่เป็นสมาชิกอยู่ อีกตัวอย่างคือADA (https://ada.com) ที่ทำแอพพลิเคชั่นสำหรับการวินิจฉัยโรคเบื้องต้นจากแบบสอบถามอาการ
ข้อดีของวิธีนี้คือไม่จำเป็นต้องมีข้อมูลมากเหมือนการทำ machine learning เพราะเป็นการกำหนดกฎทางตรรกศาสตร์ขึ้นมา และนั้นก็อาจจะเป็นข้อเสียด้วยเพราะต้องกำหนดกฎเหล่านั้นให้ครอบคลุม
Semantic Reasoning เป็นที่มาของเทคโนโลยีอื่นๆเช่น GraphDB, Ontology Web Language (OWL) หรือ Semantic Web
Automated Planning หรือบางทีเรียกว่า AI Planning คือการทำให้คอมพิวเตอร์รู้จักการวางแผนด้วยตัวเองเพื่อให้บรรลุเป้าหมายบางอย่าง
ตัวอย่างปัญหาที่คลาสสิคและเหมาะกับวิธีนี้คือ Cannibal-Missionary Problem
ปัญหาคือมี มิชชันนารี 3 คน คนป่า 3 คน เรือ 1 ลำ บรรทุกได้ 2 คน โดยให้ เป้าหมายคือให้มิชชันนารีและคนป่าข้ามไปยังอีกฝั่งได้ โดยห้ามคนป่ามากกว่ามิชชันนารีเพราะมิชชันนารีจะโดนคนป่ากิน
Cannibal-Missionary Problem
ใครสนใจลองไปเล่นดูได้ที่นี้ครับ http://www.novelgames.com/en/missionaries/
ปัญหาแบบนี้ถ้าใช้คนลองเล่นไม่น่าจะต่ำกว่า 1 นาทีเพื่อแก้ปัญหา แต่สำหรับคอมพิวเตอร์ด้วย Automated Planning แก้ได้ภายใน 1 วินาที โดยการกำหนดเป้าหมายและกฎในการเล่นลงไป คอมพิวเตอร์จะใช้วิธีการประเมินทุกทางที่เป็นไปได้( trial-error) และนำคำตอบแรกที่ถูกต้องตามเป้าหมายและกฎออกมาให้
Automated planning ใช้กับการแก้ปัญหาที่มีเป้าหมายและกฏชัดเจน และสามารถแก้ปัญหาได้ด้วยการประเมินหลายๆทางเลือก เช่น การจัดตารางสอนให้ลงตัว การวางแผนเดินทางตามเป้าหมายและงบประมาณที่จำกัด
Evolutionary computing ยังเป็นอีกหนึ่งสาขาของ AI ที่ได้รับความนิยมในวงการวิจัย computer science แต่ยังใหม่สำหรับการนำไปใช้จริง เน้นแก้ปัญหาแบบเลือกตัวเลือกที่ดีที่สุด จากตัวเลือกทั้งหมด (optimization) โดยการเลียนแบบวิธีการในธรรมชาติ เช่น Genetic Algorithm ประยุกต์แนวคิดของ DNA เข้ากับการแก้ปัญหา โดยตัดตัวเลือกออกไปเรื่อยๆจนได้ตัวเลือกที่ดีที่สุด, Ant Colony Optimization (ACO) จำลองการทำงานของมด ที่ช่วยกันหาทางที่ดีที่สุดเพื่อไปหาอาหาร แม้ว่าจะมีสิ่งกีดขวางมากมาย
AI ยังมีอีกหลายวิธีมาก นี้แค่ยกตัวอย่างมาหลักๆ ที่สำคัญคือการเลือกให้เหมาะกับปัญหาที่เราพยายามจะแก้และทรัพยากรที่เรามี
ในบทความหน้าจะลงรายละเอียดของแต่ละวิธีที่ว่ามา รวมไปถึงตัวอย่างปัญหาและการเขียนโค้ดเพื่อแก้ปัญหา
| AI ที่ไม่ใช่แค่ Machine Learning — บทนำ | 11 | ai-ที่ไม่ใช่แค่-machine-learning-บทนำ-107b2ea9b3f6 | 2018-06-04 | 2018-06-04 09:54:14 | https://medium.com/s/story/ai-ที่ไม่ใช่แค่-machine-learning-บทนำ-107b2ea9b3f6 | false | 182 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Nacha Chondamrongkul | I am assistant professor in software engineering at Mae Fah Luang University. Currently, I am studying PhD at the University of Auckland, New Zealand. | dfac31cfa2a0 | cnacha | 23 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | a15ec4bc4aa8 | 2018-02-21 | 2018-02-21 15:08:28 | 2018-02-27 | 2018-02-27 19:54:42 | 1 | false | en | 2018-03-06 | 2018-03-06 10:10:09 | 7 | 107b46bc9a6 | 5.728302 | 4 | 0 | 0 | Highlighting a woman in the machine learning field | 5 | Speaker Spotlight: Blessing Okeke
Highlighting a woman in the machine learning field
The field of mathematics can be applied everywhere from healthcare and engineering to social sciences and finances. Combining the skills and tools in the mathematical and data analytical fields can prepare you to delve into machine learning, at least it did for Blessing Okeke, a Senior Customer Analyst at ATB Financial. Learn how one of our MeetNTech speakers empowers businesses and helps thousands of Albertans with their banking experience.
Chic Geek: How has your background in Applied Mathematics and Mathematical Modeling helped you succeed in the machine learning field?
Blessing: The big things I learned were problem solving skills, critical thinking skills, analytical thinking, and quantitative thinking. Also being able to construct logical arguments and ideas, as well as manage my time — that’s how mathematics helped me in that space. Mathematics gives you foundational tools that you need to apply in all different spaces. You need to learn the language of the fields, but your foundation is already solid.
It was easier to understand the field of machine learning, since I was translating mathematical instructions in a way for a computer to understand, similar to mathematics. I’m not a banker, but in a bank, you can diversify.
CG: What interested you in this field?
Blessing: The critical thinking piece of it, so you could actually think outside the box. You’re not restricted to a particular thing, so you can expand your ideas. I am interested in the ability to empower businesses to make insightful and meaningful decisions, focus on the right thing, and save them a lot of money and resources as a result.
Organizations don’t focus on an aspect without having something to prove; if they’re investing in something, they want to improve themselves. If I want to grow the organization, I need to focus on this piece and enable them to make proactive decisions and go one step ahead. Data tells a lot of stories. It can give you the whole story about a customer’s history so you can gain insight towards making decisions that help customers and grow the business.
CG: How did you know you were going to end up in the field of machine learning?
Blessing: I didn’t really know much about machine learning. I was more focused on the analytics field, but they’re intertwined. You can’t explore machine learning without using the field of data mining. I already knew the analytical space and modelling aspects, but I questioned if a machine could learn by itself — then I took a course that opened my eyes. The course was through MIT called The Analytics Edge, where I also participated in a Kaggle competition.
CG: How has your experience been at ATB Financial, and what does your day-to-day look like?
Blessing: It’s nice — both ATB and the work. I like that I can add value based on my background and skillset. It’s an opportunity to think outside the box and be creative. I appreciate the culture and the flexibility to get the job done. ATB’s workplace 2.0 gives a work-life balance, so when you’re at home, you’re able to avoid distractions and concentrate.
In my day-to-day, I do a lot of data mining to understand the data before obtaining insights. If you don’t understand the data, you’re not going anywhere. Then you have to do the coding and use algorithms to figure out what you need. You’re also delivering data in a format that’s clear and understandable to the client. Using precise terms and great visuals, you create dashboards, reports, and summaries to support team members so they can give the clients a great customer experience. My day is packed, but I learn, as I don’t want to be stale. During any spare time I have, I take a course, or learn new tools and software.
CG: What advice would you give to people who want to start their career in data or machine learning?
Blessing: If you’re having doubts about the field, clarify to make sure it’s what you want to do by taking a course. I’ve been doing data analysis for years, but to determine my career path, I took courses online. In addition to The Analytics Edge course, the Statistical Thinking for Data Science and Analytics course also played a significant role in my career path. I was also part of the ATB team who went to Google Advanced solutions Lab for four weeks and we had a deep dive into machine learning, Google Cloud Platform and TensorFlow.
It really blew my mind and I knew I wanted to do that. Taking a course gives you a feel for the subject and can be a great start towards making sure that’s what you really want to pursue. It’s usually free and you can also get a certificate for a low sum.
Choose a language to learn like R or Python which are open source. Pick one and learn it. There is a lot of help available from online communities, so if you’re figuring out something, you can ask someone else for information. We’re all sharing, collaborating, and supporting each other. Have a support system if you’re having trouble — there’s lots of articles and free resources.
Technology is fast-paced, so you have to keep up. Try to read articles, at least one a day to know what’s going on. Just keep improving yourself, have an open mind, and don’t be discouraged at first. You’re learning, so be persistent and continue improving yourself.
CG: What makes ATB Financial different from other organizations in how they apply machine learning?
Blessing: ATB Financial has taken a stand in being a disruptor, customer obsessed, and giving great experiences. This has driven and pushed them to be the #1 best place to work. They’re committed to changing and improving people’s lives so they’re using all they’ve got — tools and technologies — to accelerate that journey. That’s the way things are. If you can’t beat the competition, you have to be different to become #1 in the marketplace. That’s been ATB’s drive.
CG: What are your philosophies for diversity, and how does being at ATB Financial fit with those values?
Blessing: I believe we are all unique, just like Albert Einstein said: “Everyone is a genius. But if you judge a fish by its ability to climb a tree, it will live its whole life believing that it is stupid.” Everyone is unique with different skills; we all have value and we all need to be respected. ATB respects everyone and supports you no matter who you are — that’s one of the things I love about ATB. Being an analyst at ATB Financial was my first meaningful career coming to Canada, and ATB gave me the chance and opportunity.
CG: How did you get involved with Chic Geek, and how can it add value to your life?
Blessing: I heard about Chic Geek through a colleague, but I didn’t really know much. When Sonu Jaswal, Director of Reputation & Brand in Transformation, asked me about speaking at MeetNTech, I accepted. I wanted to really know what Chic Geek is all about, and it’s amazing — supporting women and encouraging them. Because we’ve been shoved to the sidelines, having that empowerment adds value to society. Chic Geek is a great opportunity and I see a great potential for a great partnership.
CG: And lastly, what’s your advice for success?
Blessing: Determination is a big one. Of course, life comes with challenges and they will hit you, so how do you respond to those challenges: you breathe in, say “I can do it”, encourage yourself, and push harder. If you need to spend late nights learning something, go ahead and do it. If you need to go out of your comfort zone, make it your goal. Push yourself and move forward to accomplish that goal — that’s resilience.
Don’t be afraid to fail, in fact it is better to fail fast. When you fail, you learn from that and you get stronger. It’s better to fail fast and succeed at the end, rather than almost succeeding before you fail, as all your efforts are wasted. Fail fast, so you can aim for long lasting success.
Did you enjoy learning about ATB Financial and Blessing Okeke? Learn more about our partners and speakers at our monthly MeetNTech events, where we feature a topic in technology.
You might also enjoy reading:
Artificial Intelligence — An Introduction
Artificial Intelligence is a concept simultaneous fantasized by storytellers, debated by philosophers and investigated…medium.com
Homegrown AI
Imagine this — it’s a Friday night, you’re tired after a long work week, and all you want to do is hibernate and binge…medium.com
| Speaker Spotlight: Blessing Okeke | 8 | speaker-spotlight-blessing-okeke-107b46bc9a6 | 2018-04-10 | 2018-04-10 04:24:50 | https://medium.com/s/story/speaker-spotlight-blessing-okeke-107b46bc9a6 | false | 1,465 | Encouraging women to be builders and creators leveraging technology to shape the world we live in. | null | chicgeekyyc | null | The Chic Geek | the-chic-geek | WOMEN IN BUSINESS,WOMEN IN TECH,TECHNOLOGY,ENTREPRENEURSHIP | ChicGeekYYC | Data Science | data-science | Data Science | 33,617 | Carrie Mah | UX Designer & Developer dedicated to improving lives ღ Blogging on #ux #design #web #digitalliteracy #diversity #womenintech 💻 www.misscarriemah.ca | d23e7f552f1d | missCarrieMah | 264 | 360 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | 721b17443fd5 | 2018-09-21 | 2018-09-21 07:02:07 | 2018-09-17 | 2018-09-17 16:09:27 | 4 | true | en | 2018-10-29 | 2018-10-29 12:35:12 | 4 | 107b5b9364ab | 5.681132 | 2 | 0 | 0 | Limitation of a Regular Neural Network | 5 | Convolution Neural Network - In a Nutshell
Limitation of a Regular Neural Network
In a regular neural network, the input is transformed through a series of hidden layers having multiple neurons. Each neuron is connected to all the neurons in the previous and the following layers. This arrangement is called a fully connected layer and the last layer is the output layer. In Computer Vision applications where the input is an image, we use convolutional neural network because the regular fully connected neural networks don’t work well. This is because if each pixel of the image is an input then as we add more layers the amount of parameters increases exponentially.
Consider an example where we are using a three color channel image with size 1 megapixel (1000 height X 1000 width) then our input will have 1000 X 1000 X 3 (3 Million) features. If we use a fully connected hidden layer with 1000 hidden units then the weight matrix will have 3 Billion (3 Million X 1000) parameters. So, the regular neural network is not scalable for image classification as processing such a large input is computationally very expensive and not feasible. The other challenge is that a large number of parameters can lead to over-fitting. However, when it comes to images, there seems to be little correlation between two closely situated individual pixels. This leads to the idea of convolution.
What is Convolution?
Convolution is a mathematical operation on two functions to produce a third function that expresses how the shape of one is modified by the other. The term convolution refers to both the result function and to the process of computing it [1]. In a neural network, we will perform the convolution operation on the input image matrix to reduce its shape. In below example, we are convolving a 6 x 6 grayscale image with a 3 x 3 matrix called filter or kernel to produce a 4 x 4 matrix. First, we will take the dot product between the filter and the first 9 elements of the image matrix and fill the output matrix. Then we will slide the filter by one square over the image from left to right, from top to bottom and perform the same calculation. Finally, we will produce a two-dimensional activation map that gives the responses of that filter at every spatial position of input image matrix.
Challenges with Convolution
1- Shrinking output
One of the big challenges with convolving is that our image will continuously shrink if we perform convolutional operations in multiple layers. Let’s say if we have 100 hidden layers in our deep neural network and we perform convolution operation in every layer than our image size will shrink a little bit after each convolutional layer.
2- Data lost from the image corners
The second downside is that the pixels from the corner of the image will be used in few outputs only whereas the middle region pixels contribute more so we lose data from the corners of our original image. For example, the upper left corner pixel is involved in only one of the output but middle pixel contributed in at least 9 outputs.
Padding
In order to solve the problems of shrinking output and data lost from the image corners, we pad the image with additional borders of zeros called zero padding. The size of the zero padding is a hyperparameter. This allows us to control the spacial size of the output image. So if we define F as the size of our filter, S as the stride, N as the size of the image, and P as the amount of padding that we require, then the image output size is given by the following.
Convolution on 6 x 6 image with zero padding = 1
We can see that by using zero padding as 1, we have preserved the size of the original image. There are two common choices of padding. ‘Valid’ where we use P = 0 means no padding at all and ‘Same’ where the value of P is selected such that the size of the output image is equal to input image size. As far as filter size ‘F’ is concerned it is a recommended practice to select the odd number. Common choices are 1, 3, 5, 7…etc.
Convolution Over RGB Images
Earlier we saw the convolution operation on grey scale image (6 X 6). If our image is RGB then the dimensions will be 6 X 6 X 3 where 3 denotes the number of color channels. To detect the features in RGB images we use filters with 3 dimensions where the 3rd dimension will always be equal to the number of channels.
Single Layer of Convolutional Network
In one single layer of a convolutional network, we detect multiple features by convolving our image with different filters. Each convolution operation generates a different 2 dimensions matrix. We add bias to each of these matrices and then apply non-linearity. Then all of them were stacked together to form a 3-dimensional output. The third dimension of the final output will be equal to the number of filters used in convolution operation.
Dimensions of Convolutional Network
We will compare the convolution layer with regular neural network layer to calculate the number of parameters and dimensions.
Dimension of Convolutional Layer Parameters
Pooling Layer
Though the total parameters of our network decrease after convolution still we need to further condense the spatial size of the representation to reduce the number of parameters and computation in the network. Pooling layer does this job for us and speeds up the computation as well as make some of the features more prominent. In pooling layer, we have two hyperparameters filter size and stride which are fixed only once. Following are two common types of pooling layers.
Max Pooling:
Let’s consider a 4 x 4 image matrix which we want to reduce to 2 x 2. We will use a 2 x 2 block with the stride size of 2. We will take the maximum value from each block and capture it in our new matrix.
Max Pooling
Average Pooling:
In average pooling, we take the average of each of the blocks instead of the maximum value for each of the four squares.
Average Pooling
The Architecture of Convolutional Neural Network
A neural network that has one or multiple convolutional layers is called Convolutional Neural Network (CNN). Let’s consider an example of a deep convolutional neural network for image classification where the input image size is 28 x 28 x 1 (grayscale). In the first layer, we apply the convolution operation with 32 filters of 5 x 5 so our output will become 24 x 24 x 32. Then we will apply pooling with 2 x 2 filter to reduce the size to 12 x 12 x 32. In the second layer, we will apply the convolution operation with 64 filters of size 5 x 5. The output dimensions will become 8 x 8 x 64 on which we will apply pooling layer with 2 x 2 filter and the size will reduce to 4 x 4 x 64. Finally, we will pass it through two fully connected layers to convert our image matrix into a classification matrix.
The architecture of Convolutional Neural Network
Final words
We learned about the convolutional layer, pooling, and fully connected layer. Now the question is how to make a combination of these layers to solve computer vision problems. It is actually an art and may vary depends on the problem. A lot of research has been done and many CNN architectures have been presented such as LeNet-5, AlexNet, VGG, ResNet…etc. It is a good practice to always apply those architectures on your problem first and then make necessary changes based on intuition and results. In the next article, we will practically implement a CNN using Keras.
References
https://en.wikipedia.org/wiki/Convolution
Convolutional Neural Networks by Andrew Ng. (coursera.org)
Neural Networks and Convolutional Neural Networks Essential Training by Jonathan Fernandes (LinkedIn.com/learning)
This article was originally published at engmrk.com
| Convolution Neural Network - In a Nutshell | 8 | convolutional-neural-network-in-a-nut-shell-107b5b9364ab | 2018-10-29 | 2018-10-29 12:35:13 | https://medium.com/s/story/convolutional-neural-network-in-a-nut-shell-107b5b9364ab | false | 1,320 | Coinmonks is a technology focused publication embracing all technologies which have powers to shape our future. Education is our core value. Learn, Build and thrive. | null | coinmonks | null | Coinmonks | coinmonks | BITCOIN,TECHNOLOGY,CRYPTOCURRENCY,BLOCKCHAIN,PROGRAMMING | coinmonks | Machine Learning | machine-learning | Machine Learning | 51,320 | Muhammad Rizwan Khan | https://engmrk.com | 5fc7bc24e618 | rizwankhn2003 | 14 | 1 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-08-07 | 2018-08-07 17:01:04 | 2018-08-07 | 2018-08-07 17:07:46 | 1 | false | en | 2018-08-08 | 2018-08-08 09:55:24 | 0 | 107b771a30b | 0.618868 | 0 | 0 | 0 | The SFF project received support from the European Union in co-funding the R&D of our technologies | 5 | EU thumbs up!
During the second quarter of 2018, Spektrolabas Ltd received research support from the European Union for the SFF project. The contract was signed to co-fund the R&D and the creation of innovative methods for detecting counterfeiting food, based on Raman spectrometry and neural networks.
European Union approval is the recognition of the merit of our project and the relevance of our goals, and for that all the SFF team is very proud.
This is a great start, but we are well aware that it is only the very beginning. We have a long path and a lot of work ahead of us, and we are extremely motivated!
| EU thumbs up! | 0 | eu-thumbs-up-107b771a30b | 2018-08-08 | 2018-08-08 09:55:24 | https://medium.com/s/story/eu-thumbs-up-107b771a30b | false | 111 | null | null | null | null | null | null | null | null | null | European Union | european-union | European Union | 9,065 | Stop Fake Food | Enjoying food, not lies | 5adbe9c0206 | stopfakefood | 12 | 0 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | f5af2b715248 | 2018-02-22 | 2018-02-22 09:17:02 | 2018-02-22 | 2018-02-22 09:24:00 | 10 | false | en | 2018-02-22 | 2018-02-22 09:56:11 | 23 | 107cf48e3f6a | 10.095283 | 15 | 1 | 0 | The corporate travel industry is slow to change. Why experiment and risk losses when everything works just fine? While leisure travel is… | 5 | Corporate Travel Management: Driving Technological Transformation in the World of Business Travel
The corporate travel industry is slow to change. Why experiment and risk losses when everything works just fine? While leisure travel is rapidly moving forward by adopting consumer booking tools and mobile payments, as it drifts away from traditional travel agencies, business travel management is still hesitant. But not travelers themselves.
By 2020, millennials will make up 50 percent of the American workforce. These people, used to planning their personal trips on smartphones, managing expenses digitally, and trusting AI with important analytical decisions, quickly learn and embrace technology. And they demand the same from their employers. Half of the corporate travelers surveyed by Sabre say their organizations don’t suggest using any travel apps, and about a third of them end up using their own preferred solutions. Although the majority of employees use mobile apps to contact the office during business trips and for emergencies, their companies don’t have the technology to track them throughout the journey and provide assistance in case of need.
There are existing solutions to address the travelers’ demand and adjust to their new behavior. You can divide these solutions into two categories:
Independent software vendors — companies providing self-service solutions for travel expense management, booking, invoicing, support dashboards, and automating your organization’s travel management process overall
Travel management organizations — companies, offering all the above, plus services that include access to travel agents, consultancy, custom travel program building, and more
See the image below for the examples of both.
To finally keep up and optimize travel management in your corporation, start with these important digital trends in business travel.
Self-service from consumer applications to business
The rise of mobile technology and fast mobile Internet allowed for enhanced connectivity. We use apps to order our coffee ahead of the line or check-in for the flight from home. Travelers too are confident with making their purchases online, changing an itinerary, and booking with the help of an app or even a chatbot. Instead of calling a person in the office, they enjoy autonomy. It doesn’t mean that business travelers want to evade a corporate policy and stay independent during their trips. Rather, they’d like to control some of their travel themselves.
Corporate travel app Lola does just that. It provides smooth, personalized trip planning for people who travel a lot and uses chat as the main medium to connect its concierge and clients. Choosing and booking in this case are up to a traveler who reviews different options independent of an agent. Paul English, a Lola founder, shares, “I thought I was building a consumer company like Kayak and discovered it was business travelers that wanted chat. The second thing is that I thought we were building for chat, email, and phone. We found that people almost never want to talk on the phone.”
Personalized flow of trip planning via Lola
No, the role of travel managers and travel management companies doesn’t become obsolete. With more time on their hands, travel agents can research and implement new technologies, update travel policies, use analytics to explore travelers’ behavior and apply personalization in the future. Also, an employee can’t solve all travel problems using just her smartphone. A travel manager will always be there to help with disruptions and to notify about sudden changes.
Duty of care and travel risk management through technology
Duty of care is a concept meaning that a company has a moral and legal obligation to protect its employees and provide safety while they’re working in remote areas. While business travelers prefer to stay fairly independent on their trips, they need to know that someone has their back. To execute their duty of care, companies are implementing travel risk management solutions. Today, however, these measures are only concerned with preparing a traveler for the trip via education, not providing security and monitoring while they’re away.
Traveling in the past few years has been strongly associated with risk: regulation changes, strikes, terrorist attacks, and natural disasters are some of the biggest disruptors that may not only prevent a business traveler from reaching their destination on time, but also can present health or life dangers. It shouldn’t come as a surprise that 65 percent of travelers have more fear regarding traveling for business than they did 12 months ago. Half of them worry that their reluctance to go on business trips may harm their careers. These alarming numbers are further confirmation that having an effective and transparent risk policy is a must.
According to the 2016 survey by GBTA, 72 percent of companies buying travel have a risk management strategy in place. Still, 28 percent of interviewed companies either don’t have an established risk management plan or their employees don’t know whether they have one. Let’s uncover how organizations can use business travel technology to streamline risk management processes.
Travel agents should be promptly notified about any emerging risks and quickly react according to established procedures. Travel management software such as 4Site allows agents to see constantly updated information about each traveler on a dashboard and communicate with travelers in a chat to inform about changes.
Travel agents can see traveler details on a timeline and remotely handle disruptions
Recorded past disruptions and risks will help organizations evaluate dangers in the future. Then, you can filter and distribute data to different departments where people can make use of it. Using information about the weather or airport service, the system can alert to possible disruptions and act accordingly. Learn more about this and other use cases of data science in travel in our article.
Protect third-party travelers on your behalf
Sometimes organizations need to look after contractors or partners when they’re traveling on a company’s behalf. By establishing a monitoring and communication tool, you can easily create a profile for any such person and track their status just as if they were your employees.
Even if you have an effective plan in place, you can’t leave your employees clueless about the company’s procedures in case of a disruption. Be it a simple flight delay or a serious obstruction, a traveler must know that someone is handling the issue and they won’t be abandoned. Establish a communication platform that would be convenient for both travel managers and traveling employees and confirm that they are trained to use it.
Embracing secure and convenient virtual payments
When services such as Uber started allowing clients to pay directly via an app, people quickly got used to paying with a tap of a finger. Corporate travel, however, still lags behind. Why? The survey by AirPlus names the main reason — the lack of knowledge about the differences between credit cards and virtual paying options. Let’s give a quick explanation.
Mobile wallets such as Apple Pay or Samsung Pay use near field communication (NFC) technology to make contactless payments. You just hold your smartphone or wearable device near a terminal or another compatible device which makes the process pretty much the same as using plastic cards. Mobile wallets can store all your virtual money — corporate and personal cards, limited use cards provided for each specific vendor with a specific limit, loyalty and discount points.
There are advantages to consider for all types of payments, but let’s see how mobile payments can transform corporate travel and expense management.
Risk and fraud prevention
People don’t have to worry about a corporate credit card being lost or stolen in travel and using their personal cards for business. From an organization’s perspective, virtual money permits the creation of virtual card numbers (VCNs) that expire after specific events or amounts spent to ensure that corporate policies are followed. This prevents theft, fraud, or simply misuse by a traveler.
Improved expense management and reporting
Using virtual payments also allows agents to keep all travel data in one place without having to deal with multiple bills and sources of data. All expenses appear in a reporting tool in real time, can be linked to GPS, and create much more sophisticated reports. This streamlines expense management while saving both travelers’ and agents’ time.
Paying for third-party travelers
Not all people traveling on a company’s behalf will be its employees or can be issued a corporate credit card. In this case, a manager can more easily and with no risk create a virtual card number for all contractors or employees who rarely travel.
Virtual payment technology benefits both travelers and travel managers. Employees can make quick and simple payments without having to carry cash or pay with their own money. Managers receive more accurate data and simplified expense management.
Shared economy is joining the game
Shared economy businesses are transforming the way we as a society receive and provide services. Many US adults (66.3 percent) are expected to use a sharing economy service this year, be it for requesting a ride, asking someone to do grocery shopping for them, or taking care of a pet when on vacation.
In the past few years, hotel room prices have gradually gotten higher and are expected to increase in 2018 as well. The surcharges and fees for services that used to be free are also hitting record highs with more and more hotels charging for Wi-Fi or an in-room safe. To reduce travel expenses, companies are searching for alternative accommodation options. Of course, we’re talking about Airbnb.
Over 250,000 companies have been using its platform for business travel since the launch of the program in 2015. A highly-functional dashboard for managing trips and traveling employees along with a growing list of business-ready accommodations helped the service skyrocket its corporate travel initiative. Handpicked homes with 24-hour check-in, wireless Internet, and a comfortable workplace are prioritized when considering a home “business-ready.”
AirBnb Business reporting dashboard
Such options go in line with the recent trend towards combining business and leisure travel — bleisure — and we have a whole article dedicated to targeting bleisure travelers for you to check out. Along with friendly hosts and a homey feeling during a long trip, employees can enjoy a much wider set of amenities from a Netflix account to a fully-functional kitchen area. Corporate travelers can truly experience the local feel and have a proper rest after a long day in a conference room.
Uber is also becoming a preferred alternative to expensive taxis and rental cars. Just like Airbnb, the transportation giant provides an activity dashboard to help manage all corporate transportation in one place, request a ride on behalf of your employee or customer for it to be ready at the hotel or an airport, and even customize your settings according to each employee’s preferences.
Managing business transportation on Uber
As for the flights, there are even options to skip commercial airlines and use private jets for a fraction of the price. Subscription-based services like SurfAir allow for flexible and convenient air transportation when you travel as a group or are meeting a valuable client.
Leveraging machine learning to increase value and lower costs
There’s a lot of predictability when it comes to business travel. Trips are fixed to dates of conferences, the length of stay is usually short, corporate travelers have similar demands and needs when it comes to comfortable plane seats and hotel rooms. You can learn a lot about people from the same category if you collect and analyze all data you receive from them. And the better you do it, the more satisfied the future travelers will be. There are numerous applications of data science in travel, and when it comes to business travel, there’s no exception.
Recently, Google Flights started predicting delays using a machine learning algorithm. While this tool will be useful for all types of travelers, in the corporate world, such disruptions can cause significant financial losses and should be dealt with promptly and effectively. By being alerted about possible turmoil, travel agents can adjust the schedules and trip details, for instance, extend a hotel room booking without making an important client wait in an airport.
Google Flights delay prediction
The boom of chatbots has been especially prominent in the travel industry, allowing people to deal with stressful trip planning using AI-enabled smart assistants. Such technologies learn from your past choices and make better suggestions the more you use them. They also collect data from all other users, which helps segment and provide more targeted options. For example, travel assistant Mezi keeps living profiles of each traveler and automatically updates them depending on user behavior. This allows agents to provide more personalized services. Learn more about customer personalization in travel in our article dedicated to that topic.
AI-enabled Travel Dashboard by Mezi
Not only does machine learning help predict delays, it also can provide insight on future travel prices. By analyzing historical data, algorithms provide accurate forecasts about the drops or spikes of flight and hotel prices depending on demand, seasonal trends, and other factors. Analytics-based software like one that AltexSoft built for Fareboom allows agents to plan days, weeks, or months ahead and save time and money.
Overcoming the digital divide
These days, corporate travel is struggling with solving problems that the current technological landscape can help resolve. The boosted prices for stays, unstable natural and political environments, travelers’ demands — all these factors affect how companies and their employees do business.
The main takeaway here is that technology doesn’t exist to replace travel managers and automate business travel altogether. In fact, it empowers agents with tools enabling them to reduce risk and stress for corporate travelers. Utilizing services and tools will bring faster and more efficient solutions, freeing time for strategic and analytical decisions.
Liked the story? Clap for us so more people can find it! 👏
Originally published at AltexSoft’s blog: “Corporate Travel Management: Driving Technological Transformation in the World of Business Travel”
This story is published in The Startup, Medium’s largest entrepreneurship publication followed by 299,352+ people.
Subscribe to receive our top stories here.
| Corporate Travel Management: Driving Technological Transformation in the World of Business Travel | 177 | corporate-travel-management-driving-technological-transformation-in-the-world-of-business-travel-107cf48e3f6a | 2018-04-11 | 2018-04-11 10:30:20 | https://medium.com/s/story/corporate-travel-management-driving-technological-transformation-in-the-world-of-business-travel-107cf48e3f6a | false | 2,344 | Medium's largest publication for makers. Subscribe to receive our top stories here → https://goo.gl/zHcLJi | null | null | null | The Startup | null | swlh | STARTUP,TECH,ENTREPRENEURSHIP,DESIGN,LIFE | thestartup_ | Travel | travel | Travel | 236,578 | AltexSoft Inc | Being a Technology & Solution Consulting company, AltexSoft co-builds technology products to help companies accelerate growth. | fb641da4b895 | AltexSoft | 572 | 13 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 9c47f8def1a1 | 2018-05-01 | 2018-05-01 14:05:12 | 2018-05-01 | 2018-05-01 14:00:57 | 1 | false | en | 2018-05-02 | 2018-05-02 08:19:41 | 4 | 107d70217417 | 2.301887 | 5 | 0 | 0 | null | 5 | Swift for TensorFlow is now open source
TensorFlow has continued its success in 2017 well into 2018. It’s quickly expanding its capabilities, and we’re beginning to see it used by engineers that aren’t data specialists. We’ve seen that in the launch of TensorFlow.js, which allows you to bring machine learning to the browser. But Swift for TensorFlow is a slightly different proposition. In fact, it does two things. On the one hand it offers a new way of approaching TensorFlow, but it also helps to redefine Swift.
Let’s be honest — Swift has come a long way since it was first launched by Apple back at WWDC 2014. Back then it was a new language created to reinvigorate iOS development. It was meant to make Apple mobile developers happier and more productive. That is, of course, a noble aim — and by and large it seems to have worked. If it hadn’t we probably wouldn’t still be talking about it. But Swift for TensorFlow marks Swift as a powerful modern programming language that can be applied to some of the most complex engineering problems.
What is Swift for TensorFlow?
Swift for TensorFlow was first unveiled at the TensorFlow Dev Summit in March 2018. Now it’s open source, it’s going to be interesting to see how it shapes the way engineers use TensorFlow — and, of course, how the toolchain might shift.
But what is it exactly? Watch the video below, recorded at TensorFlow Dev Summit, to find out more.
Here’s what the TensorFlow team had to say about Swift for TensorFlow in a detailed post on Medium.
“Swift for TensorFlow provides a new programming model that combines the performance of graphs with the flexibility and expressivity of Eager execution, with a strong focus on improved usability at every level of the stack. This is not just a TensorFlow API wrapper written in Swift — we added compiler and language enhancements to Swift to provide a first-class user experience for machine learning developers.”
Why did TensorFlow choose Swift?
This is perhaps the key question: why did the TensorFlow team decide to use Swift for this project? The team themselves note that they are often asked this question themselves. Considering many of the features of Swift for TensorFlow can easily be implemented in other programming languages, it’s a reasonable question to ask.
To properly understand why TensorFlow chose Swift you need to go back to the aims of the project. And they’re actually quite simple — the team want to make TensorFlow more usable. They explain:
“We quickly realized that our core static analysis-based Graph Program Extraction algorithm would not work well for Python given its highly dynamic nature. This led us down the path of having to pick another language to work with, and we wanted to approach this methodically.”
The post on GitHub is well worth reading. It provides a detailed insight into how to best go about evaluating the advantages and disadvantages of one programming language over another.
Incidentally, The TensorFlow team say the final shortlist of languages was Swift, Rust, Julia, and C++. Swift ended up winning out — there were ‘usability concerns’ around C++ and Rust, and compared to Julia not only was there a larger and more active community, it is also much more similar to Python in terms of syntax.
| Swift for TensorFlow is now open source | 7 | swift-for-tensorflow-is-now-open-source-107d70217417 | 2018-05-17 | 2018-05-17 22:35:39 | https://medium.com/s/story/swift-for-tensorflow-is-now-open-source-107d70217417 | false | 557 | Insight and analysis on how the software landscape is changing. And how it’s changing the world. | null | PacktPub | null | Packt Hub | null | packt-hub | TECHNOLOGY,TECH,SOFTWARE | PacktPub | Swift | swift | Swift | 13,689 | Richard Gall | Charming Millennial | 9bd134c414c1 | richggall | 94 | 129 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-07-09 | 2018-07-09 07:40:33 | 2018-07-09 | 2018-07-09 11:11:00 | 2 | false | en | 2018-08-30 | 2018-08-30 17:02:17 | 7 | 107ebd42da6a | 2.941824 | 795 | 9 | 0 | Virtual Rehab’s evidence-based solution uses Virtual Reality, Artificial Intelligence, & Blockchain technology for Pain Management… | 5 | Serving the Market of the Most Vulnerable Populations Out There
Virtual Rehab’s evidence-based solution uses Virtual Reality, Artificial Intelligence, & Blockchain technology for Pain Management, Prevention of Substance Use Disorders, and Rehabilitation of Repeat Offenders.
So, let us begin by throwing out some key facts -
There are approximately 255 million individuals suffering from substance use disorders and roughly $100 billion is being spent on addiction treatment worldwide. In addition, according to the International Centre for Prison Studies, the global prison population is currently set at 10.5 million. Prison budgets are also currently set at roughly $35.2 billion dollars worldwide. These numbers are huge and both costly to the governments, tax payers, and society.
We have a problem amigos and amigas ! No. We have a serious problem !
Every person in life seeks a second chance. Inmates and substance addicts are no exception. In fact, they are the ones that are in most dire need for help, support, and development to become improved citizens upon their release from prisons or rehabilitation (“rehab”) centers. This is realized through correctional and rehabilitation programs that will prepare them to lead their future lives in a positive manner to avoid the possibility of repeat offending and substance addictions.
That’s exactly the reason why we launched Virtual Rehab back in 2017. We wanted to make a difference in this world and in the lives of those who are in most dire need for our help. We rolled-up our sleeves and we got to work. Not only that. We equally loved every bit of the challenges and the successes.
So, let us tell you more about Virtual Rehab and what exactly are we doing ? (Note that we can go on and on about this, but we appreciate that you don’t have all the time for us — but if you do, we will share our White Paper at the end of the article)
As mentioned earlier, Virtual Rehab’s evidence-based solution leverages the advancements in virtual reality, artificial intelligence, and blockchain technologies for pain management, prevention of substance use disorders, and rehabilitation of repeat offenders.
At Virtual Rehab, our innovative and our powerful solution (supported by existing research) is intended to rehabilitate rather than just punish. The scope of our solution includes pain management, psychological, and correctional rehabilitation. However, the Virtual Rehab team reserves the right to explore new industries to further expand its global operations accordingly. Our technology includes services in a telemedicine context and can extend to individual users of the Virtual Rehab solution to serve the B2C market, in addition to hospitals, rehab centers, correctional facilities, correctional officers, inmates, etc. to serve the B2B market. Furthermore, using blockchain technology, we can now reach out to those vulnerable populations directly, to offer help and reward, by empowering them with the use of Virtual Rehab’s Utility ERC-20 $VRH Token within our network.
Virtual Rehab believes that putting a kid in the corner does not teach them how to be a better person but rather teaches them not to get caught. Therefore, we are in it for the social good and to help address the needs of the most vulnerable populations out there.
Listen, since this is our very first article, we will leave it at that for now. However, we will be sharing much more in future articles, so please bear with us as we are very passionate about what we do.
In the meantime, if you can’t wait and want to learn more, we just released our White Paper earlier today !!! So, go ahead and check it out and let us know if you have any questions. We are on Telegram, Twitter, Facebook, LinkedIn, YouTube, and yes, now, we are here.
Our $VRH Token Sale will last until October 31st (Private Sale), November 1st (Pre-Sale), and November 15th (Main Sale).
OK ! Now that we got this out of the way, we will let you go and we will be back with more. Be Safe and Make a Difference in this World !!!
Peace Out !
| Serving the Market of the Most Vulnerable Populations Out There | 1,862 | serving-the-market-of-the-most-vulnerable-populations-out-there-107ebd42da6a | 2018-08-30 | 2018-08-30 17:02:17 | https://medium.com/s/story/serving-the-market-of-the-most-vulnerable-populations-out-there-107ebd42da6a | false | 678 | null | null | null | null | null | null | null | null | null | Cryptocurrency | cryptocurrency | Cryptocurrency | 159,278 | Virtual Rehab | Virtual Rehab's evidence-based solution uses #VR, #AI, & #blockchain technology for Prevention of Substance Use Disorders & Rehabilitation of Repeat Offenders | f0264e3a3a70 | VirtualRehab | 2,168 | 2 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | eab1500c3930 | 2018-09-23 | 2018-09-23 21:29:15 | 2018-09-23 | 2018-09-23 21:32:31 | 1 | false | pt | 2018-10-10 | 2018-10-10 15:27:09 | 0 | 1081cbbd986 | 2.150943 | 1 | 0 | 0 | Palestrantes: Ricardo Cappra | 4 | 10 Coisas Sobre um Futuro Inundado de Dados
Palestrantes: Ricardo Cappra
Hub: Tanguera / Estudio de Danza
Horário: 10h15
___________________________________________________________________
Overview da palestra:
Como lidar com tanta informação que recebemos/produzimos.
___________________________________________________________________
Highlights:
O cientista-chefe da Cappra Data Science, empresa que desenvolve frameworks para implementação de cultura analítica e métodos para facilitar o uso da ciência de dados, trouxe em sua palestra os 10 ideias sobre a quantidade de dados que estamos produzindo e recebendo:
1. Caos: Há produção de muitos dados atualmente, essa tendência jamais vai diminuir, tende somente a evoluir. Com isso, há caos: nosso cérebro tem uma quantidade limitada para consumo de informação .
2. Overdose: Não é apenas analogia, mas existe mesmo. Infoxication é uma doença reconhecida pela OMS, e se trata de intoxicação por excesso e informação. Estamos muito perto de uma overdose de informações, visto que é um vício consumi- las. Para provar isso, basta pensar no que a maioria das pessoas faz ao abrir os olhos para acordar: abrir o smartphone.
3. Privacidade: As redes sociais, como o facebook, nos controlam com base no sistema punição/recompensa. Quando damos like em algo sabemos que mais informações desse tipo serão nos dada, quando não damos importância a determinado anúncio, o algoritmo irá nos punir não nos dando mais aquele tipo de conteúdo. Deixando esses rastros nas redes e como fica a nossa privacidade? O poder que essas coletas tem pode ser enorme.
4. Ruído Ensurdecedor: Com a quantidade de informação disponível, fica difícil saber o que é útil ou não. Esse “ruído” de informação precisa ser eliminado, filtrado, para que se possa transformar dados em informação relevante. O desafio é transformar o Big Data em Smart Data.
5. Swin Lanes (raias de piscina): Captamos dados e os separamos praticamente em raias. O conteúdo separado diminuiu nossa visão ampla sobre uma realidade. É preciso eliminar as raias para enxergar tudo em um ambiente comum.
6. Cultura Analítica: O Brasil é carente de uma educação analítica, que se baseie em comparativos e análises. É preciso que essa lógica seja ensinada desde o ensino fundamental.
7. Dados contam histórias: Temos que passar a tomar decisões sistemáticas, ou seja, baseadas nos dados que são relevantes para nós. É importante aprender o “pensamento do algoritmo”.
8. Cérebros sintéticos: Os algoritmos aprendem conforme são ensinados pelas experiências humanas, ou seja, se não os “ensinarmos” eles tomarão decisões sozinhos, que não são as pretendidas.
9. Ciência de dados: é importante valorizar a ciência de dados, que perpassa conhecimentos pelas áreas da ciência, tecnologia e negócios.
10. Dados para o bem: Todos os dados que podemos apreender podem nos dar muito poder. É importante que levemos esse conhecimento para a o bem. Cappra citou o exemplo do projeto Data4good, iniciativa que dava 48h para que alunos pudessem criar um protótipo de dados.
___________________________________________________________________
Reflexões da Trendspotter:
Achei de extrema importância todo o conhecimento apresentado por Cappra. Acredito que são tendências que precisamos investir logo, pois é o caminho que pode dar mais certo para o futuro de um mundo baseado em dados.
___________________________________________________________________
Trendspotter: Maira Miguel, Social Media na Global
| 10 Coisas Sobre um Futuro Inundado de Dados | 1 | 10-coisas-sobre-um-futuro-inundado-de-dados-ricardo-cappra-1081cbbd986 | 2018-10-10 | 2018-10-10 15:27:09 | https://medium.com/s/story/10-coisas-sobre-um-futuro-inundado-de-dados-ricardo-cappra-1081cbbd986 | false | 517 | BS Festival Trendspots by GPRS | null | null | null | BS Trendspots by GPRS | null | bstrendspotsbygprs | null | null | Big Data | big-data | Big Data | 24,602 | Grupo de Planejamento RS | Representante dos profissionais de planejamento, estratégia e gestão de marcas no Rio Grande do Sul | a1ec50072fc3 | GrupoPlanejRS | 47 | 3 | 20,181,104 | null | null | null | null | null | null |
0 | if (2 eyes && 1 nose && 1 mouth) {
It's a face!
}
if (2 adjacent eyes && nose under eyes && mouth under nose) {
It's a face!
}
git clone https://github.com/bourdakos1/capsule-networks.git
pip install -r requirements.txt
python main.py
python main.py --is_training False
| 6 | null | 2017-11-17 | 2017-11-17 11:04:28 | 2017-11-10 | 2017-11-10 17:04:16 | 6 | false | en | 2017-11-17 | 2017-11-17 11:05:02 | 11 | 1082ca88bebe | 4.368868 | 1 | 0 | 0 | If you follow AI you might have heard about the advent of the potentially revolutionary Capsule Networks. I will show you how you can start… | 1 | Capsule Networks Are Shaking up AI — Here’s How to Use Them
Geoffrey Hinton [Source]
If you follow AI you might have heard about the advent of the potentially revolutionary Capsule Networks. I will show you how you can start using them today.
Geoffrey Hinton is known as the father of “deep learning.” Back in the 50s the idea of deep neural networks began to surface and, in theory, could solve a vast amount of problems. However, nobody was able to figure out how to train them and people started to give up. Hinton didn’t give up and in 1986 showed that the idea of backpropagation could train these deep nets. However, it wasn’t until 5 years ago in 2012 that Hinton was able to demostrate his breakthrough, because of the lack of computational power of the time. This breakthrough set the stage for this decade’s progress in AI.
And now, on October 26, 2017, he has released a paper on a new groundbreaking concept, Capsule Networks.
Note: I won’t go into too much detail, because Hinton’s papers do a fabulous job at explaining all the technical information and can be found here and here.
Up until now Convolutional Neural Networks (CNNs) have been the state-of-the-art approach to classifying images.
CNNs work by accumulating sets of features at each layer. It starts of by finding edges, then shapes, then actual objects. However, the spatial relationship information of all these features is lost.
This is a gross oversimplification, but you can think of a CNN like this:
You might be thinking that this sounds pretty good, it makes sense, and it does. Although, we might run into a few problems, take this picture of Kim Kardashian for example:
Yikes! There’s definitely two eyes, a nose and a mouth, but something is wrong, can you spot it? We can easily tell that an eye and her mouth are in the wrong place and that this isn’t what a person is supposed to look like. However, a well trained CNN has difficulty with this concept:
In addition to being easily fooled by images with features in the wrong place a CNN is also easily confused when viewing an image in a different orientation. One way to combat this is with excessive training of all possible angles, but this takes a lot of time and seems counter intuitive. We can see here the massive drop in performance by simply flipping Kim upside down:
Finally, convolutional neural networks can be susceptible to white box adversarial attacks. Which is essentially embedding a secret pattern into an object to make it look like something else.
Fooling Neural Networks in the Physical World with 3D Adversarial Objects [Source]
“Convolutional neural networks are doomed” — Geoffrey Hinton
Architecture of CapsNet
The introduction of Capsule Networks gives us the ability to take full advantage of spatial relationship, so we can start to see things more like:
You should be able to see that with this definition our neural net shouldn’t be as easily fooled by our misshapen Kardashian.
This new architecture also achieves significantly better accuracy on the following data set. This data set was carefully designed to be a pure shape recognition task that shows the ability to recognize the objects even from different points of view. It beat out the state-of-the-art CNN, reducing the number of errors by 45%.
CapsNet was able to identify the bottom images were within the same category (animals, humans, airplanes, cars, trucks) as the correspoding top image far better than CNNs.
Further more, in their most recent paper, they found that Capsules show far more resistance to white box adversarial attack than a baseline convolutional neural network.
I have pieced together a repo that is an implementation of Hinton’s paper (many thanks to naturomics). In order to use the Capsule Network model you first need to train it.
The following guide will get you a model trained on the MNIST data set. For those of you who don’t know, MNIST is a data set of handwritten digits and is a good baseline for testing out machine learning algorithms.
Start by cloning the repo:
And install the requirements:
Start training!
The MNIST data set is 60,000 training images. By default the model will be trained for 50 epochs at a batch size of 128. An epoch is one full run through the training set. Since the batch size is 128 it will do about 468 batches per epoch.
Note: Training might take a very long time if you don’t have a GPU. You can read this article on how to speed up training time.
Once our model is fully trained we can test it by running the following command:
Capsule Networks seem awesome, but they are still babies. We could see problems in the future when training huge datasets, but I have faith.
P.S. Here is a great video that I recommend taking the time to watch.
Thanks for reading! If you have any questions, feel free to reach out at [email protected], connect with me on LinkedIn, or follow me on Medium.
If you found this article helpful, it would mean a lot if you gave it some applause👏 and shared to help others find it! And feel free to leave a comment below.
Originally published at hackernoon.com on November 10, 2017.
| Capsule Networks Are Shaking up AI — Here’s How to Use Them | 1 | capsule-networks-are-shaking-up-ai-heres-how-to-use-them-1082ca88bebe | 2018-01-30 | 2018-01-30 07:18:53 | https://medium.com/s/story/capsule-networks-are-shaking-up-ai-heres-how-to-use-them-1082ca88bebe | false | 906 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Fang Yinfeng | null | ac65d96e73a8 | yinfengfang | 0 | 2 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-09-06 | 2018-09-06 17:12:12 | 2018-09-06 | 2018-09-06 18:53:50 | 3 | false | en | 2018-09-06 | 2018-09-06 18:53:50 | 16 | 10844c327612 | 3.840566 | 2 | 0 | 0 | What’s New | 5 | Sentiment Analysis for Algorithmic Trading + Free Data Science Books
What’s New
Cannon Beach — Haystack Rock
I got back from a trip to Oregon and the views are stunning. I meandered up from Florence (best chowder is at the Depot) to Astoria (where some of the Goonies got filmed). Cuteness, breweries and antiques abound!
Things I Enjoyed:
Cascades Brewery, Portland — even if you are not a sour beer fan they will convert you. Barrel-aging in wine barrels works wonders. You should also go if you love tie-dye shirts.
Pok Pok, Portland — small plates, entrees, house brews and cocktails make up this super cute place to eat fantastic Thai. Yum.
Portland Markets — lots of gifted people have put down roots in Portland. You will not leave empty handed.
Florence and Yachats — If you like browsing cute shops, cruising dunes and eating good food then Florence is where it’s at. Go to the Green Salmon Coffee Co. in Yachats for a plethora of drinks. Try the Clockwork Orange Mocha.
Cannon Beach — see the Haystack Rock above!
Sentiment Analysis for Algorithmic Trading — Webinar with Datacamp and Quantopian
Dialed into a webinar this morning where there was a quick walk-through on using sentiment analysis to determine whether to long or short a stock. Some was a review of things I’ve learned previously in projects and there were some things that I now have to check out!
Notes/Further Education:
Lemmatization — an improvement on stemming words (just keeping the root word) in text processing, it takes into account parts of speech (adverb, noun etc.) and thus improves context and accuracy. There’s some really cool things you can look up in the dataset too, such as word similarity with the accompanying strength of that relationship.
Sense2Vec — provides more context for word embedding (where words or phrases from the vocabulary are mapped to vectors of real numbers). I had only heard of Word2Vec before now so I’m interested in trying this out for sure.
Using Recurrent Neural Networks over simple Neural Networks for text processing is recommended. RNN views sentences as sequences of words, whereas simple NN does not. Order matters! Especially when you are using n-grams to properly classify a sentence as positive or negative. Ex: Seeing “Not bad” versus just seeing “bad”
Long Short Term Memory Network (LSTM) — LTSM are a kind of RNN. It processes data sequentially and uses distance and weight as part of the training process. It is capable of learning long-term dependencies (aka can remember information for long periods of time, can forget when necessary).
Factor Models — Models used in asset pricing and portfolio management (CAPM, Fama-French Factors) etc. Multi Factor Models can be used to explain either an individual security or a portfolio of securities. Quantopian actually looks like it has a really nice environment for creating and testing algorithms. You work within a Jupyter Notebook with some pre-loaded finance packages and data. Check them out if interested.
During the webinar the speaker went through an example of using sentiment analysis with sentiment140.com data. He showed, inside Quantopian, how you could create and analyze a strategy of shorting/going long on the most positively/negatively classified companies and have a profitable spread.
One thing that I thought of while this webinar was running was the possibility of bot accounts distorting actual sentiment and thus the results of the algorithm, seeing as sentiment analysis is a popular go-to for trading algorithms now. If there were groups interested in taking advantage of companies/individual users who use sentiment analysis in their algorithms, they could do so by creating thousands of tweets that would advocate buying/selling a stock using positive/negative keywords/phrases in tweets. These groups could be trying to push up the value of companies they have a vested interest in and sell high/hold, punish competitors, or sneak in and buy stock when a negatively classified company’s stock value goes down temporarily.
In short, if I ever end up doing sentiment analysis using social media I will be reading posts like Identifying “Dirty” Twitter Bots with R and Python by Paul van der Laken before I move onto text processing and deep learning.
Learn Hands-On - Sentiment Analysis:
Sentiment Analysis in R — Datacamp
Shorting based on Sentiment Analysis signals — Python for Finance 11 — Quantopian
FREE Books!
There is currently a deal on at Humble to get up to 15 O’Reilly Data Science books. Hurry if you want to take advantage, there’s only 4 days left to do so!
Humble Book Bundle: Machine Learning by O’Reilly
Machine Learning Is Changing the Rules
Introduction to Machine Learning with R
Introduction to Machine Learning with Python
Thoughtful Machine Learning with Python
Machine Learning for Hackers
Practical Machine Learning with H2O
Natural Language Annotation for Machine Learning
An Introduction to Machine Learning Interpretability
Learning TensorFlow
Machine Learning and Security
Feature Engineering for Machine Learning
Learning OpenCV
Fundamentals of Deep Learning
Deep Learning
Deep Learning Cookbook
Something Completely Different
I accidentally discovered a fantastic Github Collection on Machine Learning. Check it out!
Calamityware has a Threadless store with fantastic monster inspired t-shirts that you could previously find on plates/mugs/scarves/shower curtains. They are a ton of fun.
| Sentiment Analysis for Algorithmic Trading + Free Data Science Books | 9 | sentiment-analysis-for-algorithmic-trading-free-data-science-books-10844c327612 | 2018-09-06 | 2018-09-06 18:53:51 | https://medium.com/s/story/sentiment-analysis-for-algorithmic-trading-free-data-science-books-10844c327612 | false | 872 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Brianna Taylor | Data Analyst with an Interest in Machine Learning and AI | 7c61bf248ddd | briannataylor007 | 4 | 76 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | e851b8191a08 | 2018-01-09 | 2018-01-09 20:48:20 | 2018-01-10 | 2018-01-10 08:54:38 | 3 | false | en | 2018-01-10 | 2018-01-10 23:09:20 | 6 | 1085a863ed17 | 3.025472 | 1 | 0 | 0 | Artificial intelligence and machine learning algorithms have been with us for quite some time in everyday applications that we use, for… | 4 | How to survive automatization (AI) and boost your career
Artificial intelligence and machine learning algorithms have been with us for quite some time in everyday applications that we use, for example, the recommendations Facebook uses for the publications, Uber or Waze algorithms when driving, these are just some examples of technology that we already have used advances in artificial intelligence and machine learning for some time.
Let’s take into account that every time an “industrial revolution” begins, after some jobs that disappear, new ones also appear. As I will explain in the following article, we must prepare ourselves to develop our human critical thinking skills and exploit all those qualities that prepare us for the jobs that are coming.
What the AI represents to the world of work
About 47% of the U.S workforce is in jobs at high risk for becoming automated within the next two decades, according to the 2013 Oxford University study.
Customer care is in the midst of mass disruption. Technologies are advancing at record speed, driving customer expectations higher than ever. With the integration of AI with cognitive capabilities, they offer a much cheaper alternative from the point of view of companies to reduce costs. IBM is one of the pioneer companies in developing this type of disruptive technologies on the customer service industry as this 2018 trends article explain.
This represents a big alert for countries like mine (Guatemala) where a large part of the young generation belongs to the customer service industry, where they generate more than 35,000 jobs, according to the Guatemalan economy minister.
This means that many of the jobs that represent performing repetitive tasks will become automated. This will initially generate a certain degree of inequality for people whose jobs are affected.
Prepare yourself
Always remember that this in the first instance means more than the dissolution of complete works only those tasks that can be automated. To prepare for this future, investigate AI tools in your own field. Learn how to use them and exploit them to increase your own productivity.
Learn how to think, prepare for jobs don’t exist yet
Let us also remember that, like other international revolutions, they have served to create new jobs. In fact 65 percent of today’s schoolchildren will eventually be employed in jobs that have yet to be created, according to this U.S. Department of Labor report. That also means that many currently employed workers for the first time since the industrial revolution must be thinking about what they will do to make a living 10 to 20 years from now. So let’s exploit our creativity, curiosity and beings aware of consciousness to work hand in hand with these new tools.
According to the World Economic Forum these are the second of the skills to be reinforced for future positions.
Image: World Economic Forum
Explore as a Freelance
Today, more than 57 million workers — about 36% of the US workforce — freelances. Based on current workforce growth rates found in Freelancing in America: 2017, the majority of the US workforce will freelance by 2027. The youngest workforce generation is leading the way, with almost half of millennials freelancing already.
So a good way to be prepared is to begin to familiarize yourself with the freelance work mode and the tools. Such as UpWork , Freelancer or Fiverr.
Remember our responsibility as technology creators
This fourth industrial revolution has to be created always thinking of teamwork so that the AI serves as a tool for productivity and empowerment and not as a substitute. Our work as developers of technology must always be to put human needs and that the machines are to make us more useful and productive.
‘We use the term augmented intelligence [rather than artificial intelligence],’ Paul Ryan, head of Watson Artificial Intelligence, IBM UK, tells Metro.co.uk at the AI Summit.
‘It’s all about helping humans do their jobs better, it’s man plus machine being greater than man or machine.’
| How to survive automatization (AI) and boost your career | 1 | how-to-survive-automatization-ai-and-boost-your-career-1085a863ed17 | 2018-10-01 | 2018-10-01 17:37:12 | https://medium.com/s/story/how-to-survive-automatization-ai-and-boost-your-career-1085a863ed17 | false | 656 | Whether you want to start your business, scale or reduce your operating costs, Arly’s platform helps you in the process. Imagine the concept of renting a place for your business, in Arly you can rent the online store, the staff and the distribution logistics. | null | arlystores | null | Arly Stores | arly-stores | AI,BOTS,YCOMBINATOR | arlystores | 2018 | 2018 | 2018 | 8,179 | @janpoloy | Cool bio description [email protected] https://linkedin.com/in/juan-pablo-https://github.com/misterpoloy | 8a9f63a61257 | misterpoloy | 12 | 16 | 20,181,104 | null | null | null | null | null | null |
|
0 | brew install webdriver
| 1 | null | 2018-08-31 | 2018-08-31 05:38:25 | 2018-09-07 | 2018-09-07 17:52:31 | 7 | false | en | 2018-09-07 | 2018-09-07 17:52:31 | 9 | 1087b9fcd734 | 6.042453 | 2 | 0 | 0 | Code Walkthrough For A Data Analysis Project | 5 | Web-scraping and Analyzing Gods Unchained Card Data
Code Walkthrough For A Data Analysis Project
https://youtu.be/T3GPZi6BQAo
This article is intended to explain the technical process of my previous post: Relative Value of Cards in Gods Unchained. There, I show results from exploratory data analysis of card data from a new blockchain based game called Gods Unchained. Below is an excerpt from my previous post that provides an overview of the game.
Gods Unchained (GU) is a digital trading card game (TCG) that runs in a downloadable app with the cards stored on the Ethereum blockchain. The game has recently gained attention when the Mythic Titan card known as Hyperion sold for 146.28 ETH, which at the time was ~$62,000! There are still two Mythic Titans hiding in the packs, however the chances to find one are one in a million chance per card reveal. The coolest thing about the game is the cards exist on the Ethereum blockchain. A major benefit of this is that the game allows the user to permanently own their digital cards, so they have the power to trade or sell their cards on sites like OpenSea. As an avid Hearthstone player who has spent lots on money on packs, I am excited for this feature. I can optimize my collection and not have to destroy my account-locked cards to obtain competitive cards like in Hearthstone.
This isn’t the only difference, the game has a Battle Royale Mode that will allow players to take copies of cards from beaten opponents and use them to survive. It’s a similar concept to survival shooter games like Fortnight or PUBG, except with cards! The Beta is expected to be released in early October ’18 and go live early January ’19. There is also a tournament happening Q1 2019 with up to a $1.6 million prize pool, depending on pack sales! The timeline for the project can be found at the bottom of the page here.
Objective
How does the Gods Unchained rarity specific pack selection system affect the value of the cards?
To gain insight on this question, I needed to get my hands on some real time card data and put it in a visual form for easy understanding. The image in Figure 1 shows the block diagram visualization of my data collection and processing pipeline. Here is a short outline of the code that you will see in this post:
Card data gathering
Simulation of a “perfect world” card pulling
Processing data, storing in tables and generation of interesting plots
Figure 1: Data Pipeline Block Diagram
Let’s dive in!
Part 1a: Ad Hoc — Web Scraping
This is the painful way to get the data from data from the Gods Unchained website. They have an area on their webpage that shows all the possible cards that are available in the Genesis set, the pre-release card set. The webpage shows the card image and the amount of each card per shininess (example in Figure 1).
Figure 2: Example of GU card
The cards span across 12 different pages, this can be seen on the bottom of Figure 3.
Figure 3: Example of one page of cards
The images and information exist in the html code of webpage and we can access the data in a variety of ways. Unfortunately, we can’t simply download the page source code, because it only contains function for rendering, not the data we want. Due to this, we need to use the selenium package. Selenium allows us to download rendered html of a webpage by controlling a browser (Chrome in my case).
In our web-scraping python program, we import the following libraries:
BeautfulSoup will be used to clean up the html and pandas will hold data in a convenient structure called a dataframe. The time library is used to add delays in our code.
During the installation of these packages, I had issues with webdriver. I ended up needing to use homebrew and running the following command:
Automation of the browser is a bit confusing, so here is a breakdown of the code.
Line 1: Opens an instance of Chrome
Line 3: Browser goes to the url specified in Line 2
Line 5: We add a delay because we need to manually change to 1 of the 12 pages we want data from. This is where the inefficiency on this method comes in. We need to rerun the script for each page.
Once we have the massive innerHTML data, we need to search for the data of interest. We want the card names, rarity and card count. We can use BeautifulSoup to parse the html and make it slightly easier to traverse.
We then initialize a pandas dataframe with column names and save the dataframe to a text file for the next step in the pipeline
Now we see a better way to do this.
Part 1b: Efficient Method — API
The Fuel Games team has an API (Application Program Interface) that makes it extremely easy to download any data you could want about the cards in the game. When I started this project, I was unaware of this and due to my utter excitement, used a spoon to dig a 10 foot hole (but hey it worked!).
This method uses the following libraries.
Line 1–2 & Lines 4–5: We use the request library to gain access to two different tables that contain the relevant data and store them in two separate dataframes cardDist and cardNames.
Once we have the two dataframe we do the following:
Lines 1–3: Every card has a unique id, and here we make a list of them
Lines 9–25: We use the card ids and process the tables to find the amount of cards in each
Part 2: Simulating Pack Openings
Below is the code to simulate a random selection of cards based on rarity. The rarities that were used can be found in the bottom of this GU page.
Now let’s look at the code:
Line 4: We fill a list with card rarities depending on their respective percentage.
Line 8–9: Here we (uniformly) randomly select elements of the pos list and append it to list simulated_space.
Line 19–29: All the elements are counted and placed in their four respective bins. We then add the four bins to a list and move on to add it to a bar plot.
Part 3: Performing Data Analysis
Once we have the data in a dataframe, we can process it and start the visualization.
In this code section we include matplotlib, which is an awesome package for creating plots. For this project, we only need barplots.
For the article, I only looked at rarity distribution, but I also want to see how many of each legendary and epic cards have been found. Here is the code breakdown:
Line 1: We import the text file made by the API method. If we wanted to use the webscrape method, we would have to append the files together.
Line 3–9: We initialize counts for each rarity and two dictionaries that will hold each epic/legendary card name as a key and count the total amount in existence.
Lines 11–28: We iterate through all the data and count the amount of cards in each rarity as well as store epic/legendary cards in the dictionary and count the total amount.
Finally, we take all the data and create bar plots.
Below are the resultant images (data collected on September 7th 2018 at 10:33am).
Figure 4: Distribution of Epic Cards
Figure 5: Distribution of Legendary Cards
Figure 6: Distribution of All Card Rarities
For an analysis of the data, check out my previous article: Relative Value of Cards in Gods Unchained.
Overall, this was a very fun project. If you want to download the files and run the code on your machine, checkout my GitHub. Make sure to use my referral link if you buy packs!
For any questions or comments, please drop a response below.
Notes:
Referral Link (gives me 10% of the packs you buy): https://godsunchained.com/?refcode=0x07453584C359A2b95fe115CC5eA72c56eEFE3Ee2
BTC Wallet: 3BeJmm5dVje9SmW86jg9PP1rXpddPi5vPp
ETH Wallet: 0x11D13a1c8762bdcaF05E4daC15635607C4551A33
Direct link to the GitHub files can be found here.
| Web-scraping and Analyzing Gods Unchained Card Data | 51 | web-scraping-and-analyzing-gods-unchained-card-data-1087b9fcd734 | 2018-09-07 | 2018-09-07 17:52:31 | https://medium.com/s/story/web-scraping-and-analyzing-gods-unchained-card-data-1087b9fcd734 | false | 1,323 | null | null | null | null | null | null | null | null | null | Python | python | Python | 20,142 | Danny Mendoza | Data Scientist. Have fun and learn alittle. | e6d41c732514 | jdannym93 | 4 | 9 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 863f502aede2 | 2018-05-28 | 2018-05-28 19:44:03 | 2018-05-28 | 2018-05-28 19:45:54 | 4 | false | en | 2018-05-28 | 2018-05-28 19:45:54 | 2 | 1088fadffcd4 | 3.850943 | 8 | 0 | 0 | May 15th — Fujitsu and Nissha Develop AI Solution for Bookstores
Fujitsu and Japan Publishing Co. (Nissha) together introduce SeleBoo, an… | 5 | AI Biweekly: 10 Bits from May W3 — May W4
May 15th — Fujitsu and Nissha Develop AI Solution for Bookstores
Fujitsu and Japan Publishing Co. (Nissha) together introduce SeleBoo, an AI solution the leverages information on 3.5 million books and some 3,000 bookstores nationwide to automatically stock books based on characteristics of each bookstore. Bookstore staff can better understand customer purchase trends and update book displays accordingly.
May 16th — Sony Real Estate Launches AI Website for Condominium Sales
Sony Real Estate releases an AI-powered intelligent website, “Apartment AI Report,” which can estimate price ranges of condominiums based on prefectures, villages, stations and other information. The system can automatically estimate purchase price changes due to the age of the house for example. It can also rank average condominium sale prices by area, age of building, floor plan, etc.
May 17th — Intel and Mobileye Test Self-Driving Car Fleet in Jerusalem
Intel and Mobileye start the testing phase for their fleet of 100 autonomous vehicles in Jerusalem’s challenging traffic conditions. Israel-based Mobileye’s technology is driving the project with a Responsibility-Sensitive Strategy to improve safety and prove their technology is capable of handling the city’s diverse geography and driving conditions.
May 21st — Johns Hopkins & Lustgarten Develop AI Solution for Pancreatic Cancer Detection
The world’s largest centres for pancreatic cancer treatment employ NVIDIA GPUs to develop a deep learning solution for early detection of pancreatic cancer. With support from the Lustgarten Foundation, Johns Hopkins Hospital has launched the Felix project, which leverages big data to train a machine learning algorithm to spot pancreatic cancer at an early stage. Test results of the machine learning model detected pancreatic cancer with a success rate of 90%.
May 22nd — Samsung Announces Three New Global AI Research Centres
Samsung Research will launch AI Centers in Cambridge, the U.K. (May 22nd), Toronto, Canada (May 24th), and Moscow, Russia (May 29th), joining existing centres in South Korea and the United States. The Toronto Centre will be led by Dr. Larry Heck, Senior Vice President of Samsung Research America, and focus on developing core AI tech with strategic cooperation from local universities; while the Moscow Centre will draw from Russia’s expertise in mathematics, physics and other fundamental science. Samsung Research plans to raise its global number of AI researchers to 1,000 by 2020.
May 22nd — Amazon Adds Calendar Appointment Capability to Alexa
The Alexa voice assistant gains a new smart calendar function which will allow users to reschedule meetings based on availability. Users can ask Alexa to move an existing meeting, and the Amazon assistant will search through the different schedules of meeting participants to suggest available time slots for rescheduling.
May 23rd — Intel Introduces the Latest Nervana Neural Network Processors
At the Intel AI DevCon, Intel VP and GM Artificial Intelligence Products Group Naveen Rao introduces optimized Intel Xeon Scalable Processors which deliver significant performance improvements in training and inference compared to previous generations. Based on feedback from the Nervana Neural Network Processor (NNP) prototype Lake Crest, Intel is now building the first commercial NNP product, the Intel Nervana NNP-L1000 (Spring Crest), which will be available in 2019.
May 24th — Singapore Collaborated with London to Launch AI in Finance Online Course
Singapore’s Ngee Ann Polytechnic (NP) and the London-based Centre for Finance, Technology, and Entrepreneurship (CFTE) jointly launch the first industry-led AI in Finance online course. The collaboration addresses the need for finance and technology professionals to update their skills. NP and CFTE have brought together 20 industry and thought leaders to share their AI and fintech expertise.
May 24th — Uber Builds Technology Centre in Paris to Develop Flying Taxis
Uber will spend US$23.4 million over the next five years to develop the AI algorithms, air traffic control standards and other tech necessary to send taxis soaring over cities. The project will include machine learning-based transport demand modeling, high-density low-altitude air traffic management simulations, integration of innovative airspace transport solutions and the development of smart grids.
May 24th — Intel AI Lab Open-Sources Deep Learning-Driven NLP Library
Intel AI Lab open-sources a Natural Language Processing library, which will enable researchers and developers to build conversational agents on chatbots and virtual assistants. The library includes valuable features such as name entity recognition, intent extraction, and semantic parsing of words. Intel AI Lab also open-sourced libraries for reinforcement learning and neural networks. The reinforcement learning library allows users to embed an agent in training environments, while the neural network library is able to strip away non-relevant neural connections.
* * *
Analyst: Synced Industry Analyst Team | Editor: Michael Sarazen
* * *
Subscribe here to get insightful tech news, reviews and analysis!
* * *
The ATEC Artificial Intelligence Competition is a fintech algorithm competition hosted by Ant Financial for top global data algorithm developers. It focuses on highly-important industry-level fintech issues, provides a prize pool worth millions. Register now!
| AI Biweekly: 10 Bits from May W3 — May W4 | 73 | ai-biweekly-10-bits-from-may-w3-may-w4-1088fadffcd4 | 2018-05-31 | 2018-05-31 17:36:42 | https://medium.com/s/story/ai-biweekly-10-bits-from-may-w3-may-w4-1088fadffcd4 | false | 835 | We produce professional, authoritative, and thought-provoking content relating to artificial intelligence, machine intelligence, emerging technologies and industrial insights. | null | SyncedGlobal | null | SyncedReview | syncedreview | ARTIFICIAL INTELLIGENCE,MACHINE INTELLIGENCE,MACHINE LEARNING,ROBOTICS,SELF DRIVING CARS | Synced_Global | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Synced | AI Technology & Industry Review - www.syncedreview.com || www.jiqizhixin.com || Subscribe: http://goo.gl/Q4cP3B | 960feca52112 | Synced | 8,138 | 15 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | c2aa71d9ec41 | 2017-11-13 | 2017-11-13 17:13:37 | 2017-11-14 | 2017-11-14 17:32:43 | 5 | false | en | 2018-10-27 | 2018-10-27 02:02:17 | 36 | 108905ab4955 | 8.705031 | 495 | 7 | 2 | In this post, I will outline a strategy to ‘learn pandas’. For those who are unaware, pandas is the most popular library in the scientific… | 5 | How to Learn Pandas
In this post, I will outline a strategy to ‘learn pandas’. For those who are unaware, pandas is the most popular library in the scientific Python ecosystem for doing data analysis. Pandas is capable of many tasks including:
Reading/writing many different data formats
Selecting subsets of data
Calculating across rows and down columns
Finding and filling missing data
Applying operations to independent groups within the data
Reshaping data into different forms
Combing multiple datasets together
Advanced time-series functionality
Visualization through matplotlib and seaborn
Although pandas is very capable, it does not provide functionality for the entire data science pipeline. Pandas is typically the intermediate tool used for data exploration and cleaning squashed between data capturing and storage, and data modeling and predicting.
Data Science Pipeline
For a typical data scientist, pandas will play the largest role as the data traverses through the pipeline. One metric to quantify this is with the Stack Overflow trends app.
Currently, pandas has more activity on Stack Overflow than any other Python data science library and makes up an astounding 1% of all new questions submitted on the entire site.
Stack Overflow Overuse
From the chart above, we have evidence that many people are using and also confused by pandas. I have answered about 400 questions on pandas on Stack Overflow and see first hand how poorly understood the library is. For all of the greatness that Stack Overflow has bestowed upon programmers, it comes with a significant downside. The instant gratification of finding an answer is a massive inhibitor for working through the documentation and other resources on your own. I think it would be a good idea to dedicate a few weeks each year to not using Stack Overflow.
Step-by-Step Guide to Learning Pandas
A couple weeks ago, I posted a simple guide on the r/datascience subreddit when someone asked for help on practicing pandas. The following elaborates on the information from that post.
To begin, you should not actually have a goal to ‘learn pandas’. While knowing how to execute the operations in the library will be useful, it will not be nearly as beneficial as learning pandas in ways that you would actually use it during a data analysis. You can segment your learning into two distinct categories:
Learning the pandas library independent of data analysis
Learning to use pandas as you would during an actual data analysis
The difference between the two is like learning how to saw a few small branches in half versus going out into a forest and sawing down some trees. Let’s summarize these two approaches before getting into more detail.
Learning the Pandas library independent of data analysis: This approach will primarily involve reading, and more importantly, exploring, the official pandas documentation.
Learning to use Pandas as you would during an actual data analysis: This approach involves finding or collecting real-world data and performing an end-to-end data analysis. One of the best places to find data is with Kaggle datasets. This is not the machine learning component of Kaggle, which I would strongly suggest you avoid until you are more comfortable with pandas.
Alternating Approach
During your journey to learn how to do data analysis with pandas, you should alternate between learning the fundamentals from the documentation and their application in a real-world dataset. This is very important, as its easy to learn just enough pandas to complete most of your tasks, and then to solely rely on these basics far too heavily when more advanced operations exist.
Begin with the Documentation
If you have never worked with pandas before, but do have an adequate grasp of basic Python, then I suggest beginning with the official pandas documentation. It is extremely thorough and at its current state, 2,195 pages (careful, link is to full pdf). Even with its massive size, the documentation doesn’t actually cover every single operation and certainly doesn’t cover all the different combinations of parameters you can use within pandas functions/methods.
Getting the Most out of the Documentation
To get the most out of the documentation, do not just read it. There are about 15 sections of the documentation that I suggest covering. For each section, create a new Jupyter notebook. Read this blogpost from Data Camp if you are unfamiliar with Jupyter notebooks.
Your First Jupyter Notebook
Begin with the section, Intro to Data Structures. Open this page alongside your Jupyter notebook. As you read through the documentation, write the code (don’t copy it) and execute it in the notebook. During the execution of the code, make sure to explore the operations and attempt new ways to use them.
Continue with the section Indexing and Selecting Data. Make a new Jupyter notebook and again write, execute, and explore the different operations that you learn. Selecting data is one of the most confusing aspects for beginning pandas users to grasp. I wrote a lengthy Stack Overflow post on .loc vs .iloc which you may want to read for yet another explanation.
After these two sections, you should understand the components of a DataFrame and a Series and know how to select different subsets of data. Now read 10 minutes to pandas to get a broader overview of several other useful operations. As with all sections, make a new notebook.
Press shift + tab + tab to get Help in a Jupyter Notebook
I am constantly pressing shift + tab + tab when using pandas in a Jupyter notebook. When a cursor is placed inside the name, or in the parentheses that follow any valid Python, the documentation for that object pops out into a little scrollable box. This help box is invaluable to me, since its impossible to remember all the different parameter names and their input types.
Pressing shift + tab + tab to reveal the documentation for the stack method
You can also press tab directly following a dot to have a dropdown menu of all the available objects
Pressing tab following a DataFrame lists the 200+ available objects
Major Downside of the Documentation
While the documentation is very thorough, it does not do a good job at teaching how to properly do a data analysis with real data. All the data is contrived or randomly generated. Also, real data analysis will involve multiple pandas operations (sometimes dozens) strung together. You will never get exposure to this from the documentation. The documentation teaches a mechanical approach to learning pandas, where one method is learned in isolation from the others.
Your First Data Analysis
After these three sections of the documentation, you will be ready for your first exposure to real data. As mentioned previously, I recommend beginning with Kaggle datasets. You can sort by most voted to return the most popular ones such as the TMDB 5000 movie dataset. Download the data and create a new Jupyter notebook on just that dataset. It is unlikely that you will be able to do any advanced data processing at this point, but you should be able to practice what you learned in the three sections of the documentation.
Look at the Kernels
Every Kaggle dataset has a kernels section (movie dataset kernels). Don’t let the name ‘kernel’ confuse you — its just a Jupyter notebooks created by a Kaggle user in Python or R. This will be one of your best learning opportunities. After you have done some basic analysis on your own, open up one of the more popular Python kernels. Read through several of them and take pieces of code that you find interesting and insert it into your own notebook.
If you don’t understand something, ask a question in the comments section. You can actually create your own kernel, but for now, I would stick with working locally in your notebooks.
Going Back to the Documentation
Once you have finished your first kernel, you can go back to the documentation and complete another section. Here is my recommended path through the documentation:
Working with missing data
Group By: split-apply-combine
Reshaping and Pivot Tables
Merge, join, and concatenate
IO Tools (Text, CSV, HDF5, …)
Working with Text Data
Visualization
Time Series / Date functionality
Time Deltas
Categorical Data
Computational tools
MultiIndex / Advanced Indexing
This order is significantly different than the order presented on the left-hand-side of the home page of the documentation and covers the topics I think are most important first. There are several sections of the documentation that are not listed above, which you can cover on your own at a later date.
After completion of these sections of the documentation and about 10 Kaggle kernels, you should be well on your way to feeling comfortable both with the mechanics of pandas and actual data analysis.
Learning Exploratory Data Analysis
By reading many popular Kaggle kernels, you will learn quite a lot about what makes a good data analysis. For a more formal and rigorous approach, I recommend reading chapter 4, Exploratory Data Analysis of Howard Seltman’s online book.
Creating your own Kernels
You should consider creating your own kernels on Kaggle. This is an excellent way to force yourself to write clean and clear Jupyter notebooks. It is typical to create notebooks on your own that are very messy with code written out of order that would be impossible for someone else (like your future self) to make sense of. When you post a kernel online, I would suggest making it as if you expected your current or future employer to read. Write an executive summary or abstract at the top and clearly explain each block of code with markdown. What I usually do, is make one messy, exploratory notebook and an entire separate notebook as a final product. Here is a kernel from one of my students on the HR analytics dataset.
Don’t just Lean Pandas; Master it
There is a huge difference between a pandas user who knows just enough to get by and a power user who has it mastered. It is quite common for regular users of pandas to write poor code, as there is quite a substantial amount of functionality and often multiple ways to get the same result. It is quite easy to write some pandas operations that get your result, but in a highly inefficient manner.
If you are a data scientist that works with Python, you probably already use pandas frequently, so making it a priority to master it should create lots of value for you. There are lot’s of fun tricks available as well.
Test your Knowledge with Stack Overflow
You don’t really know a Python library if you cannot answer the majority of questions on it that are asked on Stack Overflow. This statement might be a little too strong, but in general, Stack Overflow provides a great testing ground for your knowledge of a particular library. There are over 50,000 questions tagged as pandas, so you have an endless test bank to build your pandas knowledge.
If you have never answered a question on Stack Overflow, I would recommend looking at older questions that already have answers and attempting to answer them by only using the documentation. After you feel like you can put together high-quality answers, I would suggest making attempts at unanswered questions. Nothing improved my pandas skills more than answering questions on Stack Overflow.
Your own Projects
Kaggle kernels are great, but eventually, you need to tackle a unique project on your own. The first step is finding data, of which there are many resources such as:
data.gov
data.world
NYC open data, Houston open data, Denver open data — most large American cities have open data portals
After finding a dataset you want to explore, continue the same process of creating a Jupyter notebook and when you have a nice final product, post it on github.
Summary
In summary, use the documentation to learn the mechanics of pandas operations and use real datasets, beginning with Kaggle kernels, to learn how to use pandas to do data analysis. Finally, test your knowledge with Stack Overflow.
Intro to Data Science Bootcamp
For a more personalized class, take my Intro to Data Science Bootcamp in:
NYC, Nov 10–18
Houston, Dec 15–23
Pandas Cookbook
Mastering pandas is possible without the use of any book or paid class. Besides the outline in this post, there are numerous other free tutorials and github repositories filled with good pandas stuff. However, if you would like more advanced recipes, with extremely detailed explanations, consider Pandas Cookbook, which was recently released a few weeks ago. I will have another post up shortly going into great detail on it.
| How to Learn Pandas | 2,972 | how-to-learn-pandas-108905ab4955 | 2018-10-27 | 2018-10-27 02:02:17 | https://medium.com/s/story/how-to-learn-pandas-108905ab4955 | false | 2,086 | Tutorials for doing exploratory data analysis and machine learning with Python | null | DunderData | null | Dunder Data | dunder-data | DATA SCIENCE,JUPYTER,MACHINE LEARNING,PYTHON,PANDAS | DunderData | Data Science | data-science | Data Science | 33,617 | Ted Petrou | Author of Pandas Cookbook and Founder of Dunder Data | cf7f60f2eeb3 | petrou.theodore | 1,322 | 1 | 20,181,104 | null | null | null | null | null | null |
|
0 | | | Cat 0 | Cat 1 | Dog 0 | Dog 1 |
|---------|-------|-------|-------|-------|
| Topic 0 | * | | | * |
| Topic 1 | | * | * | |
| | Topic 0 | Topic 1 |
|-----|---------|---------|
| Cat | 1 | 1 |
| Dog | 1 | 1 |
| | Topic 0 | Topic 1 |
|------------|---------|---------|
| Document 0 | 1 | 1 |
| Document 1 | 1 | 1 |
t0 =
(
(cat emoji with Topic 0 + beta)
/
(emoji with Topic 0 + unique emoji * beta)
)
*
(
(emoji in Document 0 with Topic 0 + alpha)
/
(emoji in Document 0 with a topic + number of topics * alpha)
) =
(
(0 + 0.01)
/
(1 + 2 * 0.01)
)
*
(
(0 + 0.5)
/
(1 + 2 * 0.5)
) = 0.0024509803921568627
t1 = ((1 + 0.01) / (2 + 2 * 0.01)) * ((1 + 0.5) / (1 + 2 * 0.5))
= 0.375
p(Cat 0 = Topic 0 | *) = t0 / (t0 + t1) = 0.006493506493506494
p(Cat 0 = Topic 1 | *) = t1 / (t0 + t1) = 0.9935064935064936
| | Cat 0 | Cat 1 | Dog 0 | Dog 1 |
|---------|-------|-------|-------|-------|
| Topic 0 | * | * | | * |
| Topic 1 | | | * | |
| | Topic 0 | Topic 1 |
|-----|---------|---------|
| Cat | 2 | 0 |
| Dog | 1 | 1 |
| | Topic 0 | Topic 1 |
|------------|---------|---------|
| Document 0 | 2 | 0 |
| Document 1 | 1 | 1 |
| | Cat 0 | Cat 1 | Dog 0 | Dog 1 |
|---------|-------|-------|-------|-------|
| Topic 0 | * | * | | |
| Topic 1 | | | * | * |
| | Topic 0 | Topic 1 |
|-----|---------|---------|
| Cat | 2 | 0 |
| Dog | 0 | 2 |
| | Topic 0 | Topic 1 |
|------------|---------|---------|
| Document 0 | 2 | 0 |
| Document 1 | 0 | 2 |
Phi row column =
(emoji row with topic column + beta)
/
(all emoji with topic column + unique emoji * beta)
Theta row column =
(emoji in document row with topic column + alpha)
/
(emoji in document row + number of topics * alpha)
| 14 | 336d898217ee | 2018-02-13 | 2018-02-13 21:44:36 | 2018-02-23 | 2018-02-23 23:32:55 | 6 | false | en | 2018-02-28 | 2018-02-28 23:13:44 | 7 | 108abf40fa7d | 9.629245 | 165 | 7 | 0 | Before we get started, I made a tool (here’s the source) that runs LDA right inside your browser (it’s pretty neat). Be sure to have that… | 5 | Your Easy Guide to Latent Dirichlet Allocation
Not too many stock photos for “Latent Dirichlet Allocation”.
Before we get started, I made a tool (here’s the source) that runs LDA right inside your browser (it’s pretty neat). Be sure to have that open as I go along. All of the emoji I use come from this page. Just select any one emoji and copy it into the input boxes. If you don’t see the emoji, try using Firefox version 50 or greater.
What is LDA?
LDA or latent Dirichlet allocation is a “generative probabilistic model” of a collection of composites made up of parts. In terms of topic modeling, the composites are documents and the parts are words and/or phrases (n-grams). But you could apply LDA to DNA and nucleotides, pizzas and toppings, molecules and atoms, employees and skills, or keyboards and crumbs.
The probabilistic topic model estimated by LDA consists of two tables (matrices). The first table describes the probability or chance of selecting a particular part when sampling a particular topic (category). The second table describes the chance of selecting a particular topic when sampling a particular document or composite.
I suddenly have a taste for bacon avocado toast.
Take the image up above for example. We have four emoji sequences (the composites) and three types of emoji (the parts). The first three documents have 10 of only one type of emoji while the last document has 10 of each.
X marks the bacon.
After we run our emoji composites through LDA, we end up with the probabilistic topic model you see above. The left table has emoji versus topics and the right table has documents versus topics. Each column in the left table and each row in the right table sums to one (allowing for some truncation and precision loss). So if I were to sample (draw an emoji out of a bag) Topic 0, I’d almost certainly get the avocado emoji. If I sampled Document 3, there’s an equal (uniform) chance I’d get either Topic 0, 1, or 2.
The LDA algorithm assumes your composites were generated like so.
Pick your unique set of parts.
Pick how many composites you want.
Pick how many parts you want per each composite (sample from a Poisson distribution).
Pick how many topics (categories) you want.
Pick a number between not zero and positive infinity and call it alpha.
Pick a number between not zero and positive infinity and call it beta.
Build the parts versus the topics table. For each column, draw a sample (spin the wheel) from a Dirichlet distribution (a distribution of distributions) using beta as the input. Each sample will fill out each column in the table, sum to one, and give the probability of each part per topic (column).
Build the composites versus the topics table. For each row, draw a sample from a Dirichlet distribution using alpha as the input. Each sample will fill out each row in the table, sum to one, and give the probability of each topic (column) per composite.
Build the actual composites. For each composite, 1) look up its row in the composites versus topics table, 2) sample a topic based on the probabilities in the row, 3) go to the parts versus topics table, 4) look up the topic sampled, 5) sample a part based on the probabilities in the column, 6) repeat from step 2 until you’ve reached how many parts this composite was set to have.
Now we know this algorithm (or generative procedure/process) is not how say articles are written but this — for better or worse — is the simplified model LDA assumes.
What is the Dirichlet distribution?
Good ol’ matplotlib.
A bit about the Dirichlet distribution before we move on. What you see up above are iterations of taking 1000 samples from a Dirichlet distribution using an increasing alpha value. The Dirichlet distribution takes a number (called alpha in most places) for each topic (or category). In the GIF and for our purposes, every topic is given the same alpha value you see displayed. Each dot represents some distribution or mixture of the three topics like (1.0, 0.0, 0.0) or (0.4, 0.3, 0.3). Remember that each sample has to add up to one.
At low alpha values (less than one), most of the topic distribution samples are in the corners (near the topics). For really low alpha values, it’s likely you’ll end up sampling (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), or (0.0, 0.0, 1.0). This would mean that a document would only ever have one topic if we were building a three topic probabilistic topic model from scratch.
At alpha equal to one, any space on the surface of the triangle (2-simplex) is fair game (uniformly distributed). You could equally likely end up with a sample favoring only one topic, a sample that gives an even mixture of all the topics, or something in between.
For alpha values greater than one, the samples start to congregate to the center. This means that as alpha gets bigger, your samples will more likely be uniform or an even mixture of all the topics.
The GIF demonstrates the sampling of topic mixtures for the documents but the Dirichlet distribution is also assumed the source (the Bayesian prior) for the mixture of parts per topic. Three topics were used as that works well when plotting in three dimensions but typically one shoots for more than three topics depending on the number of documents they have.
If you look at the tool mentioned earlier, you’ll see an alpha and beta slider. These two hyperparameters are required by LDA. The alpha controls the mixture of topics for any given document. Turn it down and the documents will likely have less of a mixture of topics. Turn it up and the documents will likely have more of a mixture of topics. The beta hyperparameter controls the distribution of words per topic. Turn it down and the topics will likely have less words. Turn it up and the topics will likely have more words.
Ideally we want our composites to be made up of only a few topics and our parts to belong to only some of the topics. With this in mind, alpha and beta are typically set below one.
Why use LDA?
If you view the number of topics as number of clusters and the probabilities as the proportion of cluster membership then using LDA is a way of soft clustering your composites and parts. Contrast this with say k-means where each entity can only belong to one cluster. These fuzzy memberships provide a more nuanced way of recommending similar items, finding duplicates, or discovering user profiles/personas.
You could analyze every GitHub repository’s topics/tags and infer themes like native desktop client, back-end web service, single-paged app, or flappy bird clone.
If you choose the number of topics to be less than the documents, using LDA is a way of reducing the dimensionality (the number of rows and columns) of the original composite versus part data set. With the documents now mapped to a lower dimensional latent/hidden topic/category space, you can now apply other machine learning algorithms which will benefit from the smaller number of dimensions. For example, you could run your documents through LDA and then hard cluster them using DBSCAN.
Of course the main reason you’d use latent Dirichlet allocation is to uncover the themes lurking in your data. By using LDA on pizza orders, you might infer pizza topping themes like spicy, salty, savory, and sweet. You could analyze every GitHub repository’s topics/tags and infer themes like native desktop client, back-end web service, single paged app, or flappy bird clone.
How does latent Dirichlet allocation work?
There are a few ways of implementing LDA. Still, like most — if not all — machine learning algorithms, it comes down to estimating one or more parameters. For LDA those parameters are phi and theta (sometimes they’re called something else). Phi is the parts versus topics matrix (or topics versus parts) and theta is the composites versus topics matrix.
Are you more a cat or dog person?
To learn how it works, I’ll make this easy and step through a concrete example. The documents and emoji are shown in the image above. Our hyperparameters are alpha 0.5, beta 0.01, topics 2, and iterations 1. The following manual run through is based on the paper Probabilistic Topic Models, M Steyvers, T Griffiths, Handbook of latent semantic analysis 427 (7), 424–440.
Instead of estimating phi and theta directly, we will estimate the topic assignments for each of the four emoji using the Gibbs sampling algorithm. Once we have the estimated topic assignments, we can then estimate phi and theta.
To start, we need to randomly assign a topic to each emoji. Using a fair coin (sampling from a uniform distribution), we assign to the first cat Topic 0, the second cat Topic 1, the first dog Topic 1, and the second dog Topic 0.
This is our current topic assignment per each emoji.
This is our current emoji versus topic counts.
This our current document versus topic counts.
Now we need to update the topic assignment for the first cat. We subtract one from the emoji versus topic counts for Cat 0, subtract one from the document versus topic counts for Cat 0, calculate the probability of Topic 0 and 1 for Cat 0, flip a biased coin (sample from a categorical distribution), and then update the assignment and counts.
After flipping the biased coin, we surprisingly get the same Topic 0 for Cat 0 so our tables before updating Cat 0 remain the same.
Next we do for Cat 1 what we did for Cat 0. After the flipping the biased coin, we get Topic 0 so now our tables look like so.
This is our current topic assignment per each emoji.
This is our current emoji versus topic counts.
This our current document versus topic counts.
What we did for the two cat emoji we now do for the dog emoji. After flipping the biased coins, we end up assigning Topic 1 to Dog 0 and Topic 1 to Dog 1.
This is our current topic assignment per each emoji.
This is our current emoji versus topic counts.
This our current document versus topic counts.
Since we’ve completed one iteration, we are now done with the Gibbs sampling algorithm and we have our topic assignment estimates.
To estimate phi, we use the following equation for each row-column cell in the emoji versus topic count matrix.
And for estimating theta we use the following equation for each row-column cell in the document versus topic count matrix.
Stylish styling brought to you by Bulma.
At long last we are done. The image above contains the estimated phi and theta parameters of our probabilistic topic model.
If you enjoyed this article you might also like Boost your skills, how to easily write K-Means, k-Nearest Neighbors from Scratch, and Let’s make a Linear Regression Calculator with PureScript.
How have you used LDA? Do you prefer Gensim or MALLET? How cool would it be if Elasticsearch did topic modeling out of the box? Let me know in the responses below. Thanks!
| Your Easy Guide to Latent Dirichlet Allocation | 963 | how-does-lda-work-ill-explain-using-emoji-108abf40fa7d | 2018-06-20 | 2018-06-20 15:57:02 | https://medium.com/s/story/how-does-lda-work-ill-explain-using-emoji-108abf40fa7d | false | 2,300 | Stories worth reading about programming and technology from our open source community. | medium.freecodecamp.org | freecodecamp | null | freeCodeCamp.org | free-code-camp | TECHNOLOGY,DESIGN,TECH,STARTUP,PRODUCTIVITY | freecodecamp | Machine Learning | machine-learning | Machine Learning | 51,320 | Lettier | https://lettier.com | dd4eda19e229 | lettier | 96 | 97 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-06-27 | 2018-06-27 23:15:28 | 2018-06-28 | 2018-06-28 02:15:29 | 1 | true | en | 2018-08-03 | 2018-08-03 01:06:21 | 0 | 108ad1a610bd | 3.045283 | 0 | 0 | 0 | Yet people are still convinced otherwise | 5 | Those Things That Don’t Really Work
Yet people are still convinced otherwise
The Classic Rube Goldberg Machine
Colour me skeptical. I shouldn’t have to even write this. This should be common sense after a 21st century education. No-one in the entire world can show that any of the following things really work. Yet if you post against any of these subjects on social media there is a strong chance you will “trigger” the audience into a frenzy of snide comments and insults. As though the very notion of scientific reality is offensive.
One of these days I’ll write an article that lists some of the more common forms of verbal abuse the so called “mainstream” now deal in. For now please enjoy these apparently controversial ideas…that are not really controversial at all. If you understand how scientific capitalism really works then none of this should be a surprise to you.
These things DON’T WORK and it’s really expensive to pretend they do.
Artificial Intelligence — because it’s a perpetual motion machine. Automatic production is not possible as it requires the creative capacity of original thought firmly rooted in real world feedback to recognize anthropic opportunities. AI exists purely within a simulation and production suffers accordingly.
Ideologies — because ideologues can’t capitalize on real time opportunities and spend too much conforming to yesterday’s standards. The harshness of “fitting” conditions to framework without the adaptive quality of science causes error reducing competitiveness below sustainable thresholds.
Magic — because energy can neither be created nor destroyed. Leveraging little known physical properties as an entertaining illusion is the best it can do. Conversion of one thing into another requires proportional energy. Any sufficiently advanced technology is said to be indistinguishable from Magic. Not true. Technology is measurable and like all real world artifacts it consumes appropriate resources where magic claims it does not.
Prognostication — because it is effect without cause. Return without investment. Progress without progression. It would not only undermine the production of every other human being in the universe but would also rob from every other natural process in the cosmos. If even the smallest creature could see the future it would break the entire game by short circuiting evolution quickly starving all other life through unearned resource appropriation.
Simulation Theory — it’s a pyramid scheme. There is no limiting principle to avoid run away inflationary expense as the layers compound endlessly claiming principal returns as though they were operating within natural opportunities not possible within simulations. The opportunity in a simulation disconnected from real physics… being artificial and distant from substantive data reflecting the real world… always costs more than it returns and only by pretending the simulation has no cost does this appear otherwise. Another illusion.
Socialism — because there is no such thing as a free lunch. Compound interest on debt always erases the benefits as socialism encourages waste in its distribution of other people’s resources by turning producers into consumers dependent on government services. A sort of simulated economy with all the failures of any simulation. It seems stable only when the costs are hidden by making individuals pay for governments mistakes.
Supply Management — because the incentives to serve special interests always redirects profits from their natural destination by establishing bureaucratic supremacy over private production. Selling market access to political friends by passing regulations to control them cripples the competitiveness of any economy by drawing it further away from new opportunities in order to protect the favoured elite. It’s robbing the rest of us of the resources we would have naturally earned and then used to further produce.
Time Travel — violates order of operations, can’t co-exist with the concept of momentum, and undermines past investment. A traveler’s very presence in the past would “add energy” to that time/place violating the conservation of energy principal and ultimately creating a paradox where they could never have existed to go back in time in the first place. Best enjoyed in movies like Back To The Future. It’s science fiction.
I really hope this helps some of you avoid most of the rabbit holes of delusion that so effectively reduce profits. It’s a truly harmful thing that so many pseudo scientific types entertain these delusions to attract investment and clicks. The closer our conception of reality matches the true reality, the more productive we become. The less it costs to produce more as science informs our decisions with ever improving precision. It’s time to think in high resolution. I know it’s harder, but it pays dividends.
| Those Things That Don’t Really Work | 0 | those-things-that-dont-really-work-108ad1a610bd | 2018-10-22 | 2018-10-22 22:51:16 | https://medium.com/s/story/those-things-that-dont-really-work-108ad1a610bd | false | 754 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Anthony Jon Mountjoy | Programmer/Publisher | Computer Scientist with over 20 years professional experience. Trying to grow a really big beard. | 65fc1266eb1b | AnthonyMountjoy | 293 | 93 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-08-24 | 2018-08-24 13:56:59 | 2018-08-24 | 2018-08-24 13:59:24 | 1 | false | en | 2018-08-24 | 2018-08-24 13:59:24 | 18 | 108eed52ae9c | 5.954717 | 0 | 0 | 0 | At law school, one of my professors once asked “Assume you are a sovereign king. You have power to write and rewrite the law at will. What… | 5 | Artificial Intelligence, Rules of Origins and the Lemons Problem
At law school, one of my professors once asked “Assume you are a sovereign king. You have power to write and rewrite the law at will. What do you do?”
This was my first encounter with “normative” law. Unsophisticated scholars like me define this as what the law should be. This is distinct from exercises in “positive” law where the point is to elucidate, and explain, what the law is (the bread, butter … and caviar of legal practitioners).
Why the legalese?
Because a real life exercise in normative law is today before us. With the rise of Artificial Intelligence in society (“AI”), we face a hard normative question: what laws should govern AIs and the machines that embody them?
There are myriad ways to discuss this. The lazy me likes to think that two approaches really stand out. The first starts from the black letter law. It conjectures points of friction in the legal system when AI applications are rolled out. Common examples of this include questions such as: is a factory floor AI-powered robot a labor force worker or a piece of capital? Can a robot-worker be party a labor contract? Can a robot own IP over inventions discovered on the job? Can it congregate with other robots, form a trade union and conclude collective bargaining agreements? Can a robot worker be owned, taxed, sold, or fired?
The upside of the black letter law approach is obvious. It gives us a stepwise method to think about normative AI law. All laws are indeed “object” oriented (or “formalistic”). For example, they refer to a “contract”. This concept is itself defined as a convention concluded between “parties”. And parties are either “individuals” or social or economic “groups” like corporations, NGOs or nation states. The black letter law approach is thus a good compass to methodically discover legal gaps and redundancies as well as conceptual rigidities and interpretive inflexibilities. This is especially true in civil law and continental legal systems, where the law is more often specified through structured “rules” rather than pure “standards” that define abstract goals (like fairness, wellbeing, dignity, etc.).
But the black letter law approach has a downside. It does not catch emergent AI behavior. To take a fictional example — no Westworld spoiler intended — what if several factory AI workers start a non-human language based conversation, and conspire to organize a socialist revolution? We have no black letter laws with specified objects designed to proximately or remotely regulate non-human language communications on the workplace. And general principles of law do not give us a ready-made answer to the normative question: should we prohibit this?
This is where the second approach helps. Here, the idea is to operate just below the skin of black letter law. The point is to explore the entire legal system in search of design patterns and transversal properties of text and case law. To use a computer science metaphor, we reverse engineer the first principles underpinning the graphic user interface of legal prescriptions. Those properties often give information on the big idea pursued by lawmakers, and help overcome the ambiguity of the explicit goals often loosely and superfluously expressed in the boilerplate provisions of every and any judgment or preamble of a statutory instrument. To come back to our fictional example, we can observe that our laws accept the production and commercialization of machines like computers, tablets and cellphones which use binary machine code that most humans are mostly unable to read. We can thus derive from this that such communications are a priori lawful, and that it is not necessary for all of us citizens to understand them. At the same time, unlike binary code which is only readable by a few, the way AIs communicate with each other may be incomprehensible for everyone. Does this difference require stricter laws? The answer to that question is not obvious, but the second approach brings you one step closer to it.
Once the basic properties of our laws are mapped out, we can question whether AI idiosyncrasies require additional law creation (upholding those social choices in new legal instruments).[1]
An illustration of this exercise relates to “rules of origins”. In our legal systems, a whole host of rules effectuate a first order social demand to know the “who”, “where”, “how” or “why” behind an output. Obvious examples are the rules on mandatory labelling of food products (eg, GMOs), locational requirements like the “made in China”, or the GDPR duty to disclose “automated decision making”.
IMHO, AI augurs a promising future for “rules of origins”. In an AI centric world, the demand for man-made outputs will grow, due to individuals’ valuation for craft, trust, legibility and projection.[2] We actually know that economic agents display such preferences for a long time. Since the 1970s, the predominance of films featuring human actors over “computer generated imagery” animated movies is a case in point.
At the same time, the supply of AI outputs will expand in a broad array of areas from journalism to recreational arts, from the pricing of retail goods on e-commerce platforms to troubleshooting call centers.
To date, few of those domains are covered by rules of origins that prevent opportunistic suppliers to fool users into the belief that they are buying man-made products. They should. Economics 1.01 tell us why. Unless enforceable rules of origins are adopted, markets will not generate clear price signals that differentiate man from machine made outputs. We will end up with a “lemons” problem.[3] When there is imperfect information, potential buyers of “high price” man made products will discount their maximum purchasing valuation by a discrete amount to internalize the risk of being sold “low price” machine made products. Say man-made books are worth 20€ and machine made ones 10€. If buyers believe there is a 50% probability that the book has been written by a machine, the market equilibrium price will be 15€. The upshot is this: no publisher of man-made books will come to this market. By contrast, suppliers of machine made books will make a killing. In the end, lemons problem of this kind inefficiently discourage the production of man-made outputs. Black mirror conjecture here: the end game could be human joblessness.
The good news is: the legal system is here to remedy market failures of this kind. Private or public ordering institutions can design rules of origins that promote the provision of optimal information on markets. A man v machine made label is an obvious example. But more specific rules of origins may have to be invented in situations where users value more accurate information on the particular AI technique or dataset employed.[4] Similarly, when hybrid outputs are concerned, buyers may display different reservation prices depending on the output’s man v AI mix.[5] Further, to assist consumer choice, sellers of AI-made outputs could be required to provide information on the next best competing man-made alternative. At the extreme, quotas, tariffs or other quantitative restrictions on AI made outputs may be necessary to maintain a reference man-made product price cap in the market place.
Of course, markets (and AI) are perfectly apt to generate such information practices. But they are not a stable equilibrium due to collective action problems. In competitive markets, profit-maximizing suppliers have incentives to cheat on soft commitments. Enforceable contracts and property rights, industry or Government-led standardization or regulatory intervention are thus needed to keep everyone on the line. In California, a law was introduced that makes unlawful for a person to use a bot to communicate or interact with another person without disclosure.
Even more importantly than market place efficiency, rules of origin matter in areas where States provide public goods to society. Think about the justice system. Hearings are in principle public for a reason: seeing judges and juries decide cases gives us knowledge of the who, how, why and where justice has been handed down. In other words, justice is kept under close eyes. Is this still true with automated justice? When law enforcement is embedded in computer code, and judicial decisions delegated to unfathomable deep learning processes, natural and legal subjects lose understanding of the origination of justice. Arbitrariness, or the perception thereof, is the outcome.[6] Legitimacy is the loser.
[1] Assuming invariance in collective preferences.
[2] As economist Mogstad recently said, automation “may very well create demand for service with a personal touch”. See https://www.wsj.com/articles/short-of-workers-fast-food-restaurants-turn-to-robots-1529868693
[3] Akerlof, George A. (1970). “The Market for ‘Lemons’: Quality Uncertainty and the Market Mechanism”. Quarterly Journal of Economics. The MIT Press. 84 (3): 488–500.
[4] A left handed person may prefer to train with a virtual tennis coach that has used mostly training data from left wing ATP tour players.
[5] Customers’ utility function may change drastically depending on whether there’s a human pilot in the plane.
[6] Though I fully acknowledge that use of AI assistants in the justice system help correct existing and documented biases of man-made justice.
| Artificial Intelligence, Rules of Origins and the Lemons Problem | 0 | artificial-intelligence-rules-of-origins-and-the-lemons-problem-108eed52ae9c | 2018-08-29 | 2018-08-29 14:23:44 | https://medium.com/s/story/artificial-intelligence-rules-of-origins-and-the-lemons-problem-108eed52ae9c | false | 1,525 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Nicolas Petit | Prof Uni Liege, Belgium and UniSA, Australia. Visiting Scholar @Stanford Uni Hoover Institution. All things tech, antitrust, law and economics. | 70cea85a6fcf | CompetitionProf | 141 | 48 | 20,181,104 | null | null | null | null | null | null |
0 | import warnings, os, sys, numpy as np, pandas as pd, time, datetime
warnings.filterwarnings('ignore')
print("\n\n")
%load_ext rpy2.ipython
print("\n\n")
The rpy2.ipython extension is already loaded. To reload it, use:
%reload_ext rpy2.ipython
import rpy2
import rpy2.robjects.numpy2ri
import rpy2.robjects as robjects
from robjects import r, pandas2ri
pandas2ri.activate()
print("\n\n")
pr2 = lambda : print("\n\n")
tally = lambda df:range(len(df))
pr2()
dir = "c:/bigdata/raw/chicago"
os.chdir(dir)
cname = "communities.csv"
crname = "chicagocrime.csv"
fname = "fbicode.csv"
MISSING = "-999.999-"
pr2()
colnames = open(cname, 'r').readline().split(",")
communities = pd.read_csv(cname, header=0,skiprows=1,dtype={'GeogKey':np.float64})
communities.columns = [c.replace("\n",'').replace('*','').replace(' ', '').lower() for c in colnames]
communities = communities[['geogkeyx', 'geogname','p0050001']]
communities.columns = ['communityarea', 'name','population']
print(communities.info())
pr2()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 77 entries, 0 to 76
Data columns (total 3 columns):
communityarea 77 non-null float64
name 77 non-null object
population 77 non-null int64
dtypes: float64(1), int64(1), object(1)
memory usage: 1.9+ KB
None
fbicode = pd.read_csv(fname)
print(fbicode.info())
pr2()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 26 entries, 0 to 25
Data columns (total 2 columns):
fbicode 26 non-null object
fbicodedesc 26 non-null object
dtypes: object(2)
memory usage: 496.0+ bytes
None
start = time.time()
hdr = pd.read_csv(crname,nrows=0,header=0)
nmes = [h.replace('*','').replace(' ', '').lower() for h in hdr.columns]
vars = ['date','communityarea','fbicode','latitude','longitude']
crime = pd.read_csv(crname,header=None,names=nmes,usecols=vars,skiprows=1,
dtype={'communityarea':np.float64},index_col=False)
crime.date = pd.to_datetime(crime.date,format='%m/%d/%Y %H:%M:%S %p')
crimeplus = pd.merge(pd.merge(crime[vars], communities, how='left',
on='communityarea'),fbicode,how='left',on='fbicode')
"""
crimeplus.fbicode = pd.Categorical(crimeplus.fbicode)
crimeplus.fbicodedesc = pd.Categorical(crimeplus.fbicodedesc)
crimeplus.communityarea = pd.Categorical(crimeplus.communityarea)
crimeplus.name = pd.Categorical(crimeplus.name)
crimeplus.fbicode = crimeplus.fbicode.cat.add_categories(MISSING)
crimeplus.fbicodedesc = crimeplus.fbicodedesc.cat.add_categories(MISSING)
crimeplus.communityarea = crimeplus.communityarea.cat.add_categories(MISSING)
crimeplus.name = crimeplus.name.cat.add_categories(MISSING)
"""
null = crimeplus.columns[crimeplus.isnull().any()]
print(null,"\n")
end = time.time()
print(end-start,"\n")
print (crimeplus.info(),"\n")
pr2()
Index(['communityarea', 'latitude', 'longitude', 'name', 'population'], dtype='object')
45.03502345085144
<class 'pandas.core.frame.DataFrame'>
Int64Index: 6599283 entries, 0 to 6599282
Data columns (total 8 columns):
date datetime64[ns]
communityarea float64
fbicode object
latitude float64
longitude float64
name object
population float64
fbicodedesc object
dtypes: datetime64[ns](1), float64(4), object(3)
memory usage: 453.1+ MB
None
homicide = set(["01A"])
violentcrime = set(["01A","01B","02","03","04A","04B"])
propertycrime = set(["05","06","07","09"])
indexcrime = set(['01A','01B','02','03','04A','04B','05','06','07','09'])
pr2()
freqs = crimeplus.communityarea.value_counts(dropna=False)
print(freqs.head(),"\n")
print(crimeplus.communityarea.isnull().sum())
pr2()
NaN 616029
25.0 382409
8.0 201625
43.0 195319
23.0 189500
Name: communityarea, dtype: int64
616029
fvar = ['communityarea','fbicodedesc']
freqss = crimeplus[fvar].copy().astype(str).groupby(fvar).size().reset_index()
freqss.columns = fvar + ['frequency']
freqss.communityarea = freqss.communityarea.astype(np.float64)
freqss.sort_values('frequency',ascending=False, inplace=True)
freqss.index = tally(freqss)
print(freqss.head(),"\n")
print(freqss.frequency[freqss.communityarea.isnull()].sum(),"\n")
print(freqss.info(),"\n")
pr2()
communityarea fbicodedesc frequency
0 NaN Larceny 123083
1 NaN Simple Battery 98329
2 8.0 Larceny 83536
3 25.0 Drug Abuse 82085
4 32.0 Larceny 71827
616029
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1981 entries, 0 to 1980
Data columns (total 3 columns):
communityarea 1956 non-null float64
fbicodedesc 1981 non-null object
frequency 1981 non-null int64
dtypes: float64(1), int64(1), object(1)
memory usage: 46.5+ KB
None
fvar = ['communityarea','fbicodedesc']
freqsf = crimeplus[fvar].fillna(MISSING).groupby(fvar).size().reset_index().replace(MISSING,np.NaN)
freqsf.columns = fvar + ['frequency']
freqsf.sort_values('frequency',ascending=False,inplace=True)
freqsf.index = tally(freqsf)
print(freqsf.head(),"\n")
print(freqsf.frequency[freqsf.communityarea.isnull()].sum(),"\n")
print(freqsf.info())
pr2()
communityarea fbicodedesc frequency
0 NaN Larceny 123083
1 NaN Simple Battery 98329
2 8.0 Larceny 83536
3 25.0 Drug Abuse 82085
4 32.0 Larceny 71827
616029
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1981 entries, 0 to 1980
Data columns (total 3 columns):
communityarea 1956 non-null float64
fbicodedesc 1981 non-null object
frequency 1981 non-null int64
dtypes: float64(1), int64(1), object(1)
memory usage: 46.5+ KB
None
def frequenciesf(df,fvar):
freqs = df[fvar].fillna(MISSING).groupby(fvar).size().reset_index().replace(MISSING,np.NaN)
freqs.columns = fvar + ['frequency']
N = freqs.frequency.sum()
freqs['percent'] = 100.0*freqs.frequency.astype(float)/N
freqs.sort_values(['frequency']+fvar, ascending=False, inplace=True)
freqs.index = tally(freqs)
return(freqs)
pr2()
freqs = frequenciesf(crimeplus,['communityarea'])
print(freqs.head(),"\n")
print(crimeplus.communityarea.isna().sum())
pr2()
communityarea frequency percent
0 NaN 616029 9.334787
1 25.0 382409 5.794705
2 8.0 201625 3.055256
3 43.0 195319 2.959700
4 23.0 189500 2.871524
616029
fvar = ["communityarea",'population',"fbicodedesc"]
start = time.time()
freqsf = frequenciesf(crimeplus,fvar)
end = time.time()
print(end-start,"\n")
print(freqsf.head())
pr2()
4.537893533706665
communityarea population fbicodedesc frequency percent
0 NaN NaN Larceny 123083 1.865097
1 NaN NaN Simple Battery 98329 1.489995
2 8.0 80484.0 Larceny 83536 1.265834
3 25.0 98514.0 Drug Abuse 82085 1.243847
4 32.0 29283.0 Larceny 71827 1.088406
myear = crimeplus.date.dt.year.max()
numdays = crimeplus[crimeplus.date.dt.year==myear].date.dt.dayofyear.max()
maxdt = crimeplus.date.dt.date.max()
print(maxdt)
print(numdays)
crimeplus['year'] = crimeplus.date.dt.year
fvar = ['year']
freqs = frequenciesf(crimeplus[(crimeplus.fbicode.isin(homicide)) &
(crimeplus.date.dt.dayofyear<=numdays)],fvar)
print(freqs,"\n")
print(freqs.info())
crimeplus.drop(['year'], axis=1, inplace=True)
pr2()
2018-05-08
128
year frequency percent
0 2017 201 7.576329
1 2016 201 7.576329
2 2003 189 7.124011
3 2001 183 6.897851
4 2002 175 6.596306
5 2012 167 6.294761
6 2018 162 6.106295
7 2008 145 5.465511
8 2004 142 5.352431
9 2007 135 5.088579
10 2015 128 4.824727
11 2010 128 4.824727
12 2005 127 4.787034
13 2006 125 4.711647
14 2009 123 4.636261
15 2011 110 4.146250
16 2014 109 4.108556
17 2013 103 3.882397
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 18 entries, 0 to 17
Data columns (total 3 columns):
year 18 non-null int64
frequency 18 non-null int64
percent 18 non-null float64
dtypes: float64(1), int64(2)
memory usage: 512.0 bytes
None
fvar = ['isna_communityarea','isna_longitude']
crimeplus[fvar[0]] = crimeplus.communityarea.isnull()
crimeplus[fvar[1]] = crimeplus.longitude.isnull()
print(crimeplus['isna_communityarea'].sum(),"\n")
freqs = frequenciesf(crimeplus,fvar)
print(freqs,"\n")
print(freqs.info())
crimeplus.drop(fvar, axis=1, inplace=True)
pr2()
616029
isna_communityarea isna_longitude frequency percent
0 False False 5934460 89.925830
1 True False 606670 9.192968
2 False True 48794 0.739383
3 True True 9359 0.141818
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 4 entries, 0 to 3
Data columns (total 4 columns):
isna_communityarea 4 non-null bool
isna_longitude 4 non-null bool
frequency 4 non-null int64
percent 4 non-null float64
dtypes: bool(2), float64(1), int64(1)
memory usage: 152.0 bytes
None
fvar = ['year','month']
crimeplus[fvar[0]] = crimeplus.date.dt.year
crimeplus[fvar[1]] = crimeplus.date.dt.month
freqs = frequenciesf(crimeplus[crimeplus.fbicode.isin(homicide)],fvar)
print(freqs.head(10),"\n")
print(freqs.info())
crimeplus.drop(fvar, axis=1, inplace=True)
pr2()
year month frequency percent
0 2016 8 95 1.050420
1 2017 6 86 0.950907
2 2016 10 82 0.906678
3 2016 11 79 0.873507
4 2016 6 78 0.862450
5 2001 7 78 0.862450
6 2002 8 77 0.851393
7 2017 7 76 0.840336
8 2001 10 71 0.785051
9 2001 9 71 0.785051
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 209 entries, 0 to 208
Data columns (total 4 columns):
year 209 non-null int64
month 209 non-null int64
frequency 209 non-null int64
percent 209 non-null float64
dtypes: float64(1), int64(3)
memory usage: 6.6 KB
None
myear = crimeplus.date.dt.year.max()
crimeplus['year'] = crimeplus.date.dt.year
fvar = ['year']
freqs = frequenciesf(crimeplus[(crimeplus.fbicode.isin(homicide))
& (crimeplus.year<myear)],fvar)
print(freqs)
crimeplus.drop(['year'], axis=1, inplace=True)
rfreqs = pandas2ri.py2ri(freqs)
pr2()
year frequency percent
0 2016 778 8.759288
1 2017 669 7.532087
2 2001 667 7.509570
3 2002 656 7.385724
4 2003 601 6.766494
5 2008 513 5.775726
6 2012 504 5.674398
7 2015 493 5.550552
8 2006 471 5.302860
9 2009 460 5.179014
10 2004 453 5.100203
11 2005 451 5.077685
12 2007 447 5.032650
13 2010 438 4.931322
14 2011 436 4.908804
15 2014 424 4.773700
16 2013 421 4.739923
%R require(ggplot2); require(tidyr); require(data.table); require(RColorBrewer);require(R.utils)
array([1], dtype=int32)
%%R -w700 -h500 -i rfreqs
murders <- data.table(year=as.integer(as.character(rfreqs$year)),count=rfreqs$frequency)#[year<2018]
pal <- brewer.pal(9,"Blues")
ggplot(murders,aes(x=year,y=count,col=pal[7])) +
geom_point() +
geom_line() +
theme(legend.position = "none", plot.background = element_rect(fill = pal[2]),
panel.background = element_rect(fill = pal[2])) +
ylim(0,max(murders$count)*1.25) +
theme(axis.text.x = element_text(size=7, angle = 45)) +
theme(legend.position="none") +
scale_color_manual(values=pal[7]) +
scale_x_continuous(breaks=sort(unique(murders$year))) +
theme(plot.title = element_text (face="bold",size=12)) +
labs(title=paste("Annual Chicago Homicides, ",min(murders$year), " to ", max(murders$year),"\n",sep=""),x="Year",y="#Homicides\n")
myear = crimeplus.date.dt.year.max()
crimeplus['year'] = crimeplus.date.dt.year
fvar = ['year']
freqs = frequenciesf(crimeplus[(crimeplus.fbicode.isin(violentcrime))
& (crimeplus.year<myear)],fvar)
print(freqs)
crimeplus.drop(['year'], axis=1, inplace=True)
rfreqs = pandas2ri.py2ri(freqs)
pr2()
year frequency percent
0 2001 45531 8.130202
1 2002 44296 7.909675
2 2003 39755 7.098816
3 2004 37213 6.644906
4 2005 36466 6.511518
5 2008 36003 6.428843
6 2006 35815 6.395273
7 2007 35220 6.289027
8 2009 34203 6.107428
9 2010 30967 5.529594
10 2011 29620 5.289068
11 2012 28438 5.078006
12 2016 28203 5.036043
13 2017 27976 4.995509
14 2013 24552 4.384106
15 2015 23164 4.136259
16 2014 22601 4.035727
%%R -w700 -h500 -i rfreqs
violent <- data.frame(year=as.integer(as.character(rfreqs$year)),count=rfreqs$frequency)#[year<2018]
pal <- brewer.pal(9,"Blues")
ggplot(violent,aes(x=year,y=count,col=pal[7])) +
geom_point() +
geom_line() +
theme(legend.position = "none", plot.background = element_rect(fill = pal[2]),
panel.background = element_rect(fill = pal[2])) +
ylim(0,max(violent$count)*1.25) +
theme(axis.text.x = element_text(size=7, angle = 45)) +
theme(legend.position="none") +
scale_color_manual(values=pal[7]) +
scale_x_continuous(breaks=sort(unique(violent$year))) +
theme(plot.title = element_text (face="bold",size=12)) +
labs(title=paste("Annual Chicago Violent Crime, ",min(violent$year), " to ", max(violent$year),"\n",sep=""),
x="Year",y="# Violent Crimes\n")
myear = crimeplus.date.dt.year.max()
numdays = crimeplus[crimeplus.date.dt.year==myear].date.dt.dayofyear.max()
print(numdays)
crimeplus['year'] = crimeplus.date.dt.year
fvar = ['year']
vars = ['year','frequency']
freqs1 = frequenciesf(crimeplus[(crimeplus.fbicode.isin(homicide)) &
(crimeplus.date.dt.dayofyear<=numdays)],fvar)[vars].sort_values(['year'])
freqs2 = frequenciesf(crimeplus[crimeplus.fbicode.isin(homicide)],fvar)[vars].sort_values(['year'])
freqs1.columns = ['year','freq']
freqs2.columns = ['year','freq']
freqs1['which'] = "s"
freqs2['which'] = "a"
freqs3 = freqs1.append(freqs2)
mxyear = freqs3.year.max()
freqs3.freq[(freqs3.year==mxyear) & (freqs3.which=='a')] = np.NaN
freqs3.index = tally(freqs3)
print(freqs3.head(),"\n")
print(freqs3.tail(),"\n")
crimeplus.drop(['year'], axis=1, inplace=True)
rfreqs3 = pandas2ri.py2ri(freqs3)
pr2()
128
year freq which
0 2001 183.0 s
1 2002 175.0 s
2 2003 189.0 s
3 2004 142.0 s
4 2005 127.0 s
year freq which
31 2014 424.0 a
32 2015 493.0 a
33 2016 778.0 a
34 2017 669.0 a
35 2018 NaN a
%%R -w700 -h500 -i rfreqs3
pal <- brewer.pal(9,"Blues")
ggplot(rfreqs3,aes(x=year,y=freq,col=which)) +
geom_point() +
geom_line() +
theme(legend.position = "bottom", plot.background = element_rect(fill = pal[2]),
panel.background = element_rect(fill = pal[2])) +
ylim(0,max(rfreqs3$freq)*1.25) +
scale_color_manual(values=pal[c(9,5)]) +
theme(axis.text.x = element_text(size=7, angle = 45)) +
scale_x_continuous(breaks=sort(unique(rfreqs$year))) +
labs(title="Chicago Homicide", subtitle="2001-2018\n",
x="Year", y="# Homicides", color="Year/SoFar")
myear = crimeplus.date.dt.year.max()
numdays = crimeplus[crimeplus.date.dt.year==myear].date.dt.dayofyear.max()
print(numdays)
crimeplus['year'] = crimeplus.date.dt.year
fvar = ['year']
vars = ['year','frequency']
freqs1 = frequenciesf(crimeplus[(crimeplus.fbicode.isin(violentcrime)) &
(crimeplus.date.dt.dayofyear<=numdays)],fvar)[vars].sort_values(['year'])
freqs2 = frequenciesf(crimeplus[crimeplus.fbicode.isin(violentcrime)],fvar)[vars].sort_values(['year'])
freqs1.columns = ['year','freq']
freqs2.columns = ['year','freq']
freqs1['which'] = "s"
freqs2['which'] = "a"
freqs3 = freqs1.append(freqs2)#.reset_index()
freqs3.index = tally(freqs3)
mxyear = freqs3.year.max()
freqs3.freq[(freqs3.year==mxyear) & (freqs3.which=='a')] = np.NaN
print(freqs3.head())
crimeplus.drop(['year'], axis=1, inplace=True)
rfreqs3 = pandas2ri.py2ri(freqs3)
pr2()
128
year freq which
0 2001 14377.0 s
1 2002 13779.0 s
2 2003 12322.0 s
3 2004 11744.0 s
4 2005 11259.0 s
%%R -w700 -h500 -i rfreqs3
pal <- brewer.pal(9,"Blues")
ggplot(rfreqs3,aes(x=year,y=freq,col=which)) +
geom_point() +
geom_line() +
theme(legend.position = "bottom", plot.background = element_rect(fill = pal[2]),
panel.background = element_rect(fill = pal[2])) +
ylim(0,max(rfreqs3$freq)*1.25) +
scale_color_manual(values=pal[c(9,5)]) +
theme(axis.text.x = element_text(size=7, angle = 45)) +
scale_x_continuous(breaks=sort(unique(rfreqs$year))) +
labs(title="Chicago Violent Crime", subtitle="2001-2018\n",
x="Year", y="# Violent Crimes", color="Year/SoFar")
| 168 | bf382d623e3a | 2018-06-07 | 2018-06-07 11:24:37 | 2018-06-07 | 2018-06-07 11:26:27 | 1 | false | en | 2018-06-07 | 2018-06-07 11:26:27 | 4 | 109094caaf72 | 9.071698 | 0 | 0 | 0 | Of course, R and Python are the two current language leaders for data science computing, while Pandas is to Python as data.table and… | 5 | Frequencies in Pandas — and a Little R Magic for Python
Of course, R and Python are the two current language leaders for data science computing, while Pandas is to Python as data.table and tidyverse are to R for data management: everything.
So I took on the challenge of extending the work I’d started in Pandas to replicate the frequencies functionality I’d developed in R. I was able to demonstrate to my satisfaction how it might be done, but not before running into several pitfalls.
Pandas is quite the comprehensive library, aiming “to be the fundamental high-level building block for doing practical, real world data analysis in Python.” I think it succeeds, providing highly-optimized structures for efficiently managing/analyzing data. The primary Pandas data structures are the series and the dataframe; the Pandas developer mainly uses core Python to manage these structures.
Pandas provides a procedure, value_counts(), to output frequencies from a series or a single dataframe column. To include null or NA values, the programmer designates dropna=False in the function call.
Alas, value_counts() works on single attributes only, so to handle the multi-variable case, the programmer must dig into Pandas’s powerful split-apply-combine groupby functions. There is a problem with this though: by default, these groupby functions automatically delete NA’s from consideration, even as it’s generally the case with frequencies that NA counts are desirable. What’s the Pandas developer to do?
There are several work-arounds that can be deployed. The first is to convert all groupby “dimension” vars to string, in so doing preserving NA’s. That’s a pretty ugly and inefficient band-aid, however. The second is to use the fillna() function to replace NA’s with a designated “missing” value such as 999.999, and then to replace the 999.999 later in the chain with NA after the computations are completed. I’d gone with the string conversion option when first I considered frequencies in Pandas. This time, though, I looked harder at the fillna-replace option, generally finding it the lesser of two evils.
The remainder of this notebook looks at these Pandas frequencies options for the same Chicago crime data with almost 6.6M records I illustrated last time. I first build a working data set from the downloaded csv file, then take a look at the different options noted above, finally settling on a poc frequency function using fillna-replace.
Import a few Python libraries.
In [28]:
Load the rpy2 “magic” R extension
In [29]:
Import other rpy2 libraries and activate the the Pandas to R dataframe copy capability for later use.
In [30]:
Define a few simple lambda functions.
In [31]:
Assign directories and file names.
In [32]:
Read the Chicago communities data file into a Pandas dataframe.
In [33]:
Ditto for the FBI crime code/description file.
In [34]:
Read several attributes from the latest Chicago crime csv file downloaded in a separate process. Join in the communities and fbicode descriptions to make crimeplus, the final Pandas dataframe of almost 6.6M records. The unexecuted code between the two “””s, illustrates how to make categorical variables from character objects. The “null” list tells which columns have NA’s.
In [35]:
Define categories of crime from the fbicodes.
In [36]:
Compute frequencies for the communityarea attribute in the crimeplus dataframe using the Pandas value_counts() function, specifying the inclusion of NA’s.
In [37]:
Compute a bivariate frequencies data.frame with NA’s using the workaround of converting the frequency variables to string first, then computing the frequencies, and finally transforming the frequency table columns back to their original types. This sequence is slow and klunky.
In [38]:
Now do the same computation using the alternative workaround of first converting NA’s to a designated “MISSING”, then computing the frequencies, and finally replacing the MISSING’s with NA’s. This is a simpler option that performs better.
In [39]:
Define a generic frequencies function prototype, using the “MISSING” approach.
In [40]:
Test it first on a single attribute.
In [41]:
Now consider a 3-way frequencies combination with the new function. The performance isn’t bad.
In [42]:
A few more examples — the first subsetting for homicide and partial years. Note the addition followed by deletion of the computed year attribute from the data.frame.
In [43]:
Take a look at NA’s for communityarea by longitude.
In [44]:
Now consider homicides by month and year.
In [45]:
Another look at homicides by year followed by a little rmagic — passing the frequencies dataframe to R for ggplot.
In [46]:
Using “rmagic”, load pertinent R libraries and use ggplot on the frequencies data.frame.
In [47]:
Out[47]:
In [48]:
The same with violentcrime.
In [49]:
In [50]:
Now juxtapose annual homicides with those through “numdays” (128) of the year.
In [51]:
In [52]:
Ditto for violent crime.
In [53]:
In [54]:
Originally posted on DataScienceIO.com
You can also follow us on Facebook, Twitter, and Instagram.
| Frequencies in Pandas — and a Little R Magic for Python | 0 | frequencies-in-pandas-and-a-little-r-magic-for-python-109094caaf72 | 2018-06-07 | 2018-06-07 11:26:28 | https://medium.com/s/story/frequencies-in-pandas-and-a-little-r-magic-for-python-109094caaf72 | false | 2,351 | All About Data Science and a Data Science Mastery Portal | null | datascienceio | null | Data Science IO | data-science-io | DATA SCIENCE,UI DESIGN,AI,UX DESIGN,DATA | datascienceio | Data Science | data-science | Data Science | 33,617 | Jennifer Winget | null | 3197692004b2 | nagsciencecorp | 6 | 1 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-08-01 | 2018-08-01 22:28:06 | 2018-08-03 | 2018-08-03 00:40:22 | 1 | false | en | 2018-08-03 | 2018-08-03 00:40:22 | 1 | 1091ad1d489 | 2.022642 | 1 | 0 | 0 | Last week, I had the honor and pleasure of being a judge for this year’s Imagine Cup. Now in it’s 16th year, the Imagine Cup has become THE… | 1 | Seeing The Future of Technology — Up Close and Personal
Last week, I had the honor and pleasure of being a judge for this year’s Imagine Cup. Now in it’s 16th year, the Imagine Cup has become THE world’s premium student technology competition. It is an opportunity for students from around the world to showcase projects they’ve created to help change the way we live and work. Over the past few months, 40,000(!) students competed in regional events across the globe. From these, 49 teams moved forward to the three-day World Finals last week at Microsoft HQ in Redmond, WA.
By competing at this level on a global scale, the students have shown themselves to be some of the most innovative and creative developers in the world. Whether they decide to continue with their project as the cornerstone of a business or join an existing company, they are certain to continue pushing the limits of cutting-edge technology, and, we’ll all benefit from it.
As a co-founder of a technology company and an investor in early stage companies, I’m clearly excited about technology. But, my experience as judge energized me in a way I didn’t expect. The competitors were tasked with leveraging cloud-based technologies of today and the near-future. Their use of AI, mixed reality and big data are good indicators of how these technologies can be used to change lives for the better.
One of the most interesting things about the finalists was that they inserted technology into people’s lives in a way we’re just starting to see. It’s a clear shift from people interacting with technology to technology interacting with, and enhancing, the human experience. For example, the third place winner, Team Mediated Ear of Japan, applied deep-learning to audio waveforms. This enables hearing-impaired individuals to isolate voices of specific speakers when multiple conversations are happening at once. Second place went to Team iCry2Talk of Greece. Their project helps a parent understand their baby’s cries in real time through an intelligent interface, then associates it with a specific physiological and psychological state, depicting the result in a text, image and voice message.
The winner was Team smartARM of Canada, which created a robotic hand prosthetic that uses a camera embedded in its palm to recognize and calculate the most appropriate grip. It uses Microsoft Azure Computer Vision, Machine Learning and Cloud Storage and its use of machine learning means the more it is used the more accurate it will become.
Innovative technology improving humanity — how could I not be in awe of all the teams. I’m excited to follow the development of this year’s projects as well as what’s to come next year’s Imagine Cup. If you have kids, the Imagine Cup is a truly inspiring watch and I’d encourage you to watch it with them and encourage them to apply in the future!
| Seeing The Future of Technology — Up Close and Personal | 6 | seeing-the-future-of-technology-up-close-and-personal-1091ad1d489 | 2018-08-03 | 2018-08-03 00:40:23 | https://medium.com/s/story/seeing-the-future-of-technology-up-close-and-personal-1091ad1d489 | false | 483 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Erica Brescia | Co-founder and COO of Bitnami, Linux Foundation Board Member, XFactor investor, YCombinator alum, and, most importantly, loving wife and mother. | 365c716eb5cb | erica.brescia | 188 | 104 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 70e3cda8f6c9 | 2018-03-25 | 2018-03-25 18:57:43 | 2018-03-25 | 2018-03-25 19:12:44 | 1 | false | en | 2018-03-25 | 2018-03-25 19:12:44 | 1 | 1091b8de1fd4 | 4.309434 | 2 | 0 | 0 | Hollywood is obsessed with artificial intelligence. The last year alone saw a string of AI related movies, including Blade Runner 2049… | 4 | AI in 2018 — What works and what doesn’t
Hollywood is obsessed with artificial intelligence. The last year alone saw a string of AI related movies, including Blade Runner 2049, Marjorie Prime and Star Wars: The Last Jedi. There is a strong desire in the general public to see depictions of artificial intelligence that seem truly human. While this desire to create human equivalent AI systems is still a fantasy, progress in real AI technology is still showing amazing results.
Science Fiction has a long history of spurring real world innovation, but where do we stand with artificial intelligence?
What’s Working
Decision-tree Established Chatbots
Chatbots are the closest to emulating human interaction. Airlines, customer support and even retail have employed highly advanced AI chatbots in their forward facing interactions. These chatbots take the place of human agents and provide better, faster service. As machine learning advances, they are being augmented with customized personalities suited to their industry.
Easily the largest market for messaging services, China is at the forefront of the chatbot revolution. China’s Google equivalent, Baidu, has recently released their Melody AI chatbot for medical assistance. This advanced chatbot uses AI to collect information from patients to provide to physicians, sometimes with pertinent recommendations. As Melody is available to many people at once, any time of day, she can ask pre-screening questions that would otherwise result in increased wait times at clinics. This information is then available to the attending medical professional.
The travel industry deployed chatbots to help gather information as well. These forward facing AI systems gather information from the client to help curate travel content. The client is then sent to a human agent that has access to all of the information previously provided. This severely cuts down on overhead for the travel company, without decreasing the service to the end user.
Personalized Assistant Apps
The fantasy of a ‘Robot Butler’ has hounded society for years, but now we are starting to get close to the reality. Personal AI assistants like Amazon’s Alexa are entering the market in droves, giving users the ability to converse directly with their AI with voice recognition. These simple tasks are accessed without direct manipulation and simplify user’s lives. As this technology progresses, we will see more integration on the web — which will require a shift towards textual chatbots. These chatbots are still easy to use, and learn dialects, slang and inflection through continued interactions with customers in a more efficient manner than their voice recognition counterparts.
What Doesn’t
Crowdsourced machine learning-based chatbots
Chatbots that are left fully open to the public suffer from the interaction. Large populations of completely unfiltered users spur the creation of unsavory characteristics. With no inherent investment in the platform, users will amuse themselves by teaching the AI to insult, berate and spew hateful language. This isn’t all that different from someone teaching a child to swear, because they think it’s funny.
Instead, chatbots should be used for highly focused reasons. These reasons include collecting data, directing users and allowing customers to self-serve. In these situations, only the most deviant individuals will seek to sabotage the system. If a user has no direction to their interaction, they may very well focus on amusing themselves — to the detriment of the artificial intelligence.
Natural language processing/understanding
We mentioned Alexa earlier when discussing personalized AI applications. While the market share for this type of device continues to increase, they still struggle with fully integrating natural language processing and understanding. Similar technology ‘Siri’, the Apple iOS virtual assistant, has seen a usage drop. Novelty is wearing off, which results in a decrease to the patience of the customer base. While technology is new, people are willing to forgive mistakes and difficulties — but over time, those annoyances will build up.
Alexa itself has been plagued with strange interactions. These include a viral video of a child asking the AI assistant to play one of her favorite songs, but due to the similarity to other words, resulted in a string of pornography related keywords. Another child used the bot to order multiple pounds of sugar cookies. This was a relative non-issue, until news reports of the situation triggered additional Alexas to make the same mistake.
In only the last few days, a glitch in the Alexa system resulted in sporadic laughter in the middle of the night. These hurdles result in the user base viewing artificial intelligence as a HAL9000 equivalent, rather than Rosie the Robot from the Jetsons.
Everything to Expect in 2018
Artificial intelligence is a true, market tested technology that is rapidly evolving. Yet, we are still years away from the ‘Replicants’ of Blade Runner. Some are still terrified of the idea of an AI uprising — a true Terminator type scenario — but the realities of AI prevent that occurring anytime soon. Everyone wants to see Alexa as a true peer that can communicate at a human level, but the technology has simply not advanced that far. 2049 is still over 30 years away — so there’s still time for Replicant development!
We will continue to see artificial intelligence integrated into our lives. In immediate future, chatbots are going to become completely ubiquitous and aide in human interactions. The level of labor reduction these offer will make their adoption a complete necessity. Companies that fail to see their potential will be at a critical disadvantage. Customer service is a notoriously low-performing sector of any industry, and consistent, always available chatbots are already changing public perceptions. The age old adage of ‘Just wanting to talk to a human,’ is going away, as people realize that chatbots are often more efficient and less stressful than talking to a human agent.
Voice recognition software has a long way to go, but the technology is advancing. There is no doubt that it will one day reach the level where it can pass the Turing test — but we aren’t there yet. Chatbots, however, have matured to the level where they can frequently be mistaken for a human customer service agent. AI has a long way to go, but chatbots are here today.
At Synthetics AI, we connect consumers & brands through conversation.
We develop chatbot solutions for innovative businesses and market leaders that offer instant communication between brands and users, delivers better engagement and provides a more personalized way to serve clients and create awareness.
If you want to find out more about our Project and ICO campaign, visit our website: https://www.syntheticsai.com/
| AI in 2018 — What works and what doesn’t | 8 | ai-in-2018-what-works-and-what-doesnt-1091b8de1fd4 | 2018-05-16 | 2018-05-16 21:34:54 | https://medium.com/s/story/ai-in-2018-what-works-and-what-doesnt-1091b8de1fd4 | false | 1,089 | Syntheticsai.com | null | null | null | Synthetics AI | null | synthetics-ai | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Baiceanu Bogdan | Co-founder @ Syntheticsai.com | eba62faca769 | baiceanubogdan | 119 | 30 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-11-27 | 2017-11-27 14:52:37 | 2017-11-27 | 2017-11-27 14:59:36 | 0 | false | es | 2017-11-27 | 2017-11-27 14:59:36 | 8 | 10929bcb6e92 | 1.196226 | 1 | 0 | 0 | Mi primer contacto con los datos puede que fuera en cuarto de primaria, cuando me pidieron que eligiese un animal e hiciera un trabajo en… | 5 | Necesito los datos!
Mi primer contacto con los datos puede que fuera en cuarto de primaria, cuando me pidieron que eligiese un animal e hiciera un trabajo en word sobre sus características: elegí la foca. Wikipedia, que empezaba por entonces a popularizarse entre los estudiantes, hizo la mayoría de trabajo, pero oye, en aquel entonces la tecología del copy-paste era puntera.
Bien, los años pasaron, y acabé en el universidad haciendo gráficos sobre la influencia de los esclavos en el GDP de Cuba, el impacto de las IT en la política, la economía de en los 60s Iran o la aparición televisiva de Podemos en la campaña electoral de 2015. Al final, todo eran datos!
Creía que los manejaba a la perfección: había una tabla de Excel, se seleccionaba, se creaba un gráfico y se observaba que pasaba con él: Fácil.
No sé si los problemas empezaron el día que usé por tercera vez la función “buscar palabra” de word para contar las palabras en un trabajo de análisis del discurso para unos textos de más de 90 páginas, o cuando me descargué un mapa mundi en blanco para pintarlo en paint. Algo estaba fallando!
Fue entonces cuando empecé a preguntarme: y la gente cómo lo hace? Descubrí que había algo llamado visualización de datos, que los datos además podían manejarse programando y encontré perfiles de twitter tan impresionantes como Elijah Meeks o Nadieh Bremer.
A partir de ese momento los descubrimientos no cesaron: R, Python, Codeacademy, Datacamp, Edx, Coursera, Medium, Github, Codewars… los recursos eran tantos que costaba priorizar y empezar un curso en lugar de vagar sin fin por enlaces y más enlaces. Por suerte encontré alguien que ya me había hecho el trabajo y decidí seguir la ruta marcada por David Venturi.
Hace un mes empecé el R Programming Track de Datacamp y pretendo contar mi aprendizaje en este blog des de hoy!
| Necesito los datos! | 1 | necesito-los-datos-10929bcb6e92 | 2018-05-28 | 2018-05-28 22:26:12 | https://medium.com/s/story/necesito-los-datos-10929bcb6e92 | false | 317 | null | null | null | null | null | null | null | null | null | Data Science | data-science | Data Science | 33,617 | Blanca C-F | Learning R and SQL. Interested in public discourse analysis: traditional media, politic speeaches and Twitter trends. Just a begginer, any help is welcomed. | 6cca139a3ce7 | Blanca_C_Fi | 1 | 11 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-08-22 | 2018-08-22 05:15:19 | 2018-08-22 | 2018-08-22 05:15:36 | 0 | false | en | 2018-08-22 | 2018-08-22 05:15:36 | 1 | 1092ae78465c | 0.89434 | 0 | 0 | 0 | Venue: Leading B-School Summit | 1 | 60–20–10–5
Venue: Leading B-School Summit
Agenda: Ideal %spilt of an org’s tech focus
Concept:
60% on what “has happened”
20% on what “is happening”
10% on what “will happen”
5% on what “may happen”
1998 — For me as an undergrad fresher, Electronics was #1 (60%). TI, Nat Semi best recruiters, Intel the dream. Software (20%) was in design, Infosys <$100M in revenue. www (10%) was in waiting — Google in college, Yahoo just out. Mobiles (5%) were merely a status symbol.
2007 — Tech had moved 1 step in. For me as a new MBA, software (60%) was everywhere. My class was them and they all went back to it. UI/UX led wwwv2.0 (20%) was in vogue but penetration still sucked. Jobs in ’07, not only launched Iphones but also a mobile-led (10%) corporate strategy wave. Post victories in chess, artificial cars and e-media, AI was now the 5%.
2018 — Tech’s now moved 2 steps in. Mobiles (60%) are everywhere: soon subscribers >> India’s population. Oracle claims we will be ruled by the cognitive power of AI (20%), the intuitive ability of IoT (10%) and the infallible memory of blockchain.
We, at OfBusiness, have taken baby steps too in using NLP and Machine Learning in our flagship product, Bidassist. The fun has just begun as the new age technologies leapfrog to 60% of our and each org’s focus.
Originally published at www.ofbusiness.com.
| 60–20–10–5 | 0 | 60-20-10-5-1092ae78465c | 2018-08-22 | 2018-08-22 05:15:36 | https://medium.com/s/story/60-20-10-5-1092ae78465c | false | 237 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Asish Mohapatra | CEO and Co-Founder - OfBusiness and Oxyzo Financial Services , IIT Kharagpur, ISB Hyderabad | 36f3646a619c | asishmohapatra | 1 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 97cefb4d066 | 2018-08-23 | 2018-08-23 21:56:35 | 2018-08-20 | 2018-08-20 20:50:29 | 4 | false | en | 2018-08-23 | 2018-08-23 22:00:16 | 4 | 10939d5fc356 | 3.156604 | 3 | 1 | 1 | Knowledge sharing fueled by AI | 5 | Introducing the Relay Knowledge Ecosystem
Knowledge sharing fueled by AI
One of the most significant challenges for those trying to offer service and support is being able to relay the proper internal knowledge to customers who need it promptly. As organizations and their product and service offerings grow, the amount of support-related information continuously expands. As this scope of knowledge and information broadens the gap between a tenured, experienced support agent and a new hire grows tremendously.
AI-powered automation can help bridge this gap and make all of your organization’s support and service-related knowledge accessible to your customers at the click of a button. We created the Relay Knowledge Ecosystem to provide the knowledge organization and automation needed to take your support or service offering to the next level.
What is the Relay Knowledge Ecosystem?
The Relay Knowledge Ecosystem brings your organization’s knowledge to your customers through chatbots and knowledge base articles that are fueled by Artificial Intelligence (AI). Getting onboard and using the Relay Knowledge Ecosystem is a breeze too as in many cases we can import your organization’s existing knowledge base (if you have one) and make it immediately accessible via chatbots and a visual search bar embedded on a landing page with customizable branding.
Don’t worry about building bots. We’ve fully automated that process with the Relay software. Now your organization can merely focus on creating a vast knowledge base while knowing that it will be accessible via chatbots and a search bar presentation similar to the Google experience we all know and love. Talk about killing two birds with one stone…
The Relay Knowledge Base & Search Bar. Your organization’s personal Google engine
When it comes to searching and accessing internal knowledge, it’s tough to beat the concept of a Google-esque search bar. Unlike a chatbot, a search engine is tailored to display a bevy of relevant search results thus increasing the likelihood that you find exactly what you’re looking for.
While chatbots are an ideal solution for displaying knowledge to external customers, your organization’s internal employees may prefer to find and access knowledge via a search bar since they will likely have the insight needed to weed through a selection of search results. Your customers probably won’t have that same insight, and therefore a conversational approach via chatbots is more likely to provide success.
Deliver your knowledge to your customers via Chatbots
While a traditional search bar approach works great for your internal team, it often requires context and insights for success that your customers don’t have. When delivering your organization’s knowledge to customers, chatbots are the way to go.
The Relay platform uses proprietary AI and Machine Learning (ML) powered search algorithms that can deliver the same piece of content to your internal team via the Google-like search bar we just highlighted and/or your external customers via the Relay Bots. Just create your piece of knowledge content once, and decide whether it’s searchable via the search bar, chatbots, or both. With the Relay Knowledge Ecosystems, we’re already making manual chatbot creation a thing of the past.
So what’s next?
If you aren’t currently using knowledge bases and chatbots towards the goal of being able to provide better service and support, then you’re running the risk of falling well behind the competition. Start doing your research and find out whether or not a chatbot and/or knowledge base aligns with the focus of you or your organization. For tips on determining the best chatbot platform for your needs check out one of our previous posts on the subject.
We would love to give you a demo to show you just how powerful our native chatbots are when combined with the Relay intelligent support platform. Schedule a demo, and we’ll help you transform your support and service operations and take your organization to new heights.
Originally published at thinkrelay.com on August 20, 2018.
| Introducing the Relay Knowledge Ecosystem | 52 | introducing-the-relay-knowledge-ecosystem-10939d5fc356 | 2018-08-23 | 2018-08-23 22:00:16 | https://medium.com/s/story/introducing-the-relay-knowledge-ecosystem-10939d5fc356 | false | 651 | The #1 place to learn how AI, Chatbots, ML, NLP, and technology, in general, will revolutionize the way we get support and ask for help. | null | null | null | Support Automation Magazine | support-automation-magazine | AI,ARTIFICIAL INTELLIGENCE,CHATBOTS,AUTOMATION,SUPPORT | null | Chatbots | chatbots | Chatbots | 15,820 | Casey Phillips | AI Chatbot UX Owner @ Intuit. AI fanatic, tech enthusiast, and passionate product builder! LinkedIn.com/in/casey-phillips-mba-pmp/ | e3f0721527ec | casphill | 470 | 538 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-08-12 | 2018-08-12 04:48:17 | 2018-08-12 | 2018-08-12 04:48:17 | 8 | false | en | 2018-08-12 | 2018-08-12 04:58:20 | 5 | 109684f95dcd | 7.201258 | 1 | 0 | 0 | Originally published at spark-in.me on July 15, 2018. | 5 | Playing with Crowd-AI mapping challenge — or how to improve your CNN performance with self-supervised techniques
Originally published at spark-in.me on July 15, 2018.
Looks really good, huh? This is end-to-end evaluation. Two major failure cases — inconsistent performance with smaller objects near bigger houses + some spurious False Positives sometimes)
TLDR
In 2018 ML competitions are plagued by:
“Let’s stack 250 models” type solutions;
Very nasty / imbalanced / random data-sets, which essentially turn any potentially interesting / decent competition into a casino;
Lack of inquisitive mentality — just stack everything — people do not really try to understand which technique really works and how much gain it provides;
Too much BS and marketing;
When looking a Crowd AI mapping challenge, I tried several “new” techniques, which worked really well in Semseg and can be partially applied to other domains:
Looking at the internal structure of the data / understanding the key failure cases => leveraging it via loss weighting (a bit of self-supervised fashion);
Polishing your Semseg CNN fine-tuning — namely training last epochs of your CNN with higher DICE loss weight;
I will not be releasing any code at this stage, because the competition is not yet over, but I will gladly share my insights to facilitate discussion.
Why Crowd-ai mapping challenge?
I believe it is a really good test playground for testing new CNN-related ideas for a number of reasons:
The dataset is really well-balanced;
The hosts were kind enough to provide a lot of starter-kit boilerplate to do the most tedious tasks;
There is a stellar open solution to the task;
Standard techniques
I believe the following Semseg CNN related techniques to be standard in 2018, but I will list them anyway for completeness:
Using UNet / LinkNet style CNNs;
Using pre-trained encoders on Imagenet;
Trying various combinations of encoder-decoder architectures;
Trying augmentations of vastly different “heaviness” (you should tune capacity of your model + augmentations for each dataset);
(Funnily enough, they rarely write about this in papers).
Open solution
From an ML perspective it is great. No, really. Their write-up is amazing.
Weight illustrations in the open solution. Note that they do not show the final weights. If they did — the need to calculate n² distances will be less apparent.
These are my corresponding weights
An this is how the CNN sees the weights after parametrization / squeezing — no real difference on how to calculate the distances
But I have a number of serious bones to pick with it and I have voiced my concerns in the gitter chat here, but I will also repeat them here:
This solution is a blatant marketing of a US$100-per-month-minimum solution for ML, which I guess is supposed to solve scaling / experimentation problems, but I cannot see why Tensorboard + simple logs + bash scripts cannot solve this for free without the code bulk + introducing extra dependencies. Ofc, using this platform is not mandatory, but why would you otherwise ;
The code bulk in the repo — it contains 3–4 levels of abstractions — decorators on decorators. It’s really ok, if you invested in a team of at least several people that compete in interesting competitions as their job, but when you are publishing your code for the public — it looks a bit like those obfuscated repositories with TF code. Their code is really good, but you have to dig through all of there extra layers to get to the “meat”;
The advertised neptune.ml experiment tracking capabilities. Yeah, it tracks all of the hyper-params. But if you follow a link to their open spreadsheet with experiments, you cannot really understand much from there => then what is the tracking advantage? Yeah, running your model on amazon with 2 lines of code is great, but also extremely expensive. Maybe it’s better to just assemble a devbox? It looks like they wrote a stellar write-up, but used this tracking only for “illustration”, not like a real tool that is crucial;
The idea about distance weighting is really good — but they implemented line by line as written in UNet paper it by calculating n² distances, which in my opinion is over-engineering. Just visually, after “squishing” even one distance transform provides similar mask weights;
From these points my biggest concern is that new ML practitioners will see this and possibly may draw the following conclusions, which ARE HIGHLY DETRIMENTAL TO THE ML COMMUNITY FROM AN EDUCATIONAL STANDPOINT:
You need some paid proprietary tools to track experiments. No, you do not. It starts to make sense when you have a team of at least 5 people and you have a LOT of different pipelines;
You need to invest into highly abstract code. No, you do not. CLI scripts + clean code + bash scripts is enough;
Implementing UNet paper word-by-word is ok, but probably calculating n² distances is not useful;
What worked? Which internal structure did I explore?
Current state / my simple ablation analysis
Currently, I did not implement the second stage of the pipeline, as the second stage is delayed and last time I checked the submits were frozen.
But here is a very brief table with my major tests (I ran ~100 different test, training ca 20–30 models till convergence).
(*) Challenge hosts provide a tool for local evaluation. It is based on jsons, that have to be filled with polygons + some confidence metric. The authors of the open solution claim that this weighting has a really major impact on the score. I did not test this yet, but I believe it can easily add 2–3 percentage points to the score (i.e. reduce False Positives + make the score .9 => .92-.93 for example);
(**) I just adopted a histogram based F1 score from DS Bowl. I guess it works best when you have many objects. We did a naive implementation of a proper F1 score together with visualization — it is slow, but shows much higher score;
(***) My best runs with (i) size based mask weighting (ii) distance-based mask weighting (iii) polishing with 10x DICE loss component weight;
(+) This is hard DICE score @ 0.5 threshold. Essentially a % of guessed pixels. Somewhere I do not remember the best score exactly;
Some learning curves. (3) — one of the best runs. (2) — effect of untimely LR decay
My best run so far — starting with lr 1e-4 + higher size-based weights — trains faster, achieves a bit better score
And here is the gist of my architecture search.
(0)Encoder architectureBest model — the fattest ResNet152I did some ablation tests and: — ResNet101 was a bit worse — ResNext was close, but heavier — VGG-family models over-fitted heavily — Inception family models were good, but worse than ResNet
(1)Model architectureLinkNet and UNet based models were close with fat ResNet encoders, but UNet is 3–4x times slowerJust for lulz, maybe it’s worth leaving a UNet for a couple of days? =) But that borders on remembering the dataset …Note, that ideally (0) and (1) should be also tested with the whole pipeline, which I did not do yet (my friend will do the second pipeline part).
(2) Augs
Played with different levels of augs, small augs were the best, model does not overfit (therefore it is very interesting to see the second stage data — maybe the data will be different => all the training know-hows will become useless, as it always happens with the competitions on Kaggle ..)
(3) Training regime
Freeze encoder, tune the decoder with lr 1e-3 and adam for 0.1 of the dataset (randomly ofc)
Unfreeze, train with lr 1e-4 and adam for 1.0 of the dataset (randomly ofc)
Train with lr 1e-5 for 1.0 of the datasetIncrease DICE loss 10x and train as long as you want — this possibly may be very fragile (!) if the delayed test dataset is different
(4) Loss weighting
Visually I saw no difference in using only one distance transform vs. calculating distances between each object => they are squished anyway
UNet weighting worked best with — w0 5.0 — sigma 10.0 , which means that the weights are distributed [1;5]Size weighting worked, and gave +3–5% F1 score
Distance weighting did not improve the result, but together with size weighting there was a slight improvement
Things I decided not to try
Things I decided not to try — morphological operations, erosion, borders etc
Morphological operations — they are unnecessary here;
Deep Watershed inspired approaches;
Proposal based models — they have been shown to be a good start on DS Bowl, but much harder to tune later on;
Recurrent models — too complicated, too big objects;
Things yet to try (updated)
Build a 2 stage end-to-end pipeline with polygons + meta-data and LightGBM;
Try the following tricks:
AdamW;
CLR + SGD / Adam (did not really help);
Weight-based ensembling (did not really help);
TTA;
Gradual encoder unfreezing (just for the sake of it), batch-norm freeze (networks with and w/o freezing converged the same);
3. A couple of ablation tests:
Maybe lighter UNet;
Maybe A/B test other weighting approaches (tried using 2 weights — to the closest house and to the second closes — did no helo);
If you interested in this topic — please feel free to comment — we will release our code with more rigorous tests / references / explanations!
Originally published at spark-in.me on July 15, 2018.
| Playing with Crowd-AI mapping challenge — or how to improve your CNN performance with… | 3 | playing-with-crowd-ai-mapping-challenge-or-how-to-improve-your-cnn-performance-with-109684f95dcd | 2018-08-12 | 2018-08-12 04:58:20 | https://medium.com/s/story/playing-with-crowd-ai-mapping-challenge-or-how-to-improve-your-cnn-performance-with-109684f95dcd | false | 1,608 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Alexander Veysov | Data Scientist | f29885e9bef3 | aveysov | 3 | 11 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-10-11 | 2017-10-11 07:48:58 | 2017-10-11 | 2017-10-11 07:54:26 | 1 | false | en | 2017-10-11 | 2017-10-11 07:54:26 | 3 | 1098c52455c2 | 1.083019 | 3 | 0 | 0 | Artificial intelligence and machine learning technologies are already revolutionizing many businesses. Especially B2B selling is one of the… | 5 | ARTIFICIAL INTELLIGENCE IS TRANSFORMING B2B SALES! ARE YOU READY FOR THE AI TRANSFORMATION?
Artificial intelligence and machine learning technologies are already revolutionizing many businesses. Especially B2B selling is one of the areas that artificial intelligence and machine learning software companies are cooperating with enterprises.
To understand how artificial intelligence is helping B2B selling, we need to understand B2B selling process.
We can divide B2B selling into two groups:
Inside sales managed by a CRM installation like salesforce
Field sales which include face to face meetings with clients
For both, selling process starts with lead generation which is identifying possible customers who may have interest for the product that is offered. After that, leads are scored based on the possibilities and some are qualified as opportunities. Then, sales team converts these opportunities into actual sales with offerings and services.
Through this sales pipeline, there are many tools that assist sales teams to automate some of the process or guide such as digital assistants that automate mails, automate meeting set ups; chatbots or some analytics software.
Artificial intelligence steps in at this point. Most new generation sales tools use artificial intelligence and machine learning systems. AI vendors enable companies to generate, score and convert leads. For more information on these AI use cases and vendors providing these solutions, please visit our comprehensive guide on lead generation
You can find the explanatory article about lead generation here.
| ARTIFICIAL INTELLIGENCE IS TRANSFORMING B2B SALES! ARE YOU READY FOR THE AI TRANSFORMATION? | 3 | artificial-intelligence-is-transforming-b2b-sales-are-you-ready-for-the-ai-transformation-1098c52455c2 | 2018-02-09 | 2018-02-09 02:38:54 | https://medium.com/s/story/artificial-intelligence-is-transforming-b2b-sales-are-you-ready-for-the-ai-transformation-1098c52455c2 | false | 234 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | appliedAI | null | f7fce25af699 | appliedAI | 58 | 29 | 20,181,104 | null | null | null | null | null | null |
0 | !git clone https://github.com/bourdakos1/capsule-networks.git
!pip install -r capsule-networks/requirements.txt
!touch capsule-networks/__init__.py
!mv capsule-networks capsule
!mv capsule/data/ ./data/
! ls
import os
import tensorflow as tf
from tqdm import tqdm
from capsule.config import cfg
from capsule.utils import load_mnist
from capsule.capsNet import CapsNet
capsNet = CapsNet(is_training=cfg.is_training)
tf.logging.info('Graph loaded')
sv = tf.train.Supervisor(graph=capsNet.graph,
logdir=cfg.logdir,
save_model_secs=0)
path = cfg.results + '/accuracy.csv'
if not os.path.exists(cfg.results):
os.mkdir(cfg.results)
elif os.path.exists(path):
os.remove(path)
fd_results = open(path, 'w')
fd_results.write('step,test_acc\n')
with sv.managed_session() as sess:
num_batch = int(60000 / cfg.batch_size)
num_test_batch = 10000 // cfg.batch_size
teX, teY = load_mnist(cfg.dataset, False)
for epoch in range(cfg.epoch):
if sv.should_stop():
break
for step in tqdm(range(num_batch), total=num_batch, ncols=70, leave=False, unit='b'):
global_step = sess.run(capsNet.global_step)
sess.run(capsNet.train_op)
if step % cfg.train_sum_freq == 0:
_, summary_str = sess.run([capsNet.train_op, capsNet.train_summary])
sv.summary_writer.add_summary(summary_str, global_step)
if (global_step + 1) % cfg.test_sum_freq == 0:
test_acc = 0
for i in range(num_test_batch):
start = i * cfg.batch_size
end = start + cfg.batch_size
test_acc += sess.run(capsNet.batch_accuracy, {capsNet.X: teX[start:end], capsNet.labels: teY[start:end]})
test_acc = test_acc / (cfg.batch_size * num_test_batch)
fd_results.write(str(global_step + 1) + ',' + str(test_acc) + '\n')
fd_results.flush()
if epoch % cfg.save_freq == 0:
sv.saver.save(sess, cfg.logdir + '/model_epoch_%04d_step_%02d' % (epoch, global_step))
fd_results.close()
tf.logging.info('Training done')
| 11 | 8a5c79a9c9e6 | 2017-11-25 | 2017-11-25 06:04:04 | 2017-11-27 | 2017-11-27 09:11:04 | 4 | false | en | 2017-11-27 | 2017-11-27 09:11:04 | 9 | 1099f5c67189 | 2.598113 | 118 | 4 | 0 | Practical way of exploring Capsule Networks (by Geoffrey Hinton) using Tensorflow on Colab notebook. | 4 | Running CapsuleNet on TensorFlow
Geoffrey Hinton in his lecture
So now we all know that Capsule Networks (by Geoffrey Hinton) is shaking up the AI space and literature states that it will push the limits of Convolutional Neural Network (CNN) to the next level. There are lot of Medium posts, articles and research papers available that discuss about the theory and how it is better than traditional CNN’s.
So I am not going to cover that part, instead I would try to implement the CpNet on TensorFlow using Google’s amazing internal tool called Colaboratory.
Few links you can follow to understand the theory part of CpNet:
Geoffrey Hinton talk “What is wrong with convolutional neural nets ?”
Capsule Networks Are Shaking up AI
Dynamic Routing Between Capsules
Let’s code the network.
Before starting, you can follow my CoLab Notebook and execute the following code.
CoLab URL : https://goo.gl/43Jvju
Now we will clone the repository and install the dependencies. Then we will take the MNIST dataset from the repository and move it out to the parent directory.
Let’s import all the modules.
Initialise the Capsule Network
This is how Capsule Network (CpNet) looks like on Tensorboard graph.
Training the Capsule Network
And creating TF session and running the epochs.
By default the model will be trained for 50 epochs at a batch size of 128. You are always welcome to change the config and try new combinations of hyper parameters.
Training process
For running 50 epochs of Capsule network it took me around 6 hours on NVIDIA TitanXp card.
But after successful training I was able to achieve 0.0038874 total loss which is incredible. 😃 💯
Total loss plot
Download my trained Capsule model
CpNet Model URL : https://goo.gl/DN7SS3
YOU NEVER KNOW UNTIL YOU TRY IT
If you liked this article, my notebook and models, please applause👏 and share with others!
For any queries , reach out to me via LinkedIn , Twitter or email me on [email protected].
| Running CapsuleNet on TensorFlow | 676 | running-capsulenet-on-tensorflow-1099f5c67189 | 2018-06-16 | 2018-06-16 15:16:56 | https://medium.com/s/story/running-capsulenet-on-tensorflow-1099f5c67189 | false | 503 | Organizations partner with our network of AI scientists, bot engineers and creatives to co-create AI & bots | null | BotSupply | null | #WeCoCreate | botsupply | MACHINE LEARNING,ARTIFICIAL INTELLIGENCE,CHATBOTS,DEEP LEARNING,UX DESIGN | botsupplyhq | Machine Learning | machine-learning | Machine Learning | 51,320 | Rahul Kumar | I’m a DeepLearning Enthusiast, an Independent Researcher and Technology Explorer. Chief AI Scientist @ BotSupply.ai | Jatana.ai | fb54c1c7a2a1 | hellorahulk | 312 | 179 | 20,181,104 | null | null | null | null | null | null |
|
0 | input * weight = guess
ground truth - guess = error
error * weight's contribution to error = adjustment
| 3 | null | 2018-08-14 | 2018-08-14 11:57:46 | 2018-08-14 | 2018-08-14 12:58:13 | 4 | false | en | 2018-08-14 | 2018-08-14 12:58:13 | 0 | 109aba9776f | 2.809434 | 2 | 0 | 0 | Neural networks are the mechanics behind how machine learning (or deep learning works). Instead of an engineer having to instruct a machine… | 1 | Neural Networks
Neural networks are the mechanics behind how machine learning (or deep learning works). Instead of an engineer having to instruct a machine on exactly what to do, they make it possible for software to learn to from input it is receiving and come up with responses on its on. Some common use cases are image recognition, search engines results, social media friend suggestions.
So put simply a neural network takes in input and calculates a result through an algorithm. Here’s a simple diagram of what that looks like:
The information passes through a network of nodes. The first layer is the input layer. No calculations happen here. The second layer (represented as one layer in the diagram but can be none to very, very many) is the hidden layer. This layer contains nodes that are given certain “weights.” The weight of the node determines the importance of its result. Calculations are done here based on the input passed through from the input layer. The last layer is the output nodes and here we have our result(s).
From these results a neural network can make a guess. It needs some training to some extent in order to learn how to make the correct guess. For instance, if we were trying to train a neural network to recognize pictures of cats, we would first give it a picture, then it would make a guess as to whether or not it was a picture of a cat. Then there would have to be some kind of confirmation as to whether or not it guessed correctly. Perhaps during the training phase we would use only pictures that had attached data about what was in the picture.
If we are to reeeeeally dumb down the process into a verrrrrrry simplified equation it would look like this:
Then we compare it to the correct answer:
Then based on the error we need to make the necessary adjustments to our algorithm.
In this phase, the adjustment is walked back down all the hidden weighted nodes to change their weights. Then hopefully the next time it sees a picture of a kitten it will know it’s a kitten!
So these are the simplified concepts. A usable neural network would be closer to this diagram than the previous one:
We have to note that each node affects multiple other nodes and that connections and weights that must be calculated add up quickly. Imagine in our picture example that each input node takes in a piece of a picture. Even if we just want to recognize a handwritten letter “A.” This can be really hard.
In figure (a) we see the handwritten ‘A’ as we would see it. In figure (b) we see the a as the our neural network would see it if we gave it 48 input nodes.
The previous deep neural network diagram had 8 input nodes.
So this is a just the tip of the iceberg. If you’re interested in learning more, I highlight recommend jabrils’ youtube channel. He is a self taught coder who has some really well done videos explaining how neural networks work. He also documents his projects and experiments on youtube so you can see some fun examples of machine learning.
| Neural Networks | 55 | neural-networks-109aba9776f | 2018-08-14 | 2018-08-14 12:58:13 | https://medium.com/s/story/neural-networks-109aba9776f | false | 559 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Katrina Garloff | Software Engineer | 9b97a8e11e2b | katrinagarloff | 5 | 16 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | c3edb4be73d2 | 2017-10-13 | 2017-10-13 14:04:58 | 2017-10-18 | 2017-10-18 20:31:50 | 3 | false | en | 2017-11-14 | 2017-11-14 22:23:58 | 5 | 109bbe7a9a68 | 4.282075 | 1 | 0 | 0 | Lingkang is a NextAI entrepreneur and Co-founder of InspiRED Robotics. | 4 | #NEXTFounderChats: Lingkang Zhang
Lingkang is a NextAI entrepreneur and Co-founder of InspiRED Robotics.
Tell us a bit about you and your venture! What was your inspiration for creating your venture?
I started building robots with perception capacities in 2010 when I was an undergraduate student. By the end of the 4 years, I had competed and had created several award-winning technologies in the robotics space. After graduation, I started graduate research in human-machine interaction at Autonomy Lab, led by Prof. Richard T. Vaughan at Simon Fraser University (SFU). My graduate research was all related to the design and implementation of visual perception methods for robots. I also completed an business graduate program focusing on science and technology commercialization in SFU, where I implemented the first version of the business plan for InspiRED Robotics with the help of Prof. Elicia Maine.
InspiRED Robotics is dedicated to building deep neural network based machine vision solution for embedded devices. It helps drones, smart TVs, toys and other consumer electronics products to understand humans’ gestures and behaviours, so that they can interact with human more autonomously and efficiently.
Understanding human behaviours is the key for intelligent machine systems. And vision is the most important perception method for humans to understand each other. In Autonomy Lab and Vision and Media Lab in SFU, where the co-founding team of InspiRED Robotics comes from, researchers has been working on machine vision techniques to help machines to understand humans and interact with humans autonomously for more than 10 years. Now it is the right time to commercialize the technology since we finally have processors and software algorithms good enough to handle the task.
What problem is your venture solving? Why did you choose to tackle this market?
When we talk about machine vision or artificial intelligence in general, we usually think about those super computers in giant boxes running on the cloud. However, the super computers are not always available because of the price, size and power consumption, especially for mobile devices with embedded systems. And there are other issues for cloud computing including transmission delay and data privacy. Most of existing solutions cannot solve the problem yet. We chose to tackle this market since we would like to make deep learning based machine vision technique more accessible, not only for the super computers but also for the mobile embedded platforms.
What are some of your venture’s biggest milestones?
Through one of the networking events of NextAI and the Ontario government, we got connected with an important strategic partner in China to help us get into the Chinese market. The first customer is always the hardest to get. After we established the partnership with our Chinese partner, we got another two B2B customers right after we graduated from the NextAI program.
When you first started, what were your biggest hurdles in building your venture?
When we first started, most of the team members come with a technical background. Although I have a business degree myself, there was still a lot to learn to start a real business. We spent a long time to figure our the related policies on business registration, accounting, tax, and other topics. Another challenge is to get the first customer. We sent a lot of cold emails and did a lot of cold calls as well just to figure out whether the market exists and what the customers actually want. According to our experience, it is usually easier if you can get a warm introduction to your potential customers by someone they already knew, than simply cold calling them.
How do you believe technology will impact your industry over the next decade?
Machines equipped with visual perception capabilities and powered by the artificial intelligence will serve the humans better. This will not only happen to the consumer electronics, but also to the industrial machines. The deep learning technology will help solve the problems that was hard to solve before, for example to detect certain objects and recognize gestures in very dynamic and complicated environment reliably.
What piece of advice would you give to someone who wants to start a company in your industry?
Although technology itself is certainly crucial for an AI startup, my suggestion would be to put enough focus on real-life applications and use-cases. Just to make sure that the needs and the markets actually exist before digging too deep into the technology development.
What are 3 books, blogs or newsletters you recommend for entrepreneurs looking to make an impact in your industry?
Elon Musk:Tesla, SpaceX, and the Quest for a Fantastic Future
IEEE Spectrum Robotics: https://spectrum.ieee.org/robotics
The Robot Report: https://www.therobotreport.com/map/
What would you say are the most important skills needed to be a successful entrepreneur?
Always working hard: I mean literally putting in 80- to 100-hour weeks every week.
Making right decision with limited information: I learned this from Reza Satchu who is part of Next Canada and also a very successful entrepreneur.
Willingness to learn and learn fast: the world is changing fast and as an entrepreneur you have to move faster by learning;
Who is one person that has tremendously helped you through your time in NextAI? How did they help you?
Annick Dufort, the program manager of NextAI, has been so helpful during the whole process. As one of the entrepreneurs in the first cohort of NextAI, I would say the program is very well organized and I can see that Annick and the Next Canada team have put lots of effort in it. Annick not only helped on “small things” such as working space arrangement, but also helped making connection with important business partners for us. When we attended events together, she was always happy to make introductions for us to other attendees. I would like to say thank you to Annick and the Next Canada team.
The InspiRED team.
Applications for the 2018 NextAI cohort are open! Click here to apply.
| #NEXTFounderChats: Lingkang Zhang | 4 | nextfounderchats-lingkang-zhang-109bbe7a9a68 | 2017-11-14 | 2017-11-14 22:23:59 | https://medium.com/s/story/nextfounderchats-lingkang-zhang-109bbe7a9a68 | false | 989 | A national non-profit dedicated to developing Canada’s most promising entrepreneurs & innovators through 3 programs: @next36 @Next_Founders @_NextAI | null | nextcanadaorg | null | NEXT Canada | next-canada | INNOVATION,TECHNOLOGY,STARTUP,TORONTO,CANADA | next_canada | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | NEXT Canada | A national non-profit dedicated to developing Canada's most promising entrepreneurs & innovators through 3 programs: @next36 @Next_Founders @_NextAI | 11b5a41c25fa | NEXTCanada | 464 | 147 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-09-20 | 2018-09-20 02:13:51 | 2018-09-20 | 2018-09-20 02:25:22 | 1 | false | en | 2018-09-20 | 2018-09-20 07:22:53 | 2 | 109e81894c68 | 1.830189 | 125 | 4 | 0 | Virtual Rehab’s evidence-based solution uses Virtual Reality, Artificial Intelligence, & Blockchain technology for Pain Management… | 5 | Virtual Rehab Founder Awarded United Nations “Expert” Status
Virtual Rehab’s evidence-based solution uses Virtual Reality, Artificial Intelligence, & Blockchain technology for Pain Management, Prevention of Substance Use Disorders, and Rehabilitation of Repeat Offenders.
Dear All,
Hope you are doing well and enjoying our recent updates.
Well, today, we have a very special update for all of you. It won’t be a lengthy one but definitely something to be extremely proud of — not only to Virtual Rehab but also to everyone who is supporting us throughout this exciting journey.
Ladies and Gentlemen, are you ready for the BIG news?
(Well you kind of know it from the title of this article, but we still need to ask)
So, are we readyyyyyyyyy ?????
Well folks, we are honored and we are very pleased to announce that our Founder and CEO, Dr. Raji Wahidy, has been awarded with “Expert” status by the United Nations Global Sustainable Consumption & Production (SCP) Programme with focus on Sustainable Lifestyle and Education.
The below set of expertise are the ones that Dr. Wahidy has been recognized with “Expert” status for:
Sustainability Themes — Human Rights, Education, Poverty Eradication, Other
Sectors of Activity — Education, Scientific Research, Development and Innovation
This is obviously not something for Virtual Rehab to celebrate alone. It is something that we should all celebrate and yet another milestone for Virtual Rehab.
We want to take this opportunity to thank YOU for your support and for spreading the word about our work to help the most vulnerable populations out there. We could not have achieved all these awards and recognition without your backing. We appreciate it very much.
“Expert” Status Recognition by the United Nations Global SCP Programme
Folks, that’s all we have for you for today. We hope that you are as excited as we are about this recent development. Now, in case you have any questions, then please do not hesitate to contact us. Drop in at our Telegram channel. We are here to support.
Our Private Sale will be OPEN very soon ! So, if you are interested in supporting us (and we hope you do), then please drop us a line at [email protected] and we would be happy to tell you more.
Oh, one more thing, kindly note that the KYC registration has been closed for now. We will launch our new website on the 1st of October and we will be launching it with a new KYC service provider. Please note that all submissions made up to this point will be accepted.
And always remember …
Be Safe and Make a Difference in this World !!!
Peace Out !
| Virtual Rehab Founder Awarded United Nations “Expert” Status | 952 | virtual-rehab-founder-awarded-united-nations-expert-status-109e81894c68 | 2018-09-20 | 2018-09-20 07:22:53 | https://medium.com/s/story/virtual-rehab-founder-awarded-united-nations-expert-status-109e81894c68 | false | 432 | null | null | null | null | null | null | null | null | null | Virtual Reality | virtual-reality | Virtual Reality | 30,193 | Virtual Rehab | Virtual Rehab's evidence-based solution uses #VR, #AI, & #blockchain technology for Prevention of Substance Use Disorders & Rehabilitation of Repeat Offenders | f0264e3a3a70 | VirtualRehab | 2,168 | 2 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-05-29 | 2018-05-29 10:53:37 | 2018-05-29 | 2018-05-29 10:55:45 | 1 | false | en | 2018-05-29 | 2018-05-29 10:55:45 | 11 | 109f471de617 | 4.683019 | 0 | 0 | 0 | The great thing about the ready availability of online market research today is the ease with which you can collect loads and loads of… | 4 |
Why Insight Departments Need to Invest in Digital Analytics Skills
The great thing about the ready availability of online market research today is the ease with which you can collect loads and loads of lovely data about your customers, am I right? Acres of bits of information about their likes, wants, needs, histories, motivations and values all detailed in digital qualitative and quantitative research output, agile and on-going. Throw your diligent social media listening output into the mix and what you’ve got is an awful lot of stuff.
But how to make sense of all that stuff? There’s the rub. Because unless you can analyse all of that data and then translate your findings into actionable insights for the business, you might as well have sat and watched back to back episodes of Gardeners’ World (not that there’s anything wrong with that)!
Unless you can analyse your digtal MRX data for actionable insight it’s redundant
To narrow the field, we’d better start with some terms of reference: in this article, I’ll be discussing the digital analysis of primary market research data, as opposed to for example, the performance analysis of digital marketing materials. Whilst both are key to business success, the skills sets required are unique. So what capabilities do insight departments need for the digital analysis of primary market research data?
Digital Qualitative Analysis
One of the biggest issues with the analysis of online qualitative data is the sheer volume of data output that the researcher needs to be able to cut through and make sense of. It is vital therefore, that any insight department has the skills within to effectively code data, to filter it for themes and turn these into actionable insights. Powerful tools such as AtlasTI and NVivo allow for data coding and thematic discovery but the learning curve is steep, so a dedicated training course is a must as is digital aptitude.
If you simply want to present frequency of words/terms in an attractive way, it’s pretty easy to generate a Wordle (though you’ll need to do a bit of data tidying first) — most IT savvy insight professionals could master this in minutes. However, I would argue that Wordle’s are just a very high-level data summary rather than a digital analysis.
An alternative solution is to use an online research software with integral search and tagging tools. These tend to be aimed towards researchers, as opposed to digital data analysts specifically, and as such are easier to operate. Though they might provide more of a high-level analysis than AtlasTI and NVivo, they certainly go further than a Wordle summary and results are often real-time. You will still need a level of digital software understanding in your insight department to set these up but no more than would be required to conduct digital research fieldwork with the same platform.
Digital Quantitative Analysis
Whilst Excel will see you a surprisingly long way with descriptive and basic statistical analysis, there are easier ways of doing a lot more with your digital quant data, not least SPSS. As a dedicated statistics software package used for logical batched and non-batched statistical analysis SPSS naturally requires a degree of bespoke know-how. It is however part of the curriculum in many Social Science and Management University courses, so the graduates in question are likely to have some level of familiarity with the digital SPSS analysis of quantitative data.
Another tool I am particularly fond of is Q (based on the R statistical analysis language). Q is designed specifically for market researchers and not only facilitates digital analysis but also allows for easy replication and templating of attractive charts and tables.
If you have insight team members who have used SPSS’s custom tables add-on, they will walk though Q. Creating tables in Q is easier than in SPSS and the program also includes helpful features such as the display of significance in cross-tabs and so on.
Social Media Listening
Social media analytics tends to falls into two camps; monitoring and listening. Monitoring and listening are different and require different digital analyst skills. Monitoring is primarily concerned with the metrics surrounding social media, i.e. hits and engagements, lending itself more to the performance analysis of digital marketing materials. Social media listening however, is about looking at the sentiment contained within relevant social media comments in order to inform business strategy and for the purposes of this blog will be viewed primary market research data.
In some respects, social media listening can be like being in a busy, noisy pub with a large group; it’s easy to get over-awed by the number of conversations that you could be tuning into, not knowing what to concentrate on at any one point. There are digital tools which can support the listening process. They range in complexity but with a little time invested in training the right researcher would certainly be able to master the basics.
The quality of social media listening insight is actually routed in the human ability to ask the right questions of the digital program analysing the data, and sense checking the outcomes, not their digital ability per se. There is still only so far that automated tools can go in correctly tagging and analysing sentiment in social posts. Further, it takes a very skilled insight professional to interpret the digital analysis for actionable business outcomes.
The quality of social media analytics insight lies in the ability to ask the right questions of the data
Big Data Analytics
Because of the need to manage huge sets of data, to query multiple data sources, and to crunch data beyond the scale that can easily be dealt with in mainstream GUI quantitative software, information handling and programming skills are big advantages in big data analysis. Those with experience of the R programming language — a particularly useful language for the statistical analysis of large data sets — are a desirable complement to any insight department responsible for big data analysis.
The tricky bit is the insight professional / big data digital analyst combo. You either need an individual who is both a programmer and insight professional — one who understands the needs of the business, the questions that need to be asked of the data, the insight goal, etc. — or a very close working relationship between your insight professional/s and your big data digital analyst.
In Summary
With the right training, your insight professionals can be taught to operate many of the digital analysis programs discussed above. Such training is undoubtedly a worthwhile investment in terms of both analytical speed and data reach but it is important to note that any digital analysis program is only as good as the person using it.
The most important skills required for a successful insight department are ultimately the same as in the pre-digital age: curiosity, an ability to get inside the minds of both customers and stakeholders, a thorough understanding of the business in order to ask the right questions of the data and the expertise to extract actionable insight and make useful recommendations.
| Why Insight Departments Need to Invest in Digital Analytics Skills | 0 | why-insight-departments-need-to-invest-in-digital-analytics-skills-109f471de617 | 2018-05-29 | 2018-05-29 10:55:45 | https://medium.com/s/story/why-insight-departments-need-to-invest-in-digital-analytics-skills-109f471de617 | false | 1,188 | null | null | null | null | null | null | null | null | null | Data Science | data-science | Data Science | 33,617 | FlexMR | We empower brands to inform every decision at the speed of business by delivering on-demand insight and enterprise grade research technology. | bbd19498d99e | FlexMR | 446 | 914 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-03-13 | 2018-03-13 07:13:50 | 2018-03-13 | 2018-03-13 07:14:35 | 0 | false | en | 2018-03-13 | 2018-03-13 07:14:35 | 1 | 10a002380dca | 0.237736 | 1 | 0 | 0 | What is Data Warehouse? | 3 | Data Warehouse vs Data Mart: Know the Difference
What is Data Warehouse?
A Data Warehouse collects and manages data from varied sources to provide meaningful business insights.
It is a collection of data which is separate from the operational systems and supports the decision making of the company. In Data Warehouse data is stored from a historical perspective.
https://www.guru99.com/data-warehouse-vs-data-mart.html
| Data Warehouse vs Data Mart: Know the Difference | 1 | data-warehouse-vs-data-mart-know-the-difference-10a002380dca | 2018-03-30 | 2018-03-30 21:40:26 | https://medium.com/s/story/data-warehouse-vs-data-mart-know-the-difference-10a002380dca | false | 63 | null | null | null | null | null | null | null | null | null | Data Science | data-science | Data Science | 33,617 | krishma antala | null | 7364732b42fe | krishmaantala.guru99 | 4 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-10-23 | 2017-10-23 22:25:23 | 2017-10-23 | 2017-10-23 22:29:21 | 2 | false | en | 2017-10-23 | 2017-10-23 22:29:21 | 4 | 10a24b444029 | 1.292767 | 3 | 0 | 0 | Remember Java and Oracle DBA boom in 2001 and then C# then Big Data after 10 years. What’s now? | 5 | Blockchain & Banks — No Bitcoin
Remember Java and Oracle DBA boom in 2001 and then C# then Big Data after 10 years. What’s now?
This is it for now…AI & ML → End of world until 2017 & 2018
Then? Very high probability of #blockchain
Blockchain is a distributed database & Bitcoin is cryptocurrency.
Visa B2B Connect | Visa
As a payments technology company, we always are looking for innovative technologies that might have the potential to…usa.visa.com
Mastercard opens access to its blockchain tech | ZDNet
Mastercard has opened its blockchain technology up to developers, allowing financial institutions and merchants on a…www.zdnet.com
Capital One Is Trying to Bring the Blockchain to Health Care
Capital One is teaming up with blockchain startup Gem and a number of other firms to test blockchain tech on health…fortune.com
BlockChain
Technology Blockchain could be truly revolutionary, but there are still questions to answer about the core technology…www.fidelitylabs.com
Blockchain is the technology that cannot be ignored because it will be like ignoring invention of planes or wireless or wifi. Those who will ignore the potential benefits of blockchain will be left behind. The power of decentralized computing makes it easier to avoid fraud or provide accurate inventory number. The future demands are speed, more savings, accuracy and peace of mind. Blockchain technology addresses these demands and actually goes beyond of it because of the underneath technology capabilities.
Good luck!!!
If you like it then please share.
| Blockchain & Banks — No Bitcoin | 3 | blockchain-banks-no-bitcoin-10a24b444029 | 2017-10-24 | 2017-10-24 14:00:59 | https://medium.com/s/story/blockchain-banks-no-bitcoin-10a24b444029 | false | 241 | null | null | null | null | null | null | null | null | null | Blockchain | blockchain | Blockchain | 265,164 | Neeraj Sabharwal | Dad, Author, Blockchain, Quantum Computing, Cloud, Big Data, Learner | 24940f8dcf25 | neerajsabharwal | 89 | 169 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-09-07 | 2018-09-07 20:26:28 | 2018-09-08 | 2018-09-08 00:42:19 | 1 | false | en | 2018-09-08 | 2018-09-08 00:42:19 | 3 | 10a2701f08f3 | 1.698113 | 1 | 0 | 0 | Artiqox (AIQ) isn’t just a coin or token, it is a whole ecosystem that is based on 3 pillars including the crowdfunding of Artificial… | 5 | Artiqox: Bringing AI and Blockchain Technology together…
Artiqox (AIQ) isn’t just a coin or token, it is a whole ecosystem that is based on 3 pillars including the crowdfunding of Artificial Intelligence (AI) startups. The team strives to show that AI and cryptocurrency are both a vital and highly influential part of our future. Artificial Intelligence is already being used by many IT companies to help evolve their products. Artiqox is entirely privately funded by the team and through the sales from their merchandise.
The AIQ team is comprised of trained professionals who specialize in blockchain technology, project planning and core developments. The project started with Proof of Work (PoW) algorithm and at block 42000 they changed to AuxPoW to make merged mining possible, while still maintaining a fairly low max supply of 769 million coins. AIQ however cannot be CPU mined, but mining pools are available or you can rent mining equipment.
This team has been working tirelessly, all the while not rushing the outcome to make a product that will help transition our world into AI, and understanding all the good that can be done with it. With this said, the first big announcement from the AIQ team is the development of “Personal.” Personal is an app that will transform the way your data is used over the internet including such things as social media and banking. This app will be be able to handle over 1 million transactions per second making it the fastest data crawler in the history of information technology, not to mention the largest and safest. Personal will allow you, the user to control which data you want to disclose. If a company requests to use your data from the app, you will receive a remuneration for the transaction. For more detailed information on “Personal” you can visit https://drive.google.com/file/d/1ewBUenhMULBZ8CpDNJa23aYzOML_aEiZ/view
Artiqox is a project that truly is the way of our future with a very supportive community and Dev team. The team is also very active on telegram and are always updating the project as well as answering any questions one may have. AIQ is currently trading on one exchange, FreiExchange, giving investors the opportunity to get into the project early before it shoots to the moon! Check our their website at https://artiqox.com. Please do your own research on AIQ and any other cryptocurrency, as this is by no means financial advice.
| Artiqox: Bringing AI and Blockchain Technology together… | 5 | artiqox-bringing-ai-and-blockchain-technology-together-10a2701f08f3 | 2018-09-08 | 2018-09-08 00:42:19 | https://medium.com/s/story/artiqox-bringing-ai-and-blockchain-technology-together-10a2701f08f3 | false | 397 | null | null | null | null | null | null | null | null | null | Bitcoin | bitcoin | Bitcoin | 141,486 | Vroomcx9 | null | e29a252be28 | cpctables | 10 | 12 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 873dc114f70d | 2017-11-10 | 2017-11-10 14:29:46 | 2017-11-13 | 2017-11-13 15:23:58 | 1 | false | en | 2017-11-13 | 2017-11-13 15:23:58 | 0 | 10a326f084a4 | 1.766038 | 2 | 0 | 0 | There is a book called “Differentiate or Die: Survival in Our Era of Killer Competition” by Jack Trout and Steve Rivkin which try to show… | 3 | Differentiate or Die
There is a book called “Differentiate or Die: Survival in Our Era of Killer Competition” by Jack Trout and Steve Rivkin which try to show you how to differentiate your products, services, and business in order to dominate the competition. Right now we are facing this challenge in our own markets, I believe. All the agencies working hard to present in different ways, sometimes, even the same product or service. And clients find it difficult to decide the best option.
One of my many surprises these days in Toronto has been to discover one of the agencies of the Village, Plastic Mobile. Behind this leading software development firm with focus on mobile apps in the financial, retail and loyalty space, we can find plenty of talented people working to develop new features for our tomorrow. They have an innovation lab where they are testing new products that may not have right now any specific function but that in the future they could help our clients to move forward.
Some of these features they are experimenting with:
Bluetooth Vending Machine: Thanks to an app on your phone and using Bluetooth connection they can digitalize a food vending machine allowing a totally different experience for the consumer.
AI Music Creator: They are trying to make AI to make some new music. After feeding with sample music AI is able to create a kind of musical algorithm.
Visual Recognition Door Key: They create a new lock for doors thanks to a tablet and face recognition technology. Just the ones on the system are allowed to get in.
Coffee Delivery Roomba: They are trying to make a prototype using Roomba device to deliver coffee to each of their desks.
Those projects may not have any utility right now for any client in particular but just think about the potential of these ideas for our near future. This may help Havas to be one step ahead of the competitors and be able to show our future clients what we can do for them. Not just the usual stuff of media and creative which we do right, by the way, but much more than that. We can add some innovative ideas to our proposals which may make our clients/brands more meaningful at the end of the day.
We have this potential. Let’s take advantage of it together!
(I would like to thank Joseph Totera, Technical Strategist at Plastic Mobile for his patience with me explaining all these innovations.)
| Differentiate or Die | 2 | differentiate-or-die-10a326f084a4 | 2017-11-16 | 2017-11-16 10:34:26 | https://medium.com/s/story/differentiate-or-die-10a326f084a4 | false | 415 | A global learning experience designed to give Havas employees the opportunity to see how the organization operates across its many agencies, cities & countries. | null | null | null | havas lofts | null | havas-lofts | ADVERTISING,MEDIA,TRAVEL,LEARNING,EMPLOYEE ENGAGEMENT | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Àngel Molist | Taradellenc afincat a Barcelona i treballant com Account Director a Ecselis (Havas Media). | cb69943b2126 | angelma | 4 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | # Install the latest version from GitHub:
install.packages("devtools")
devtools::install_github("socialcopsdev/rLandsat")
# Load the library
library(rLandsat)
# get all the product IDs for India, alternatively can define path and row
result = landsat_search(min_date = "2018-01-01", max_date = "2018-01-16", country = "India")
# inputting espa creds
espa_creds("yourusername", "yourpassword")
# getting available products
prods = espa_products(result$product_id)
prods = prods$master
# placing an espa order
result_order = espa_order(result$product_id, product = c("sr","sr_ndvi"),
projection = "lonlat",
order_note = "All India Jan 2018")
order_id = result_order$order_details$orderid
# getting order status
durl = espa_status(order_id = order_id, getSize = TRUE)
downurl = durl$order_details
# download; after the order is complete
landsat_download(download_url = downurl$product_dload_url, dest_file = getwd())
| 2 | 8481fb9e8242 | 2018-07-26 | 2018-07-26 14:54:35 | 2018-07-18 | 2018-07-18 16:30:44 | 3 | false | en | 2018-07-26 | 2018-07-26 16:07:35 | 12 | 10a6a432c893 | 4.248113 | 1 | 0 | 0 | Landsat is without a doubt one of the best sources of free satellite data today. Managed by NASA and the United States Geological Survey… | 5 | Announcing rLandsat, an R Package for Landsat 8 Data
Landsat is without a doubt one of the best sources of free satellite data today. Managed by NASA and the United States Geological Survey, the Landsat satellites have been capturing multi-spectral imagery for over 40 years. The latest satellite, Landsat 8, orbits the Earth every 16 days and captures more than 700 satellite images per day across 9 spectral bands and 2 thermal bands. Its imagery has been used for everything from finding drought-prone areas and monitoring coastal erosion to analyzing an area’s fire probability and setting the best routes for electricity lines.
When we first started using Landsat 8 data, we were a bit overwhelmed by the amount of knowledge it took to find and download the images that we wanted. There are different types of data and data products, different APIs to figure out, data requests to be filled, differing data structures… it’s all a bit intimidating!
To make this data more accessible to everyone in our data team, we built an open-source R package (called rLandsat) to handle every step of finding, requesting, and downloading Landsat 8 data. Now we’re excited to release rLandsat to the public to help anyone unlock the mysteries within Landsat 8 data! Check out the rLandsat repository here.
About Landsat 8 data
The Landsat 8 Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) images cover 9 spectral bands and 2 thermal bands with a spatial resolution ranging from 15 to 100 meters.
Table from USGS
USGS gives access to both its raw and processed satellite images. Raw images are available on AWS S3 and Google Cloud Storage, where they can be downloaded immediately. Processed images are available with the EROS Science Processing Architecture (ESPA). Images are also available through a variety of data products, such as SR (Surface Reflectance), TOA (Top of Atmosphere) and BR (Brightness Temperature).
Accessing the processed data can be tricky. There are two different APIs — one by Development Seed for searching (called sat-api) and one by USGS for downloading (called espa-api). Download requests have to include the product ID, row and/or path for the data, then they must be approved by USGS, which can take anywhere from a couple minutes to a couple days. To make matters worse, the APIs input and output data with different structures.
Here are some additional resources you might want to read:
Read about the Landsat Collection (Pre Collection and Collection 1) here.
Watch this video to understand the difference between the data on ESPA and that on AWS S3/Google Cloud Storage, and why using ESPA is preferred over AWS’ Digital Numbers (DN).
Watch how the data is captured here.
Read about over 120 applications of Landsat 8 data here.
Overview of rLandsat
rLandsat is an R package that handles every step of finding and getting Landsat 8 data — no Python or API knowledge needed! It makes it easy to search for Landsat8 product IDs, place an order on USGS-ESPA and download the data along with the meta information in the perfect format from R.
Internally, it uses a combination of sat-api, espa-api and AWS S3 Landsat 8 metadata.
To run any of the functions starting with espa_, you need valid login credentials from ESPA-LSRD and you need to input them in your environment with espa_creds(username, password) for the functions to work properly.
You should also check the demo script (which downloads all the Landsat 8 data for India for January 2018) in the demo folder, or run demo("india_landsat") in R after loading this library.
What can you do on rLandsat?
landsat_search: Get Landsat 8 product IDs for certain time periods and countries (or define your own path and row). This search uses sat-api (developed by DevelopmentSeed, this also gives the download URLs for AWS S3) or the AWS Landsat master meta file, based on your input.
espa_product: For the specified Landsat 8 product IDs, get the products available from ESPA. This uses espa-api.
espa_order: Place an order to get the download links for the specified product IDs and the corresponding products. You can also specify the projection (AEA and Lon/Lat), the resampling method and the file format. This is better than downloading the data from AWS as this gives data from advanced products (like Surface Reflectance), which is necessary for creating most of the indices.
espa_status: Get the status of the order placed using espa_order. If the status is complete, the download URLs for each tile will also be available.
landsat_download: A small function to download multiple URLs using the download.file function. If each band is being downloaded individually from AWS, this function will create a folder (instead of a zip file) for each tile, grouping the bands.
How to install rLandsat
If you find a bug, please file an issue with steps to reproduce it on Github. Please use the same for any feature requests, enhancements or suggestions.
Example
References
sat-api (Development Seed): https://github.com/sat-utils/sat-api
espa-api (USGS-EROS): https://github.com/USGS-EROS/espa-api/
Google Server and AWS Landsat Data: http://krstn.eu/landsat-batch-download-from-google/
rLandsat repository: https://github.com/socialcopsdev/rLandsat
Cheers to open data 😊
Originally published at blog.socialcops.com on July 18, 2018.
| Announcing rLandsat, an R Package for Landsat 8 Data | 50 | announcing-rlandsat-an-r-package-for-landsat-8-data-10a6a432c893 | 2018-07-26 | 2018-07-26 16:07:35 | https://medium.com/s/story/announcing-rlandsat-an-r-package-for-landsat-8-data-10a6a432c893 | false | 980 | We are a diverse team of engineers, data scientists, economists and entrepreneurs sharing our experiences and learnings from building SocialCops - a company on a mission to make our world data intelligent | www.socialcops.com | null | socialcops | null | Inside SocialCops | inside-socialcops | TECHNOLOGY,STARTUP,DATA SCIENCE,ENGINEERING,TECH | social_cops | Satellite Technology | satellite-technology | Satellite Technology | 905 | SocialCops | We empower organizations and leaders across the world to become more data intelligent | WEF Technology Pioneer 2018 | www.socialcops.com | 5df62269ab94 | Social_Cops | 1,271 | 210 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | 7b1a199a1ca7 | 2017-09-15 | 2017-09-15 20:03:54 | 2017-09-15 | 2017-09-15 20:04:17 | 0 | false | en | 2017-12-17 | 2017-12-17 19:12:38 | 3 | 10a71ff41768 | 6.049057 | 4 | 0 | 0 | Invisible at its core is both a service and a technology company. We take technology very seriously without fetishizing it. Technology is… | 5 | Set: 01
Path: Technologies: Henosis: Automata
Friday, 15 September 2017
Automata
Invisible at its core is both a service and a technology company. We take technology very seriously without fetishizing it. Technology is just a tool, one of many we have, to further the goals of our business. It just so happens that it is one of the best tools for what we’re trying to accomplish.
Invisible has been around for nearly two years now, starting humbly with just one product: a consumer app for saving and organizing snippets of text through email. We’ve since shifted most of our tech to be internal-facing. Our software makes our team members more efficient, so that they can better serve our clients. This shift has empowered us to be nimble and iterate rapidly in response to feedback, since our users are (mostly) our team members.
We’ve built, and let die, a lot of tech. The pieces that stuck around are the ones that were battle-tested and actually provided utility that justified their upkeep. The most ambitious piece of tech we’ve built is called Automata.
Why
At Invisible, we do your work for you, so you can do your real work.
Imagine you are hiring for your company and have to schedule a day of onsite interviews for a candidate. The process has the following constraints:
The candidate must speak with the CEO and CTO of the company for one hour.
The candidate must meet with three other people relevant to the position for one hour each.
The candidate will have lunch with the team for 1.5 hours.
All meetings must have a conference room reserved.
Interview times can override recurring meetings like Standups, but these should be minimized.
Once the schedule has been assembled that meets those constraints:
Create the relevant calendar invites and send them out.
Write an email and send it to the Candidate, giving them an overview of their schedule.
This seems like a fairly straightforward, while detailed, process, and a skilled executive assistant could probably manage it in a couple hours. Now, imagine scheduling tens of such interviews, having many instances of this task open at the same time. It’s likely that even the most diligent person will eventually make a mistake, skip a step, or drop the ball in some way.
In our business and personal lives, we do a lot of things every day that could be abstracted out into repeatable processes.
If you were the executive assistant executing this task, your first stab at abstracting it out might be to just make a detailed checklist, and then go through it step by step. This is already a huge improvement over trying to keep the entire process and its constraints in your head, but we can do better.
Computers have some major advantages over humans in that:
They never forget.
They never get bored.
They won’t skip steps.
They have no trouble keeping many things open and separate.
And with AI and Machine Learning and a large enough data set, they can even do classification and prediction tasks.
Computers have some major disadvantages over humans in that:
They are poor at making judgment calls that don’t fit within defined parameters.
They are not (yet) good at creative tasks, or ones that require human empathy.
They don’t always have all the information, and it’s not always possible nor practical to parameterize everything.
If we were to create a perfect system for executing detailed, repeatable tasks, using technology that is available today or in the near future, what would it look like? We would want to maximize the benefits of computers and minimize the costs, while doing the same for human actors that are part of this system.
Hence, we created Automata, a system that abstracts out repeatable processes and executes them, allowing the computers to automate away the steps that can be automated, and coordinate the human beings at key times when their input is needed.
Example
A computer with access to the entire company’s calendar could figure out a potential schedule for a candidate much faster than a human being. But, how do you tell the computer which people at the company are relevant to the candidate? How do you tell the computer which meetings are OK to override? How would a computer incorporate information that is not necessarily stored in the calendar, like that a particular person is (unscheduled) out sick for the day?
You could theoretically try to automate this process 100% by building a more robust calendar system that mapped to an organizational chart and contained every piece of metadata needed to build this schedule. You could create an elaborate decision tree that accounted for every edge-case scenario you could think of. But, this would still require fallible human beings to keep their calendars and org chart 100% up to date, and it would also require us to invest heavily in creating these systems around the calendar software already in use. It would be impractical, as we would have shifted the burden of doing this task to the rest of the company to manually update their data, and it would still invite mistakes.
A better approach might be, let the computer do what it’s good at, and let humans check its work. Empower the computer to suggest potential schedules, reducing the possibility space, and let humans look it over and make changes before sending the final email on behalf of your company.
This would greatly reduce the amount of active time a human has to spend on this process, allowing them to increase their throughput and/or spend their time on other more valuable tasks, while simultaneously reducing the error rate and amount of dropped balls.
How
That sounds nice and all, but how do we do it, really?
Playbooks
Our first challenge was, we needed a way to represent these detailed, repeatable processes in a way that a computer would understand. Even more importantly, we wanted this representation to be accessible to our employees who are not engineers.
We did a quick survey of existing tech in this space, but none fit our needs. So we created our own domain specific language, which we simply call Playbooks.
Playbooks, first implemented just as Google Sheets that follow a specific format, can be as simple as just a list of steps, but can also:
Ask the Client for input at key points and validate this input
Ask the Agent for input
Store and retrieve data in variables
Generate template text with these variables
Make decisions and execute different branches of the process based on this input or other parameters
Execute simple loops
Call other Playbooks (what we like to call Handoffs)
For any programmer, this seems straightforward enough, as you would expect any programming language to be able to do these things. Our key insight was that we had to make it simple enough that non-programmers (the people close to the actual execution of a process) could easily read, write and edit Playbooks.
It doesn’t take long to train someone to write these, and once the initial investment is made, the Playbook can be executed over and over.
Automata
Playbooks themselves are useful in that they force us to think about things step by step. But you still need the computer to actually parse and understand the Playbook, so that it can execute it. Because we do most of our coordination and communication over Slack, we decided that the natural fit for this would be a Slack bot that ties into our backend. We call this bot Automata.
Automata reads, parses, and validates Playbooks, letting the Playbook writer know of any errors or potential infinite loops. It also saves every version of a Playbook, so we can see how it has evolved over time, or rollback to a previous version if needed.
Through a series of commands, Humans tell Automata to start running a given Playbook for a client. Automata will execute each step, asking for input when necessary, or just displaying info back to the Client or the Agent. Automata is smart enough to skip steps that are not needed (it won’t ask questions it has the answer to).
In the future, Automata will get more integrations with existing technology, enabling it to fully automate some steps by just making API calls, making the execution of these processes even more efficient. For example, instead of just telling the Agent to send a calendar invite with the given parameters, it just makes the calendar invite itself and asks the Agent for confirmation.
Automata is also fairly sophisticated in that it can manage many open Instances of a Process at once, seamlessly switching among them when asking the client for input.
Human Agents
Computers can’t do it all. As mentioned, it’s often impossible or impractical to give the computer all of the data it needs to fully execute a task, and things often come up that simply cannot be programmed for ahead of time. Thus, we allow our Agents to pause a process at any time, directly set variables, communicate with the Client if necessary, or escalate the problem to another Agent with more access or context.
This is a key part of the system and I consider the Policies and training of agents an important piece of this technology.
Conclusion
Automata is the most direct expression of our thesis. We put our money where our mouth is and invested a lot of engineering and design time into it. Our next challenge is: how do we efficiently turn processes into Playbooks? More on that later.
Next
| Automata | 86 | automata-10a71ff41768 | 2018-04-06 | 2018-04-06 18:03:55 | https://medium.com/s/story/automata-10a71ff41768 | false | 1,603 | Humans to do the work, technology to coordinate humans. | null | francis.pedraza | null | Invisible: Technologies | invisible-technologies | INVISIBLE APPS,ORGANIZATION,TECHNOLOGY,SYSTEM,SOFTWARE | francispedraza | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Keenahn Tiberius Jung | CTO at Invisible Technologies. Autodidact, ignoramus, raconteur and sandwichianado. 92.7% beast. | 4c1bee89ba74 | keenahn | 208 | 262 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | 32881626c9c9 | 2018-09-20 | 2018-09-20 14:19:05 | 2018-09-20 | 2018-09-20 14:33:35 | 1 | false | en | 2018-10-09 | 2018-10-09 13:58:35 | 4 | 10a72ab851b8 | 1.641509 | 3 | 0 | 0 | Human expertise can’t be matched… for the time being. As emerging technologies become the norm, a multitude of questions are inevitably… | 5 | Insidious humanoid
Are the robots taking over?
Human expertise can’t be matched… for the time being. As emerging technologies become the norm, a multitude of questions are inevitably raised.
Artificial Intelligence (AI) and machine learning are currently buzz words in the financial services sector, but will they benefit human progress or replace humans all together?
Robot advisers
TCC expert Phil Deeks commented on whether robo advice poses a threat to advisers in New Model Adviser. He concluded that crucially robo advisers will not appear overnight and take over, but rather will gradually integrate with financial advisers to assist them in making decisions.
It’s also clear that customers prefer advice from real people, not machines. As AI technology grows and becomes commonplace, will both advisers and end consumers be willing to trust and rely on robo advice?
Leveraging AI
Regtech product Recordsure, an associated company of TCC, provides unique insights into conversations by organising and processing data from a range of sources. With continual development and steering from humans, it’s a good example of how AI can save time and provide valuable insights. However, as FCA Insight commented, AI is currently no match for a 4 year old child! Machines can undertake deep learning (a process where algorithms are used to thoroughly measure potential outcomes and trends from sets of data) to measure areas such as risk and vulnerability of customers very effectively. They cannot however develop hypotheses from viewing the world around them — something which young children do all the time. Without this conceptual understanding of the world, machines will only be able to report objective truths without wider context or appreciation of circumstances… for the time being.
Conclusion
With the necessary steer and conceptual understanding from humans, the development of AI should be welcomed as a partnering hand in accomplishing our aims. That said, as our efforts to make machines more intelligent progresses, we must keep a close eye on their activity and be careful not to let them take the driving seat.
The plug…
At TCC we’ve seen the power that AI and machine learning have in streamlining and speeding up human processes. We’ve embedded the technology that we’ve developed into our strategy, so our expert team can complete the work they do to a greater degree, and with increased efficiency.
| Are the robots taking over? | 7 | are-the-robots-taking-over-10a72ab851b8 | 2018-10-09 | 2018-10-09 17:56:16 | https://medium.com/s/story/are-the-robots-taking-over-10a72ab851b8 | false | 382 | Data Driven Investor (DDI) brings you various news and op-ed pieces in the areas of technologies, finance, and society. We are dedicated to relentlessly covering tech topics, their anomalies and controversies, and reviewing all things fascinating and worth knowing. | null | datadriveninvestor | null | Data Driven Investor | datadriveninvestor | CRYPTOCURRENCY,ARTIFICIAL INTELLIGENCE,BLOCKCHAIN,FINANCE AND BANKING,TECHNOLOGY | dd_invest | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Fergus Mckie | Marketing monkey in the world of financial services, Christian, foodie, music, the outdoors, things with wheels.. | 9df889f1318 | fergusmachine | 40 | 65 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | a9a766ab4197 | 2017-12-14 | 2017-12-14 22:52:59 | 2017-12-14 | 2017-12-14 23:12:15 | 3 | false | en | 2017-12-21 | 2017-12-21 03:46:11 | 33 | 10a7a3278dfd | 19.217925 | 0 | 0 | 0 | This is the latest installment in our occasional series of interviews with Earlybird technologists on a variety of issues relevant to our… | 5 | Chatting With Data Scientist Amanda Dobbyn About Analyzing Beer Styles
Earlybird data scientist Amanda Dobbyn presenting at a recent R Ladies Chicago meetup.
This is the latest installment in our occasional series of interviews with Earlybird technologists on a variety of issues relevant to our work. Have any questions or ideas for topics you’d like to see us cover? Drop us a note at [email protected].
Eddie VanBogaert, Partner: Alright! We’re here with Amanda Dobbyn, a data scientist here at Earlybird, and before we dig into one of your latest projects, let’s start with your role here at Earlybird. What did you do before joining us, and what’s a typical day or week in the life like?
Amanda Dobbyn, Data Scientist: Sure. So I came to Earlybird from UChicago where I did statistical analyses on experimental data in a cognitive neuroscience lab. There I had the chance to get steeped in cognitive psych and neuroscience literature — lots of cultural relativity, linguistic relativity, experiments testing how the language you speak influences the way you think — that sort of interesting academic stuff. Through this, I was able to get a solid foundation in responsible statistics: I did a lot of checking assumptions, making plots of residuals, measuring skew, that kind of thing. And you know your methods are going to be scrutinized by other academics in the journal space, so you better make sure you’re doing it right and anticipating criticism of your process and biases. If you’re going to [take the natural logarithm of] your data you’d better be able to back up why that was the right choice.
Anyway, academia was great — met tons of super smart and passionate people who are still my good friends — but it moved a little slowly for me, and I wanted to do something in quicker iteration cycles and have a more practical impact. And that’s when I found Earlybird. My last year and a half or so here has been an interesting mix of data science, some project management under [Earlybird co-founder Vlad Jornitski], and generally learning a lot about the tech industry and the data software tools being used in the private sector.
The data science that I do tends to be a good bit of hypothesis testing — both our clients’ hypotheses about their data and hypotheses we generate ourselves — as well as more exploratory stuff generally starting with visualizations and then branching off into things like network analysis and predictive modeling. Modeling, for instance, can help us expect when a certain customer might return to a service department or retail location, based on their past history, known demographic details, and even factors like what day of the month it is and other market trends. I generally like joining on third party data like that as a control to give us indications of how much of the effect we see is actually attributable to business decisions our clients have made, and how much is probably a result of external factors beyond their control.
We’re generally allowed good degree of creativity, a wide berth to suss out the parts of the data that seem odd or don’t make sense, and dive into them to see if it’s a situation where our insights can add some value. So I’ll do that type of end-to-end analysis, polish it up, and then present what I’ve found to our clients along with our prescriptive recommendations for how the client can use the data to inform their decisions in a particular functional area.
As for my day-to-day? Well, it always starts with morning stand-ups along with the rest of the crew where I wear my PM hat and get a sense of where everyone is, what they’re blocked on, where the bugs are, what’s the most pressing, what might be lower priority. Then, depending on the day, there’s a smattering of meetings mixed in with bursts of coding where I fit in work on client-facing data projects and sometimes analyses internal to our production operations here at Earlybird. During these I’m coding or bouncing ideas off other members of the team. In my PM role, at different stages of different projects, I might be in the conference room with a certain project’s team, whiteboarding out a kink in our business logic and brainstorming the best possible workarounds or fixes for it. Other times I’ll be talking to our clients to get a clearer sense of their vision for a certain feature, or checking people’s work in reviews and making sure things look good functionally, ensuring we’re on-track to put together a product that we’re going to be proud of.
Eddie: Cool. So, beer — beer is a different story… we have beer here sometimes. And you shared a project recently — a personal project, not an Earlybird project — with the R Ladies group here in Chicago as part of their Oktoberfest meetup. Tell me about that. What were your overarching questions, or what were you looking to discover?
Amanda: So, funny enough, this did actually start in the office. Our good friend [senior developer Kris Kroski] and I were having a talk after work one day — may or may not have been over some beers — and Kris shared a beer app he’s been building as a side project.
I’ll spare you the exact details, but a lot of what the app is trying to do is identify people’s ideal flavor palettes based on their ratings of different types of beer. As you know, there are a lot of different styles of beer, and we were discussing how you’d want to go about objectively measuring the intensity of flavors in a way that makes sense across these different styles. This is sort of an interesting problem because, for example, what might be quite hoppy for a wheat beer might be considered average or even low hoppiness for an IPA (India Pale Ale) — that sort of thing. So you want to set up some sort of [Bayesian] prior that’ll tell you what distribution of hoppiness you should expect from a wheat beer and that’ll be different from the distribution among IPAs.
That discussion led me to ask a bit of a different question: do styles meaningfully define true flavor boundaries in beer? A reason we might think they don’t is that a lot styles seem to emerge as an accident of circumstance or history, right — take the impact of German purity laws, for instance. So is the labeled style of a beer actually a useful construct for understanding what you’re about to drink, or is it a bit more random than that? And somewhat secondarily, I was also interested in more fully discovering what the craft brewing space looks like today, and seeing what patterns or trends I might be able to find in the data.
Kris showed me how he had access to some beer data through an online beer website called BreweryDB. It has a public API (application programming interface) and so all you have to do is create an API key and you can request their data back as JSON. So, instead of just stealing Kroski’s data, I set about writing a few scripts that would get me all the data I was interested in, built up my own MySQL database locally, and then just dove in.
I’ve been into beer for the past few years, home-brewed a bit — brewed some of President Obama’s White House Honey Ale with an old [ultimate frisbee] coach — and so I’ve been interested in how beer is brewed, the chemistry of it (at a very high level), and what makes various styles of beer taste so different. So, again, the main crux of my analysis was to see, across this wide range, whether styles do a good job of defining and classifying different kinds of beer.
Eddie: That’s great. Tell us how you started your analysis.
Amanda: So first I wanted to do a bit of factor-level reduction on my outcome variable to condense all these styles that had slightly different names but really were under the umbrella of a broader style, grouping them all into that main style heading. I did that by defining those broader categories — and you can certainly take issue with how I chose to define them, and please do — and then I looked for styles that contained the name of that category within them and lumped them under the broader heading. There was a little more nuance than that so I was able retain the difference between, for example, a Double India Pale Lager and an India Pale Lager, but that was a fun bit of string munging acrobatics.
Then, when I started trying to answer the main question, the first way I went at it was by using what’s called unsupervised clustering. I often like to begin with clustering because it lets you look at the data without biasing yourself by labeling it with the thing that that you’re interested in. So stepping back for a sec — what an unsupervised machine learning technique will do is approach a problem agnostic to what you’re looking to find, which in my case was the relationship between style and the different measurable dimensions of beer available through BreweryDB: ABV (alcohol by volume), IBUs (international bitterness units), and SRM (standard reference method), a standardized color system used by brewers. Then, once you cluster your datapoints using a set of input variables, you can see how those clusters map onto the outcome variable you’re interested. If they line up well, then you’re more inclined to believe that there is a meaningful relationship to be defined between your input variables and your response variable.
I should mention that the predictor variables I chose were a deliberate subset of the possible variables I could have used. This is actually the type of thing I have to think about at work quite a bit, and it really requires logic and reasoning more than anything else. Reason being that, instead of always blindly throwing all possible features into a model, it’s better to think things through and say, okay, what really belongs in this model and what doesn’t and why? If you’re not thorough, your model can be, at best, hard to interpret, and, at worst, just wrong or misleading.
So in the beer case, I came up with a way to reason about which variables were good candidates for being predictors (and which weren’t) by thinking up some heuristics for classifying variables as either beer inputs, outputs, or style-defined attributes. I consider outputs the best, and I’ll try to give some reasons why — I define inputs as things a brewer can directly control, like the temperature a beer is brewed at or the amount of a certain malt that’s added. An output would be something that a brewer can’t directly touch or affect, like ABV or IBU. Once the beer’s brewed, it’s brewed, and you can’t bump up the ABV of a beer without doing something it, something that no young beer should have to go through, like adding vodka or whatever. I considered outputs better candidates for these models because they’re what we as the drinkers interface with. If you’re a brewer and you’ve brewed something of indeterminate style and you’re trying to put a label on it, these are the cues that you use.
Inputs are problematic because you’ve got a chicken-and-egg problem of which direction the causality goes in — a brewer might have a certain style in mind and adjust the ingredients they put into the beer or the conditions under which it’s brewed such that it becomes that style. So if they set out to brew a kolsch, they’ll probably use kolschy ingredients and processes and then if, bam, it turns out more like a wheat beer they might still call it a kolsch anyway. I could be convinced that inputs are good things to consider (and in fact I ran a few models that did incorporate them), but what clearly is not a good predictor variable is a style-defined variable, such as glassware or serving temperature. Glassware is entirely dependent on style. You serve x-beer in y-vessel because that stein or tulip glass is prescribed for that style. It wouldn’t be useful to use it as a predictor because a glass is perfectly correlated with the beers that are served in that glass.
Another thing to note here is that these input features are all on different scales: IBU runs between about 0 and 120, ABV, for beers, is typically anywhere from 2.5% or 3.0% to 20.0% on the high end, and the color scale runs 0 to 40. So you’d typically standardize them to the same scale in order to make sure that the ones on the larger scales don’t have an inordinately large impact as compared to the ones on smaller scales.
With k-means clustering, the type of algorithm I was using, you provide a certain number of clusters that you want data segmented into, and the algorithm will sort each data point into the a certain cluster. So it’ll go through iteratively and say, okay, this data point, does it minimize the distance from here to the center of the cluster we’ve assigned it to? If not, okay, let’s put it into a different cluster. Those clusters can be of different sizes and get presented in multidimensional space, where I eventually added total number of hops and total number of added malts as inputs as well, to try and get some better differentiation.
So that’s where I started. I try to usually start my analyses in this sort of way, unless there’s a highly specific questions we’re trying to answer, and we have a strong reason for going straight into a supervised learning model. But clustering, in this case, was a good way for me to see the overall landscape at the start.
However, the limitation I ran into was that I was generating one graph at a time, on either the entire dataset the dataset filtered to a single cluster or a single style. What I was interested in looking for in the filtered case was how homogenous each style was when we look at the clusters the beers in that style were assigned to, and, on the flip side, when we look at a single cluster, whether the majority of the beers in that cluster come from a certain style. That’s a lot of graphs to be generating for 40-odd styles when you want to poke around. So I ended up building a small app that allows the user to choose to display certain things on the fly and the algorithm will rerun as necessary. You can vary the number of clusters, filter to certain styles, add and remove input variables, and see a bit of the underlying data.
Eddie: And you used Shiny to build the app, yes?
Amanda: Exactly. RStudio developed Shiny as a way for data scientists to better be able to play with their data without having to learn D3 [JavaScript library] and other heavier or more complex software development tools. They make it pretty easy to host your app on their servers, configure the type of instance you need, monitor its usage, all that good stuff.
So yeah, after the clustering approach where I was messing around to see whether, in looking at a single style, most of the data points in that style map up to one single cluster or whether it’s more spread across the board. Then, from there, I took a more supervised learning approach. The question there was a little different: can we accurately classify a given beer into its correct style given these predictor variables? Now, granted, this was being done without the sorts of variables that Kris is trying to gather with his consumer-facing app, such as ratings and flavor profiles; we’re relying on a small subset of possible variables, and of course working without the benefit of human feedback.
Eddie: Great point. So what’d you do next?
Amanda: The next thing I did was move into supervised learning. This is what a lot of machine learning focuses on, looking at the question of whether we can classify new data based on the data that came before it. You train a model on a bunch of data, and then you attempt to classify on top of that foundation.
I used a couple different methods. One was a random forest algorithm, which is essentially a collection of decision trees that all vote on a classification together. You might think that a decision tree by itself would be a decent way to classify data, like a basic taxonomy that groups high IBUs here or low ABVs there, moving all the way down the tree until you’ve got every possible style, every outcome variable. But using a single tree tends to overfit the data. Of course that’s one of the major problems in machine learning — overfitting the data you have — since you don’t know when there might be a flock of Black Swans you didn’t see coming in the next batch of data.
What a random forest does is intentionally inject randomness into an algorithm. You take a bunch of decision trees and train them only on some subset of the data, and sometimes using just a subset of the predictor variables. Naturally, working on a subset, they don’t train as closely to the entire population, but that’s actually a good thing, because it avoids overfitting the whole of your data.
Eddie: Can you expand a little more on why that’s so crucial? Maybe give an example that explains why overfitting a model is such an issue?
Amanda: Oh, yeah, sure — a model that’s overfit is one that relies too closely on the training data that you’ve given it, and that could bias its outcomes when you give it new testing data. This is a problem when the distribution of a variable in our sample set doesn’t accurately reflect the corresponding distribution in the full population. Generally we expect it to, if we’ve actually got a random sample, and if we take many random samples and we’re just interested in the population mean for instance, we can expect over repeated samples we’ll arrive at something close to the that thanks to the Central Limit Theorem. But, other times our sample is biased in some way that we don’t know about.
So let’s say we’re looking at the population of the United States, and for whatever reason, we accidentally only surveyed people in California. We train a model on only Californians trying to predict, I don’t know, say number of words spoken per minute — like how quickly people talk. We’ve got our model predicting how fast someone will talk given their age, race, level of schooling, favorite ice cream, whatever. But then if we test the model on people from New York, they’re going to be fundamentally different in some key ways from people in California. We’ll probably predict that they speak way slower than they actually do because we’ve only trained on these laid-back West Coasters.
Similar principle here. If we’re training only a single tree on say 80% of our data and testing on 20%, if we train too closely to that 80% then we run the risk of overfitting to whatever nuances showed up in that 80% and then when it comes time to test on the 20% we’re actually not able to classify that accurately. Same principle in the broader sense, so with our sample of data, these beers — and there must be more beers out there, I only have about 60,000 of them in this set…
Eddie: That’s a lot of beers!
Amanda: It is, but it’s still small data, and you don’t want to overfit a single tree to those 60,000 or so because it might not be representative of the total population of beers out there. So that’s where a random forest helps. And basically, at the end, each decision tree will classify given a beer into a single outcome, a single style, and then they’ll all vote. So suppose we have seven trees and four of them say it’s a stout and three of them say it’s a porter, then okay, we’ll label it a stout — that kind of deal, but not always that closely divided.
While we’re talking accuracy, I’d tend to get around 40% accuracy, so meaning that, when presented with our input variables for an unknown beer, the algorithm was able to correctly identify its labeled style a little less than half the time. But remember the premise, right? I wasn’t necessarily expecting that styles naturally demarcated the beer landscape well, so high accuracy wasn’t the goal like it normally is. Instead, the classification model was a test to see whether the variables that we had were enough, in and of themselves, to be a strong indicator of whether a beer was one style or another. A high degree of accuracy and we would have been able to be like yes, style does seem to be a useful construct in separating beers. But a low accuracy measure doesn’t necessarily mean that the beer landscape is just a mush of different styles; it could be that we just don’t have enough of the right features in the model. Which I think is a very real possibility.
So, yeah, haha, on its face the accuracy measure seems not great, but random chance would’ve been like 3%, so not the worst either. And that’s without doing a ton of hyperparameter tuning or anything — this wasn’t for work…
Eddie: Haha, yeah, we’ll wait until Revolution calls us up…
Amanda: Ah, yes, the dream! Anyway, I was pretty happy with that, given only alcohol, bitterness, and color. And with a lot of these, I think an educated consumer would be able to distinguish them if they tasted them, but only because maybe there’s a wheaty flavor or carbonation that obviously wasn’t part of our training data.
Eddie: Did you try any other methods?
Amanda: Yeah, so, after that, I decided, for contrast, to run a small neural net to essentially do the exact same task from a different perspective. Neural nets use a totally different architecture and are based very loosely on how neurons fire and wire in the human brain. In an artificial neural net you can generate a classification for a given input — and you can do a regression as well, but we were interested in classifying here — so small neural net, one hidden layer, really simple, just kind of dipping my toes into another distinct approach.
Eddie: And did that work better or worse than the random forest?
Amanda: It actually worked about the same — around 40% — which is, to the point, either a sign that we don’t have all the right variables or that the “lines” separating styles are fairly blurry. This sort of problem is well-suited to random forests, which are usually pretty good at classification when applied properly. In this case, yeah, maybe I could broaden the input variables or tune the models better, but perhaps it starts to suggest that the common styles don’t do as good of a job describing beers as everyone thinks.
Eddie: That’s really interesting. Was this what surprised you the most? Or was there something else in your findings that really stood out?
Amanda: Hmm… good question…
Eddie: Maybe you weren’t surprised by anything?
Amanda: So, this wasn’t an experimental finding, per se, but I was a little surprised by how dirty the data was. And that’s no knock on BreweryDB — their documentation was really great and they’re a community-run operation — but it’s a reminder that even well-curated databases can have data cleanliness issues. There was a good amount of beer names that, say, started with quotes and jumped to the top of the alphabetical list, that kind of thing.
I did do a little foray into hops to see whether if you added more hops — not by volume, but the total number of distinct varieties — the bitterness of the beer increases. If you think about it there’s no a priori reason to think that there’d necessarily be a causal relationship there because you could add a bucket of a single type of hops to one beer versus adding a few teaspoons each of five different types of hops to a second beer and the first beer with the bucket of a single type of hops would definitely be more bitter. But, still, not too surprising there was a significant trend positive: the more types of hops you had, the more bitter the beer tended to be on average, and that had some style implications.
Even though most of the findings weren’t too surprising, I still figured the project was worth showing to some people who might be interested in seeing the approach, or interested in contributing. So when asked, I drafted everything up (using an R markdown format that lets you combine codes and slides pretty seamlessly and plus is easier to version control than something like PowerPoint) and presented the findings at RLadies Chicago. I was really excited to see how people engaged with the project and tested my assumptions, methods, code, all that — on the spot. It was a great night and part of the reason why I’ve become a co-organizer of the group. I love the collaborative and insightful energy these women bring to data and programming problems, and the organization is a great space where those ideas are heard and valued. It’s taught me a ton already.
Eddie: That’s awesome. I was really glad to hear you did that.
Amanda hosting a recent Lunch-and-Learn presentation on behavioral economics at Earlybird.
Eddie: Switching gears a little — you mentioned data cleanliness being a factor — what would you tell a prospective client about using this sort of data or public datasets in general? Are cleanliness and availability still leading factors in businesses getting the most from data science efforts? Companies that are new to using data to solve problems — where are they stumbling?
Amanda: It really depends company to company. In my experience, newer companies tend to have cleaner data, at least for their internal operations, because it’s not so legacy and not cobbled together from what used to be flat files or converted from antiquated databases. But I think data like anything tends toward entropy. Unless you work to maintain a dataset, keeping up with changes and keeping up strong documentation, data tends toward dirtiness.
So I guess the advice I’d have for businesses — other than call us, haha — is to find any way possible to get around collecting or inputting data by hand. Humans are human, and if you must use a human-facing form, it’s better to use a dropdown or autocomplete — something with a set number of choices that’s stored as an enum in the database. That’s almost always better than having to go back later and do some NLP (natural language processing) stuff on any free text that’s been entered. Of course, there are plenty of times when that’s unavoidable, and we’re definitely not against using NLP around here…
Eddie: Those tools have gotten a lot better.
Amanda: Absolutely. But yeah, collecting fewer things manually. Nothing super advanced or overly technical, just a matter of keeping operational data as clean and useful as possible.
And that’s not to say that I don’t like cleaning up data. They say that 80% of a data scientist’s day is spent cleaning and munging data, and they’re not wrong. If anything it can be more. But in my book that’s the fun part. You’re wrangling chaos into neat tabular format of values that can be meaningfully compared to each other to uncover real relationships between things you’re interested in.
Eddie: Cool. That’s great. So let’s wrap it up here with some quick final questions: favorite kind of beer, least favorite kind of beer, and most overrated beer?
Amanda: Haha — least favorite and most overrated might be the same… I’m going to take the opportunity to knock sours here…
Eddie: Oh no!
Amanda: Yep. I’ll just say it: sours are barely beer. We’re talking like bastardized cider here. Now, I’m here for the ciders, but I’m not here for the sours. They’ll ruin your palette for the night, and I’m sure there’s a good one out there somewhere, but I certainly haven’t met it yet.
As for a favorite? Hmm… I started off being a real fan of wheat beers — especially German hefeweizens, you gotta love a good witbier — “vit” beer if you’re Belgian…
Eddie: And I am…
Amanda: Ha! Of course. But now, I’m on the IPA train and can’t seem to get off…
Eddie: Haha, nothing wrong with that. Anyway, thanks, Amanda, for sitting down and sharing some knowledge with us. We’ll include some links to your GitHub repository and presentation for the project, the Shiny app you mentioned, and encourage anyone reading to drop us a note at [email protected] if they have any questions, suggestions, or sour-loving hate mail.
Amanda Dobbyn is a data scientist at Earlybird and one of our project management leads. She holds a degree in psychology from the University of Chicago, where she was previously a research fellow in the Experience and Cognition Lab.
Transcript has been lightly edited for accuracy and readability.
| Chatting With Data Scientist Amanda Dobbyn About Analyzing Beer Styles | 0 | chatting-with-data-scientist-amanda-dobbyn-about-analyzing-beer-styles-10a7a3278dfd | 2018-04-19 | 2018-04-19 03:44:14 | https://medium.com/s/story/chatting-with-data-scientist-amanda-dobbyn-about-analyzing-beer-styles-10a7a3278dfd | false | 4,947 | Official company blog of Earlybird Software | null | earlybirdsoftware | null | The Earlyblog | earlybird-software | INFORMATION TECHNOLOGY,SOFTWARE DEVELOPMENT,BUSINESS INTELLIGENCE,SOFTWARE,DATA | earlybirddev | Machine Learning | machine-learning | Machine Learning | 51,320 | Earlybird | Chicago-based developer of custom cloud and mobile software, emphasizing solutions for smarter business operations. Learn more at earlybird.co. | eaccbc0234ec | Earlybird | 222 | 479 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2017-09-22 | 2017-09-22 08:44:13 | 2017-10-06 | 2017-10-06 07:05:01 | 2 | false | en | 2017-10-06 | 2017-10-06 13:27:27 | 4 | 10a7f4202445 | 1.968239 | 3 | 0 | 0 | Five Reasons why a marketer should be investing in actionable insights | 5 | Get, Set and Go: Actionable Insights
Five Reasons why a marketer should be investing in actionable insights
Time for Actionable Insights
A question that relentlessly persists in the minds of consumer marketing leaders across the globe — “Am I getting any business value out of the investments made in consumer intelligence tools?”
New technologies, whether in the area of social listening, deep individual profiling or audience intelligence, are collecting more data than ever before. Yet many marketers struggle for better ways to obtain value from their captive data, social data, as well as curated research initiatives. The significance of value extraction from various data sources is proven and so is the momentousness of using tools that provide not just insights but actionable insights using raw organic real-time data such as social data.
Here are five reasons why a marketer should invest in actionable insights:
Past buying behavior is a decent indicator of a potential customer but knowing what is happening real-time is essential for better marketing decisions as well as ‘Go to Market’ strategy for new products. An important analytics goal for the savvy marketer is to go beyond knowing what has happened to the best assessment of what will happen in the future. Frrole Scout does just that by providing real-time insights to marketers on personality traits, sentiments, mood, influencer intelligence, interests, affinities, and a 360 degree profiling of potential customers.
As per a survey done by MIT Sloan, top-performing organizations use analytics in critical business and marketing decisions five times more than the lower performers.
Understanding how consumers feel about a brand or the portfolio of brands by tracking the nuances in their social conversations helps marketers in uncovering newer strategies for customer retention and predicting brand and product health.
Predictive intelligence makes it easier to identify new growth opportunities and stay competitive at the same time. You may want to check out this white-paper on how to use social data for actionable insights.
Ability to boost campaign performances real-time rather than in hindsight when the campaign budget is already spent.
Audience Analysis by Frrole Scout
It is indeed time for marketers to be deeply customer aware and not just brand or campaign aware. Knowing what motivates a targeted customer is far more valuable than the number of campaign likes or shares on various social media channels. Actionable insights are seeds for future business gains and a worthwhile investment!
Amisha Sethi
Vice President of Global Marketing and Author of the National Bestseller
“It Doesn’t Hurt To Be Nice”
Frrole Inc.
[email protected]
| Get, Set and Go: Actionable Insights | 45 | get-set-and-go-actionable-insights-10a7f4202445 | 2018-05-09 | 2018-05-09 09:31:59 | https://medium.com/s/story/get-set-and-go-actionable-insights-10a7f4202445 | false | 420 | null | null | null | null | null | null | null | null | null | Marketing | marketing | Marketing | 170,910 | Amisha Sethi | Global Marketing Leader, an Author and a soul fitness enthusiast | 281f2212fb85 | amisha80.sethi | 8 | 3 | 20,181,104 | null | null | null | null | null | null |
0 | load the libraries
library(data.table)
library(dtw)
library(ggplot2)
load data & model
data_df <- read.csv(“<data_path>”, header=T)
model_df <- read.csv(“<model_path>”, header=T)
Setting dataframes
setDT(model_df)
setDT(data_df)
dtw function
ComputeDTW <- function(model_dt_row){
curr_var_dt <- data_df[, c(model_dt_row$INDEPVAR1[1],
model_dt_row$INDEPVAR2[1],
model_dt_row$INDEPVAR3[1]), with=FALSE]
curr_var_mat <- data.matrix(curr_var_dt)
coeff_mat <- matrix(data=c(model_dt_row$INDEPVAR1_BETA[1],
model_dt_row$INDEPVAR2_BETA[1],
model_dt_row$INDEPVAR3_BETA[1]), nrow=3, ncol=1)
logodd_predictions <- curr_var_mat %*% coeff_mat + model_dt_row$INTERCEPT[1]
pd_predictions <- 1/(1+exp(-logodd_predictions[,1]))
curr_dtw <- dtw(pd_predictions, data_df$PD)
dtw_dist <- curr_dtw$distance
return(dtw_dist)
}
model dtw
model_dtw <- model_df[, list(dtw_dist = ComputeDTW(.SD)), by=(MODEL_NUM)]
model_dtw <- model_dtw[order(dtw_dist)]
plots function
PlotTrajectories <- function(model_dt_row){
curr_var_dt <- data_df[, c(model_dt_row$INDEPVAR1[1],
model_dt_row$INDEPVAR2[1],
model_dt_row$INDEPVAR3[1]), with=FALSE]
curr_var_mat <- data.matrix(curr_var_dt)
coeff_mat <- matrix(data=c(model_dt_row$INDEPVAR1_BETA[1],
model_dt_row$INDEPVAR2_BETA[1],
model_dt_row$INDEPVAR3_BETA[1]), nrow=3, ncol=1)
logodd_predictions <- curr_var_mat %*% coeff_mat + model_dt_row$INTERCEPT[1]
pd_predictions <- 1/(1+exp(-logodd_predictions[,1]))
pd_dt <- data.table(date = data_df$DATE, pd= data_df$PD, pd_pred = pd_predictions)
p <- ggplot(pd_dt) + geom_line(aes(date, pd, color=’red’)) +
geom_line(aes(date, pd_pred, color=’green’)) +
scale_color_identity(guide=’legend’, labels=c(‘Predicted_PD’, ‘PD’))
p
return(p)
}
plot trajectory
PlotTrajectories(model_df[MODEL_NUM == model_dtw$MODEL_NUM[1]])
| 15 | null | 2018-08-10 | 2018-08-10 05:18:04 | 2018-08-10 | 2018-08-10 05:42:42 | 0 | false | en | 2018-08-10 | 2018-08-10 05:42:42 | 1 | 10ab8dea5448 | 0.584906 | 2 | 0 | 0 | null | 1 | Dynamic Time Warping (matching two sequence of feature vectors {response -’y’} using R)
| Dynamic Time Warping (matching two sequence of feature vectors {response -’y’} using R) | 2 | dynamic-time-warping-matching-two-sequence-of-feature-vectors-response-y-using-r-10ab8dea5448 | 2018-08-10 | 2018-08-10 05:42:43 | https://medium.com/s/story/dynamic-time-warping-matching-two-sequence-of-feature-vectors-response-y-using-r-10ab8dea5448 | false | 155 | null | null | null | null | null | null | null | null | null | Data Science | data-science | Data Science | 33,617 | Aditi Tiwari | "I believe that everything under the sun can be viewed as a matrix element. Relations define equations and equations forms matrix." | e235ac409d7b | aditi.jec31 | 3 | 2 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 93c8eb6cb539 | 2018-04-24 | 2018-04-24 16:26:00 | 2018-04-24 | 2018-04-24 16:53:07 | 2 | false | en | 2018-05-03 | 2018-05-03 11:53:06 | 12 | 10ac2db4ebdd | 6.594654 | 5 | 0 | 0 | Steven Spielberg’s ‘Ready Player One’ succeeds at depicting a virtual immersive world on the big screen. | 5 | Spielberg’s ‘Ready Player One’ reminds us that the human at the center of Virtual Reality is key to its success as an industry
Steven Spielberg’s ‘Ready Player One’ succeeds at depicting a virtual immersive world on the big screen. However, an article published this month in the Financial Times, ‘Steven Spielberg’s ‘Ready Player One’ fails to lift virtual reality’, makes the case that the film has not been the breakthrough moment that the virtual reality (VR) industry needs to become more mainstream.
In the article, the FT’s Silicon Valley reporter Tim Bradshaw discusses the current state of the VR industry and, among other things, proposes that ‘Ready Player One’ falls short of expectations with respect to increasing public awareness and interest in VR.
Warner Bros. Pictures
This is worth contemplating, especially since the film itself may be the closest reference point for those that have little-to-no first-hand knowledge of VR. But is it really a failure in this regard? Viewed from a different perspective, the movie advocates for the humans at the center of this nascent technology — not necessarily for the technology itself.
The film is not intended to be a VR sales tool — it’s a positive, imaginative tale about the core of what every good VR experience should be: the person at the center of that experience and their emotions, hopes and dreams. In that way, ‘Ready Player One’ may successfully make the point more effectively than any other VR film before it — a point that is so often missed by developers and pundits in the space: that people must come FIRST.
Which brings us back to the industry itself.
VR has been, to a degree, a victim of over-inflated expectations and mistaken associations with failed technologies like 3D TV. Clickbait headlines, both positive and negative have been used by marketers and journalists in favor of generating traffic, rather than providing thoughtful analysis.
Patience and perspective have been missing in the response to VR so far, and that along with slow adoption has temporarily damaged its reputation as a viable entertainment choice for mainstream consumers. To be sure, the state of user experience with the setup and use of expensive PC-VR hardware is barbaric, compared to where it will be in a few years.
Even some early proponents of VR gaming are throwing in the towel. The developer of the space simulator game Eve Online, CCP games, recently announced they would be reducing their investment in VR development for the time being, due to lackluster market growth.
Everything just feels like a disaster, doesn’t it? So what’s happening? Nothing new, as it turns out.
The VR technology industry is progressing through cycles of emotional and technical development as it has so many times before:
“This is going to change the world. Everything is different.” Early experimentation with VR technology births unique “developer demos” which are little more than science fair exhibits.
“This is going to take the world by storm. Let’s cash in!” Taking old thinking (in this case, from movies/tv and video games) and attempting to leap ahead of the impending adoption wave by pushing commercial titles out the door that are ill conceived and fail to consider and capitalize upon the true benefits of the new technology.
“They’re just growing pains. Let’s tie our product to existing brands to speed up adoption.” When the industry doesn’t grow as fast as anticipated, turn to brand recognition to differentiate product. Many independent studios find themselves in the position of becoming service organizations, creating “experiences” for larger entertainment companies to keep the lights on. Numerous flashy expensive projects, but none that really capture the long-term VR user.
“We were too early.” Early creators like CCP exit, returning to what they know well.
“Bright light! Squirrel!” Some studios turn their back on VR, believing that they may have a brighter future in AR. Surely, the money will be there, they think.
“VR is dead.” The obligatory “I told you so” and “we’ve been here before” as experts weigh in on the death of VR.
If all this sounds familiar, it is because it is. We’ve been here before with smartphones (prior to the iPhone) and even the internet itself.
These early failures are actually positive events for the industry.
In CCP’s case, their early foray into the industry (a space combat simulator called Eve: Valkyrie) resulted in a spectacular entry from a graphical perspective. However, it could be argued that it also flew in the face of the qualities that make make VR so special and unique. This was about the hunt, explosions and chase. It was about engaging the primitive areas of our brain that focus on pursuit — ironically, those areas that cause us to laser focus on an objective and lose the sense of presence and immersion that make VR so special. It’s one of the key reasons players are at once amazed, then bored by many VR games.
Most titles are not designed for VR and what makes it special. This is a lesson the industry is only beginning to understand. The first wave of fast-to-market VR is dead, having built on old paradigms.
Regardless of this type of negativity surrounding VR, there has been an increase of interest in VR, as of late, thanks to portrayals in mainstream media. The popularity of the Rick and Morty adult cartoon series and the subsequent VR game release on console saw virtual reality trending rapidly around the net. There is indication here that people are still very interested in this technology and it is not a ‘flash in the pan’ novelty as some assume it to be. It’s simply evolving and its supporters remain optimistic and hopeful.
Why? Throughout both the positive and negative reception, one fundamental idea remains true about virtual reality: it captures the imagination.
This is a quality that Spielberg’s ‘Ready Player One’ (based upon Ernest Cline’s novel) communicated very effectively. VR is at once more magical and more powerful than any other medium we have yet seen. Our software and hardware simply has to catch up to the opportunity it presents. More importantly, developers must also force themselves to abandon many old notions of game design and gameplay in favor of those that truly work in VR.
Once they do, VR will begin to experience its tipping point.
Warner Bros. Pictures
In the film, VR is beyond that point. It is presented as a mainstream, pedestrian technology. The hero of the story (along with many others) use the technology to effortlessly enter a world of possibilities and adventure — qualities lacking in their deteriorating real world.
The movie is a quest of discovery; an adventure with its own pitfalls and rewards. It is a parable for life itself and the film delivers a message — one of friendship, coming of age and discovering who you are as well as the conflict and hardships you may have to face. Its message is not centered around VR as a technology. The movie simply accepts that VR will become part of the fabric of our lives in the not-too-distant future. Instead, more importantly, its message focuses on the people at the center of its great adventure.
Gaming universes have become relatable to people — especially when there are underlying and compelling stories behind the mechanics. They represent a type of life experience, replete with memories. Social games are more than they appear and within VR they become a parallel chance to realize a version of one’s personal narrative — an opportunity to become our better (or different) selves.
In that way, ‘Ready Player One’ represents the bigger purpose behind VR — it is a living, breathing universe where human beings interact, create and build. It has true meaning and value to those who engage in it — far beyond a 20-minute run on a regular video game translation.
VR faces many technical and design challenges. But solving each of these challenges moves us closer to a day when virtual technologies become a positive force in all our lives.
Virtual reality tech will get there, as other pioneering technologies have done time and time again in the past — but it will only be successful if developers focus on the human at the center of the experience. Spielberg’s movie captures the essence of this perfectly. This is the message that needs to resonate in this new and young industry.
A leap of faith and understanding of what’s possible is required for the magic of VR to come to life and be enjoyed by everyone.
More and more people are discovering the VR experience for themselves — those that would have otherwise rejected the idea are surprised and delighted once they actually get inside a headset and see it with their own eyes. The FT article also mentions the importance of location-based VR experiences — such as within arcade environments — which is a solid step towards exposing wider audiences to VR and helping people form opinions with a better informed mindset.
Just weeks ago, few would have predicted ‘Ready Player One’ would be one of Steven Spielberg’s biggest hit in a decade. It is also his sixth-largest money maker of all-time.
Spielberg captured the essence of VR and the audience has responded. The human being and their potential must be at the heart of the experience. When developers focus on this, VR will see its day in the mainstream.
Ciaran Foley is CEO of Ukledo and Immersive Entertainment, Inc. a Southern California virtual reality software company developing a new virtual engagement platform called Virtual Universe (VU).
__________________________________________________________________
Learn more about Virtual Universe and VU token by visiting our website and signing up for email updates, visiting our Github, following us on Twitter, Facebook, Linkedin, and Instagram, or being part the discussion on Telegram andDiscord.
| Spielberg’s ‘Ready Player One’ reminds us that the human at the center of Virtual Reality is key. | 167 | spielbergs-ready-player-one-reminds-us-that-the-human-at-the-center-of-virtual-reality-is-key-10ac2db4ebdd | 2018-06-09 | 2018-06-09 12:58:36 | https://medium.com/s/story/spielbergs-ready-player-one-reminds-us-that-the-human-at-the-center-of-virtual-reality-is-key-10ac2db4ebdd | false | 1,646 | Virtual Universe (VU) is an epic, story-driven open world game in LivingVR™ powered by AI, VR, and blockchain. The VU Token powers the economy as a currency. | null | VUtoken | null | VU Token | vutoken | VIRTUAL REALITY,ARTIFICIAL,BLOCKCHAIN,AUGMENTED REALITY,GAME DEVELOPMENT | VUtoken | Virtual Reality | virtual-reality | Virtual Reality | 30,193 | VU Token | Virtual Universe (VU) is an epic, story-driven open world game in LivingVR™ powered by AI, VR, and blockchain. The VU Token powers the economy as a currency. | e6a3b524b90a | VUtoken | 54 | 1 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | a8e7beaf5510 | 2018-07-07 | 2018-07-07 22:28:48 | 2018-07-07 | 2018-07-07 22:30:09 | 3 | false | en | 2018-07-26 | 2018-07-26 17:05:46 | 2 | 10acd8610dd | 2.580189 | 2 | 0 | 0 | Cancer is the cause of 1 in 8 deaths globally. In the United States, there are 1.6 million new cancer cases in 2016 with about 600,000… | 5 | The Future of Cancer Diagnostics vol. 1
Image from Google
Cancer is the cause of 1 in 8 deaths globally. In the United States, there are 1.6 million new cancer cases in 2016 with about 600,000 deaths every year. It is also well known that early detection is instrumental in effective treatment and management of the disease. For example, lung cancers diagnosed at stage one generally has a five-year survival rate of over 50% — if the same cancer is diagnosed just two years later, the survival rate drops to under 10%. In additional to improving clinical techniques, technologies targeting presymptomatic diagnosis — where the cancer is found even before any symptoms appear — is currently the hot topic for pharmaceutical and healthcare companies alike. However, since symptoms are what drive people to the hospital, people that believe they are healthy tend to have a much lower tolerance for medical invasiveness. Thus, one of the main challenges of these technologies is that they would have to be somehow be incorporated into the everyday lives of normal, healthy people. For today’s article, I will be looking at one startup tackling this problem in cancer diagnostics, a topic that is very close to to my own PhD research area.
Division of Cancer Control and Population Sciences (2016)
Freenome (2016)
Image from Google.com
Freenome is a California based startup that utilizes AI to recognize certain disease markers in blood samples. By detecting certain patterns of DNA fragments in a blood sample, Freenome’s algorithms aim to predict very early stage cancer and other disease developments. The startup recently closed a $65 Million Series A and is currently backed by Google Ventures, Polarias Partners and other investors. It has ongoing partnerships with UCSF, Moores Cancer Center, and the Massachusetts General Hospital.
My Perspective:
On an invasiveness scale, blood tests are actually about as non-invasive as you can get while still acquiring large amounts of useful information. Getting your blood taken won’t be as convenient as a cancer diagnosing skin patch (which would be incredible) but plenty of people do give regular blood samples for other reasons. If the test is inexpensive and accurate enough, I could see it being implemented as an optional portion of an annual physical checkup. However, since extracting free DNA from blood samples in a daunting task in itself, I am concerned on how much blood each test would require- which will affect the invasiveness of this test. Perhaps, Freenome could seek a partnership with the next startup that I will review in this series (hint hint). Also, previously, I had said “accurate enough” because the test does not have to be 100% conclusive; I’m sure the founders didn’t aim for Freenome to be the final confirmation of whether a person has cancer or not. It just has to be good enough to warrant further diagnostics with potentially more expensive and invasive technologies, like MRI/CT. One side note is that, due to the Theranos scandal, any lifesciences startup involving blood test just has that extra burden of proof in demonstrating their technologies to investors. However ,with strong partners, investors, and a solid technology base, Freenome is definitely one of the life science startups to watch out for in 2018.
Edit: Part II and part III of this series are now out!
| The Future of Cancer Diagnostics vol. 1 | 2 | the-future-of-cancer-diagnostics-vol-1-10acd8610dd | 2018-07-26 | 2018-07-26 17:05:46 | https://medium.com/s/story/the-future-of-cancer-diagnostics-vol-1-10acd8610dd | false | 538 | Startup Review | null | null | null | StartupReview | null | startupreview | STARTUP,ENTREPRENEURSHIP,ENTREPRENEUR,BUSINESS,REVIEW | null | Cancer | cancer | Cancer | 17,070 | Leon Wang | Leon is a PhD candidate at Princeton University researching cancer diagnostics and therapy | 2d798a42856b | mingming_45223 | 7 | 2 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-02-04 | 2018-02-04 21:46:53 | 2018-02-05 | 2018-02-05 22:47:07 | 3 | false | en | 2018-02-06 | 2018-02-06 00:30:12 | 0 | 10ace5f0c7aa | 2.633019 | 1 | 0 | 0 | A personal trainer in your pocket sounds like a strange idea, but as AI and cognitive applications evolve, this concept is an increasingly… | 3 | Cognitive App Design
A personal trainer in your pocket sounds like a strange idea, but as AI and cognitive applications evolve, this concept is an increasingly attainable reality. From training plans for running a marathon to gym workouts, there are numerous apps out there with the purpose of helping users reach their fitness goals. There are some apps that market themselves as the “ultimate personal training app,” but all of them seem to fall short in providing a hyper-personalized training/workout experience.
A personal training app with AI could change the game for anyone who cannot afford to spend the time or money on a personal trainer, and who wants to get in shape or reach a personal fitness goal. The app could modify your workouts, diet, hydration, and music playlists as you use it. It would use the power of google assistant as it tailors the experience for each user. It could also connect to your spending accounts and track your purchases in order to see what food, workout apparel, etc. you are buying and make meal plan and purchasing suggestions off of that history. The app could connect to Spotify, Apple Music, or other music platforms to create workout playlists based on the user’s music preferences. It could schedule workouts around the user’s calendar that the app has access to, and notify the user of the daily workout and food plan. Using location services, it could provide tailored recommendations for nearby meals/snacks that would satisfy the daily food plan if the user had a busy schedule and couldn’t eat at home or pack their lunch.
The app would track the the user’s workout performance through either their fitbit/fitness tracker or the GPS in the app which would allow it to learn if the user started walking during their run, at what point they started walking, and for how long or if they failed to complete a part of the scheduled workout. The app could ask why they started walking, skipped a set of squats, etc. The user could provide input that the app could work off of to optimize the next day’s plan. For example, “I saw that you didn’t workout the full 40 minutes yesterday, and you skipped your second shoulder circuit. Could you tell me why?” The user could say, “My right shoulder started hurting me.” The app could take that information into account and alter the workout plan for tomorrow to avoid a shoulder workout, take a rest day, or provide recovery tactics like stretches and icing for the user to do.
It could also learn the user’s eating/hydration habits and tailor its recommendations around that. For example, it could say “Hey, I notice that at around 3pm every day you stop at Starbucks for a latte and a muffin. Tomorrow why don’t you bring a snack like one of the Larabars you purchased at the grocery store and swap your latte for one with almond milk in it instead?” This would optimize their spending habits to help them stick to their meal plan and purchase healthier options.
Apps like this one could optimize the user’s workout experience, because it would make the plans hyper-personalized by taking into account their goals, music preference, eating habits, daily schedules, injuries, and more like a good personal trainer would.
| Cognitive App Design | 1 | cognitive-app-design-10ace5f0c7aa | 2018-02-06 | 2018-02-06 02:21:54 | https://medium.com/s/story/cognitive-app-design-10ace5f0c7aa | false | 552 | null | null | null | null | null | null | null | null | null | Fitness | fitness | Fitness | 52,592 | Kate Hodges | null | d5a6228c299c | katehodges_72981 | 17 | 18 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-01-19 | 2018-01-19 07:35:49 | 2018-01-19 | 2018-01-19 10:27:31 | 4 | false | en | 2018-01-20 | 2018-01-20 08:44:10 | 1 | 10ad2c46b1ac | 1.662264 | 4 | 0 | 0 | History | 5 | Part 1: Machine Learning — Introduction
History
Arthur Lee Samuel, an American pioneer in the field of computer gaming and artificial intelligence coined the term “machine learning” in 1959. He built a Samuel Checkers-playing Program that appears to be the world’s first self-learning program, and as such a very early demonstration of the fundamental concept of artificial intelligence (AI).
What is Machine Learning ?
Arthur Lee Samuel’s definition of Machine Learning (1959)
Field of study that gives computers the ability to learn without being explicitly programmed.
Tom Mitchell ‘s definition of Machine Learning (1998)
Well-posed Learning Problem: A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E.
Combining both the definitions we get
Machine learning provide systems the ability to automatically learn and improve from experience without being explicitly programmed.
Applications in Real World
There are infinite number of real world applications but here are some latest ones.
Gmail Smart Reply
Apple Face ID in iPhone
Amazon Product Recommendations
Types of Machine Learning Algorithms
Supervised learning: learning model built using labeled training data that allows us to make predictions about unseen or future data.
Unsupervised learning: using unlabeled training data find patterns in the data.
Reinforcement learning: In this learning, the machine is exposed to an environment where it trains itself continually using trial and error. This machine learns from past experience and tries to capture the best possible knowledge to make accurate business decisions.
Originally published at blog.manishbisht.me.
| Part 1: Machine Learning — Introduction | 55 | part-1-machine-learning-introduction-10ad2c46b1ac | 2018-02-19 | 2018-02-19 11:10:58 | https://medium.com/s/story/part-1-machine-learning-introduction-10ad2c46b1ac | false | 255 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Manish Bisht | Student Developer GSoC ’17 at phpMyAdmin | Intern at BlueCube Network(2016) | My Corner of Internet: https://manishbisht.me | ed17b81d6783 | manishbisht | 105 | 57 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-06-08 | 2018-06-08 18:03:06 | 2018-06-08 | 2018-06-08 18:18:54 | 0 | false | en | 2018-06-08 | 2018-06-08 18:27:02 | 3 | 10ae81553578 | 2.875472 | 2 | 1 | 0 | 2018–2020 are going to be years that will be seen in history as the dawn of an era when the only way you can trust what you are hearing /… | 4 | The dawn of digital media forgery, and the sunset of trust
2018–2020 are going to be years that will be seen in history as the dawn of an era when the only way you can trust what you are hearing / seeing someone say is to see it in-person. It will be the sunest of the era when you can trust that what you are hearing from a person speaking who is captured and rebroadcast digitally is actually what they said.
This is because the application of machine learning and other artificial intelligence techniques is advancing into areas we (society) have historically taken for granted. Three things combined in my mind over the last several days to give me pause.
1 Manipulated video. Researchers collaborating across institutions have created software that can process a video image of one person speaking, and render that person’s facial expressions accurately onto another person’s face. The dry, scholarly demonstration video is jaw-dropping to watch. This technology can be used to create a video that accurately makes it appear that any person said something that they never actually said.
Imagine, for instance, that a political action committee (PAC) for any political office used this to fabricate their opponent saying something damning that opponent did not in fact say. Even if the PAC owned up and said “This is just a simulation” in text below the image, voters could easily be left with the perception the candidate actually said the damning thing.
The same technique could also be used in legal cases, or to ruin the reputation of innocent, otherwise-upstanding people. For someone with enough motivation, there will soon be a technique to fabricate reality.
2 Manipulated audio. Last week I heard the investor “pitch” from a startup company seeking capital from an angel group to which I belong. This company can do for the spoken word what the group above does for video: The software can convert the spoken word of one person into the spoken word of another quite accurately. The active speaker can speak using the words, rhythym & pace of their own, and the software creates a resulting audio file that renders that speech in the voice of, say, Jeff Bezos. Or, Barach Obama. Or any other speaker from whom the company can obtain a small, clean sample set of spoken speech on which its machine learning algorithms can train.
Imagine a rogue state (nation) creating an audio of another nation’s leader saying something damning, and combing that with the manipulated video image above. Now, it is not simply the image accurately reflecting the fabricated dialog; the voice you hear on the “recorded” video sounds accurately like the leader the rogue state is trying to damage. Post this on YouTube, and the damage is done.
And the same application to legal or reputation situations above applies here, too.
3 Fake news allegations. The use of the term “fake news” to generate mistrust in media reporting has already eroded consumer trust across the world. The ability to simply claim bias on the part of a reporter has raised the bar for the accuracy and diligence a reporter now must use prior to telling a story. And at a time when that deeper diligence is required, the ability for that reporter to trust what she sees in digital form will be rapidly disappearing.
To their credit, the startup company that gave me the presentation of manipulated audio is adding (invisible) audio-watermarking to their technology, and providing tools to detect whether a given audio file has their watermark. However, it must be stated that if these founders can build this technology, others can, too — and those others are not obligated to add the audio-watermarking. The rogue state actor exemplified above would not likely be interested in adding such fraud detection.
This can certainly leave us concerned. Just at the moment when many of us feel unmoored, unable to trust institutions, we often turn to seeing “what somebody actually said” — vs. what they’re rumored / reported to have said — as the way to ground our beliefs. I fear that ability to get properly grounded is about to disappear.
This leaves me with two final things to consider:
I don’t know how we re-create digital media trust. (And by “media”, I mean the files — not the institutions.) And no, I’m not at all thinking blockchain solves the problem.
The next IT security software companies to get really big will be those companies providing software to detect digital media forgeries.
| The dawn of digital media forgery, and the sunset of trust | 3 | the-dawn-of-digital-media-forgery-and-the-sunset-of-trust-10ae81553578 | 2018-06-11 | 2018-06-11 05:08:13 | https://medium.com/s/story/the-dawn-of-digital-media-forgery-and-the-sunset-of-trust-10ae81553578 | false | 762 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Jay Batson | Startup guy, cyclist, musician, DJ | 29a20485472e | startupdj | 637 | 270 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | d04de1933102 | 2017-10-01 | 2017-10-01 08:52:09 | 2017-09-27 | 2017-09-27 23:59:00 | 3 | false | en | 2018-06-08 | 2018-06-08 22:16:16 | 13 | 10aeff1e89e0 | 7.757547 | 40 | 2 | 0 | null | 5 | Dark Market Regression: Calculating the Price Distribution of Cocaine from Market Listings
tl;dr There’s a hidden Amazon.com of illegal drugs: I scraped the entire “Cocaine” category, then made a bot that can intelligently price a kilo. Don’t do drugs.
DARK WEB DRUG MARKET ANALYSIS
Project Objective: Use machine learning regression models to predict a continuous numeric variable from any web-scraped data set.
Selected Subject: Price distributions on the hidden markets of the mysterious dark web! Money, mystery, and machine learning.
Description: Turns out it is remarkably easy for anyone with internet access to visit dark web marketplaces and browse product listings. In this project I use Python to simulate the behavior of a human browsing these markets, selectively collect and save information from each market page this browsing agent views, and finally use the collected data in aggregate to construct a predictive pricing model.
(Optional Action Adventure Story Framing)
After bragging a little too loudly in a seedy Mexican cantina about your magic data science powers of prediction, you have been kidnapped by a forward-thinking drug cartel. They have developed a plan to sell their stock of cocaine on the internet. They demand that you help them develop a pricing model that will give them the most profit. If you do not, your life will be forfeit!
You, knowing nothing about cocaine or drug markets, immediately panic. Your life flashes before your eyes as the reality of your tragic end sets in. Eventually, the panic subsides and you remember that if you can just browse the market, you might be able to pick up on some patterns and save your life…
THE (HIDDEN) DOMAIN
Dark web marketplaces (cryptomarkets) are internet markets that facilitate anonymous buying and selling. Anonymity means that many of these markets trade illegal goods, as it is inherently difficult for law enforcement to intercept information or identify users.
While black markets have existed as long as regulated commerce itself, dark web markets were born somewhat recently when 4 technologies combined:
Anonymous internet browsing (e.g. Tor and the Onion network)
Virtual currencies (e.g. Bitcoin)
Escrow (conditional money transfer)
Vendor feedback systems (Amazon.com-like ratings of sellers)
Total cash flow through dark web markets is hard to estimate, but indicators show it as substantial and rising. The biggest vendors can earn millions of dollars per year.
The market studied for this project is called Dream Market.
In order to find a target variable suitable for linear regression, we’ll isolate our study to a single product type and try to learn its pricing scheme. For this analysis I choose to focus specifically on the cocaine sub-market. Cocaine listings consistently:
report quantity in terms of the same metric scale (grams), and
report quality in terms of numerical percentages (e.g. 90% pure).
These features give us anchors to evaluate each listing relative to others of its type, and make comparisons relative to a standard unit 1 gram 100% pure.
THE MARKET
Browsing Dream Market reveals a few things:
There are about 5,000 product listings in the Cocaine category.
Prices trend linearly with quantity, but some vendors sell their cocaine for less than others.
Vendors ship from around the world, but most listings are from Europe, North America, and other English speaking regions.
Vendors are selective about which countries they are willing to ship to.
Many vendors will ship to any address worldwide
Some vendors explicitly refuse to deliver to the US, Australia, and other countries that have strict drug laws or border control.
Shipping costs are explicitly specified in the listing.
Shipping costs seem to correlate according to typical international shipping rates for small packages and letters.
Many vendors offer more expensive shipping options that offer more “stealth”, meaning more care is taken to disguise the package from detection, and it is sent via a tracked carrier to ensure it arrives at the intended destination.
The main factor that determines price seems to be quantity, but there are some other less obvious factors too.
While the only raw numerical quantities attached to each listing are BTC Prices and Ratings, there are some important quantities represented as text in the product listing title:
how many “grams” the offer is for
what “percentage purity” the cocaine is
These seem like they will be the most important features for estimating how to price a standard unit of cocaine.
I decide to deploy some tools to capture all the data relating to these patterns we’ve noticed.
THE TOOLS
BeautifulSoup automates the process of capturing information from HTML tags based on patterns I specify. For example, to collect the title strings of each cocaine listing, I use BeautifulSoup to search all the HTML of each search results page for tags that have class=productTitle, and save the text contents of any such tag found.
Selenium WebDriver automates browsing behavior. In this case, its primary function is simply to go to the market listings and periodically click to the next page of search results, so that BeautifulSoup can then scrape the data. I set a sleep timeout in the code so that the function would make http requests at a reasonably slow rate.
Pandas to tabulate the data with Python, manipulate it, and stage it for analysis.
Matplotlib and Seaborn, handy Python libraries for charting and visualizing data
Scikit Learn for regression models and other machine learning methods.
[Image: Automated Browsing Behavior with Selenium WebDriver]
THE DATA
I build a dictionary of page objects, which includes:
product listing
listing title
listing price
vendor name
vendor rating
number of ratings
ships to / from
etc.
The two most important numeric predictors, product quantity and quality (# of grams, % purity), are embedded in the title string. I use regular expressions to parse these string values from each title string (where present), and transform these values to numerical quantities. For example “24 Grams 92% Pure Cocaine” yields the values grams = 24and quality = 92 in the dataset.
Vendors use country code strings to specify where they ship from, and where they are willing to ship orders to.
For example, a vendor in Great Britain may list shipping as “GB — EU, US”, indicating they ship to destinations in the Europe or the United States.
In order to use this information as part of my feature set, I transform these strings into corresponding “dummy” boolean values. That is, for each data point I create new columns for each possible origin and destination country, containing values of either True or False to indicate whether the vendor has listed the country in the product listing. For example: Ships to US: False
After each page dictionary is built (i.e. one pass of the code over the website), the data collection function saves the data as a JSON file (e.g. page12.json). This is done so that information is not lost if the connection is interrupted during the collection process, which can take several minutes to hours. Whenever we want to work with collected data, we merge the JSON files together to form a Pandas data frame.
THE COCAINE
The cleaned dataset yielded approximately 1,500 product listings for cocaine.
Here they are if you care to browse yourself!
dream_market_cocaine_listings.xls
Sheet1 product_title, grams, quality, btc_price, escrow, product_link, rating, ships_from_to, successful_transactions…docs.google.com
Aside on Interesting Findings
There are a lot of interesting patterns in this data, but I’ll just point out a few relevant to our scenario:
Of all countries represented, the highest proportion of listings have their shipping origin in the Netherlands (NL). This doesn’t imply they are also the highest in volume of sales, but they are likely correlated. Based on this data, I would guess that the Netherlands has a thriving cocaine industry. NL vendors also seem to price competitively.
As of July 15th, 2017, cocaine costs around $90 USD per gram. (median price per gram):
Prices go up substantially for anything shipped to or from Australia:
This chart shows the relative average cost (BTC) per gram, by country. (Note: the countries to the right without error-bars have only very small samples, so they are less reliable indicators.)
* charts generated from data using Seaborn
THE MACHINE LEARNING
In order to synthesize all of the numeric information we are now privy to, I next turn to scikit-learn and its libraries for machine learning models. In particular, I want to evaluate how well models in the linear regression family and decision tree family of models fit my data.
Model Types Evaluated
Linear Models
Linear Regression w/o regularization
LASSO Regression (L1 regularization)
Ridge Regression (L2 regularization)
Decision Tree Models
Random Forests
Gradient Boosted Trees
To prepare the data, I separate the target variable (Y = Price) from the predictor features (X = everything else). I drop any variables in X that leak information about price (such as cost per unit). I’m left with the following set of predictor variables:
X (Predictors)
Number of Grams
Percentage Quality
Rating out of 5.00
Count of successful transactions for vendor on Dream Market
Escrow offered? [0/1]
Shipping Origin ([0/1] for each possible country in the dataset)
Shipping Destination ([0/1] for each possible country in the dataset)
Y (Target)
Listed Price
I split the data into random training and test sets (pandas dataframes) so I can evaluate performance using scikit-learn. Since I can’t fully account for stratification within the groups that I’m not accounting for, I take an average of scores over multiple evaluations.
Of the linear models, simple linear regression performed the best, with an average cross-validation R² “score” of around 0.89, meaning it accounts for about 89% of the actual variance.
Of the decision tree models, the Gradient Boosted trees approach resulted in the best prediction performance, yielding scores around 0.95. The best learning rate I observed to be 0.05, and the other options were kept at the default setting for the sci-kit learn library.
The model that resulted from the Gradient Boosted tree method picked up on a feature that revealed that 1-star ratings within the past 1 month were charateristic with vendors selling at lower prices.
Prediction: Pricing a Kilogram
(Note: I employed forex_python to convert bitcoin prices to other currencies.)
I evaluate the prediction according to each of the two models described above, as well as naive baseline:
Naive approach: Take median price of 1 gram and multiply by 1000.
Resulting price estimate: ~$90,000
Review: Too expensive, no actual listings are anywhere near this high.
Linear Regression Model: Fit a line to all samples and find the value at grams = 1000.
Resulting price estimate: ~$40,000
Review: Seems reasonable. But a model that account for more variance may give us a better price…
Gradient Boosted Tree Model: Fit a tree and adjust the tree to address errors.
Resulting price estimate: ~$50,000 (Best estimate)
Review: Closest to actual prices listed for a kilogram. Model accounts for most of the observed variance.
THE BIG IDEAS
Darknet markets: large-scale, anonymous trade of goods, especially drugs. Accessible to anyone on the internet.
You can scrape information from dark net websites to get data about products.
Aggregating market listings can tell us about the relative value of goods offered, and how that value varies.
We can use machine learning to model the pricing intuitions of drug sellers.
(Optional Action Adventure Story Conclusion)
The drug cartel is impressed with your hacking skills, and they agree to adjust the pricing of their international trade according to your model. Not only do they let you live, but to your dismay, they promote you to lieutenant and place you in charge of accounting! You immediately begin formulating an escape in secret. Surely random forest models can help…
-Skip Everling
| Dark Market Regression: Calculating the Price Distribution of Cocaine from Market Listings | 238 | dark-market-regression-calculating-the-price-distribution-of-cocaine-from-market-listings-10aeff1e89e0 | 2018-06-08 | 2018-06-08 22:16:18 | https://medium.com/s/story/dark-market-regression-calculating-the-price-distribution-of-cocaine-from-market-listings-10aeff1e89e0 | false | 1,910 | content by David “Skipper” Everling | null | null | null | thought-skipper | thought-skipper | DATA SCIENCE,PHILOSOPHY,PSYCHOLOGY,MACHINE LEARNING,CULTURE | null | Data Science | data-science | Data Science | 33,617 | Skipper | A thoughtful writer. AI Developer Evangelist at Clarifai. | 35b0f0852e31 | everling | 119 | 95 | 20,181,104 | null | null | null | null | null | null |
|
0 | val evaluator = new MulticlassClassificationEvaluator()
.setLabelCol("CategoryNameIndex")
.setPredictionCol("prediction")
.setMetricName("accuracy")
val accuracy = evaluator.evaluate(predictions)
println("Accuracy= " + accuracy)
spark.stop()
Accuracy = 0.935483870967742
| 4 | cd7a7b35ed21 | 2017-12-27 | 2017-12-27 21:34:10 | 2017-12-27 | 2017-12-27 21:35:09 | 1 | false | en | 2017-12-29 | 2017-12-29 15:46:29 | 3 | 10af5931c5a3 | 0.939623 | 1 | 0 | 0 | In the previous lesson, we presented the training and testing of our model. This was on the basis of creating our ML pipeline. The… | 5 | Model Evaluation(Part 5)
In the previous lesson, we presented the training and testing of our model. This was on the basis of creating our ML pipeline. The following figure illustrates the workflow.
In this final lesson, we set out to evaluate our model. It can be performed using:
Next, is to determine the accuracy of the model as follows:
We print the accuracy to the console and neatly end the spark job with spark.stop().
And here is the output:
About the Author
Taiwo O. Adetiloye is a datascience nerd, very interested in large scale data processing and analytics using AI and ML frameworks like Spark, Keras, Tensorflow and MxNet. Favorite programming languages are Scala, Python and Go-lang.
He is constantly developing himself as an information technologist working and acquiring new skill sets to help him in his career growth. He is a highly motivated individual, good communicator and great team player with a passion for creativity and a drive for excellence.
You can reach me by email at [email protected] or connect with me on LinkedIn.
| Model Evaluation(Part 5) | 5 | model-evaluation-part-5-10af5931c5a3 | 2018-05-10 | 2018-05-10 16:14:57 | https://medium.com/s/story/model-evaluation-part-5-10af5931c5a3 | false | 196 | In this tutorial, we set out to analyze the Amazon product dataset using SparkMLlib. The training data set includes ASIN, Brand Name, Category Name, Product Title, Image URL. Our objective is to use Scala programming language to write a classifier utilizing key product features. | null | null | null | Analyzing the Amazon Product Data Set using SparkMLlib LogisticRegression Classification Model | analyzing-the-amazon-product-data-set-using | DATA SCIENCE,SCALA,ARTIFICIAL INTELLIGENCE,SPARK,TENSORFLOW | mavencodeapps | Machine Learning | machine-learning | Machine Learning | 51,320 | Taiwo Adetiloye | Taiwo O. Adetiloye is a datascience nerd, very interested in large scale data processing and analytics using AI and ML frameworks like Spark, Keras, Tensorflow. | 39afb6815752 | taiwo_99678 | 5 | 2 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | b913255c3130 | 2017-11-11 | 2017-11-11 15:35:48 | 2017-11-11 | 2017-11-11 15:37:08 | 1 | false | en | 2017-11-11 | 2017-11-11 15:37:08 | 14 | 10b06c2b19a5 | 4.928302 | 0 | 0 | 0 | The robot, which is named as Sophia, got the citizenship of Saudi Arabia last month, but many of the people are not showing their happiness… | 5 | SOPHIA THE ROBOT’S CO-CREATOR SAYS THE BOT MAY NOT BE BONAFIDE AI, BUT INSTEAD IT IS A PEARL
The robot, which is named as Sophia, got the citizenship of Saudi Arabia last month, but many of the people are not showing their happiness about it.
RELATED POST: NASA’S HUMANOID ROBOT TO TEST FOR ULTIMATE MARS CHALLENGE
Some were bothered about the perspective of Sophia itself — a robot that is in like manner a media star, with magazine cover-shoots, network show appearances, and even a talk to the UN. Masters in the field all over ruin Sophia as critical of AI development and say that notwithstanding the way that the bot is shown quite recently like two or three programming updates a long way from human-level awareness, it’s more about illusion than intelligence.
For Ben Goertzel, chief scientist at Hanson Robotics, the company that made Sophia, the condition is conflicting, indeed. Geoertzel said in his interview with the Verge, it was “not impeccable” that some thought of Sophia as having reproduced general understanding or AGI (the industry term for human-indistinguishable learning) in any case, he perceived that the confused judgment had its upsides.
ALSO READ: NASA’S ROCKET TO DEEP SPACE MAY NOT BE PREPARE UNTIL 2020
During his career as a researcher, most of the people believed that I will never achieve human-level AI, but, now the public thinks we are already there. According to his opinion, overestimate is better than the underestimate, our odds of making machines cleverer than humans. He said, ““I’m a huge AGI optimist, and I believe we will get there in five to ten years from now.”
He accepts that Sophia’s introduction disturbs specialists, however, shields the bot by saying it passes on something unique to the people. “In the event that I advise individuals I’m utilizing probabilistic rationale to do thinking on how best to prune the backward chaining inference trees that emerge in our logic engine, they have no clue what I’m discussing about. But if I show them a beautiful smiling robot face, then they get the feeling that AGI may indeed be nearby and viable.” He says there’s a more obvious preferred standpoint also: in this present reality where AI capacity and interest is sucked towards big tech company in Silicon Valley, Sophia can function as a stabilizer; something that gets the attraction, and with that, funding. “What does a startup get out of having massive international publicity?” he says. “This is obvious.”
ALSO READ: ALPHABET’S WAYMO IS ABOUT TO LAUNCH DRIVERLESS CAR
Most people would agree that Sophia isn’t unintelligent, either. As Goertzel, who is as of now constructing a “decentralized market for AI,” calls attention to, it makes employment of a wide number of AI strategies. There’s face tracking, emotion recognition, and robotic movements produced by deep neural networks. What’s more, albeit the vast majority of Sophia’s dialogues originate from a straightforward choice tree (a similar tech utilized by chatbots; when you say X, it answers Y), what it says is coordinated with these other inputs to a one of a kind mold. It’s not pivotal in the way that work coming out of Companies like DeepMind or university labs, yet it’s not a toy.
“None of this is the thing that I would call AGI, however, nor is it simple to get working,” says Goertzel. “Additionally, it is totally cutting edge with respect to the dynamic joining of perception, action, and dialogue.”
The last item is certainly enrapturing, and despite Sophia’s frequently stilted and ungainly exchange, watchers seem, by all accounts, to be left with a sentiment something more. A lot of this impact can be credited to make by Hanson Robotics coordinator David Hanson, who, for quite a while, was a Walt Disney Imagineer, building figures for the company’s theme parks. It’s Hanson who every now and again exaggerates Sophia’s capacity for consciousness, uncovering to Jimmy Kimmel earlier this year that the robot was “generally alive,” for example.
ALSO READ: CHINA HAS A REAL TECH EDGE OVER THE US, AND IT’S CULTURAL
While we can without a lot of an extent get lost fighting the thinking and semantics of judging what is and isn’t “alive,” it’s all the more clear to express that this declaration is frightfully beguiling.
ALSO READ: TOP CHALLENGES FACED BY THE WOMEN ENTREPRENEURS
Furthermore, when asked about comments from scholastics like Bryson, who suggest that giving robots rights corrupts human ones, Goertzel vehemently contradicts this thought. He says Saudi Arabia’s decision to give Sophia citizenship exhibits the country’s need to be more powerful. “Observationally, in Saudi Arabia, the permitting of robot rights is all in all related with increases rather than reduces all-around human rights,” he says, demonstrating late changes like allowing Jewish people to work in the country, and the decision to give women the benefit to drive.
Critics may deliver reply that giving one robot “rights” in an administration expected to pull in thought with respect to a lavishly funded tech conference doesn’t show any particular ethical probity. Furthermore, really, in a country where homosexuality is punishable by death and transient workers are kept in slave-like conditions, some may explain it as the correct reverse: as e an tra affirmation of “rights” being managed without due regard. As Bryson put it: “How might it impact people in case they figure you can have a citizen that you can buy?”
ALSO READ: 5 TIPS THAT WILL HELP YOU LAUNCH YOUR STARTUP
Goertzel says these are significant questions, and he assumes that Hanson Robotics and Sophia are enacting reasonable exchange. It’s without a doubt something he himself enjoys. When I ask with respect to whether he assumes Siri is justifying citizenship, he says it would be “pretty funny,” however raises qualities that Sophia has that Siri doesn’t, like uniqueness, and a physical presence. “What precisely degree are these properties crucial for being seen as deserving of rights?’” he asks. “[It’s] an interesting direction for thinking.”
Goertzel says that at the present rate of progress, society will most likely be constrained to reconsider thoughts like rights, and possibly larger part administers framework. He offers the (to an incredible degree unrealistic and viably avoidable) instance of some individual 3D-printing a gathering of robots on account of voting rights keeping the ultimate objective to swing a race. “If you adjusted them to vote undeniably, and they all autonomously work the same way, by then out of the blue you have a dictatorship by robots,” he says.
ALSO READ: TIPS FOR ORGANIZING YOUR SMALL BUSINESS
In case you believe the super-insightful AIs are for all intents and purposes around the twist, by then these future crushing inquiries. But enough real and current threats in the field of artificial intelligence can be found out — from bias in the algorithms handing out prison sentences to AI-powered surveillance states — to impact these hypothetical issues to seem like indulgent sideshows. Various researchers and experts say what’s by and by required is a prevalent open perception of AI; of its present capacities and obstructions. With respect to taking note of that call, Sophia is from every angle achieving more harm than awesome.
For Goertzel and Hanson Robotics, there are numerous factors at play. When I ask with respect to why Sophia keeps getting media appearances and emerging as genuinely newsworthy, Goertzel’s answer is fundamental: “People love [them], they both disturb and enchant people. Whatever else they are, they are fantastic works of art.”
| SOPHIA THE ROBOT’S CO-CREATOR SAYS THE BOT MAY NOT BE BONAFIDE AI, BUT INSTEAD IT IS A PEARL | 0 | sophia-the-robots-co-creator-says-the-bot-may-not-be-bonafide-ai-but-instead-it-is-a-pearl-10b06c2b19a5 | 2017-11-11 | 2017-11-11 15:37:09 | https://medium.com/s/story/sophia-the-robots-co-creator-says-the-bot-may-not-be-bonafide-ai-but-instead-it-is-a-pearl-10b06c2b19a5 | false | 1,253 | Tycoonstory is the largest Online Network for Entrepreneurs & Startups. | null | Tycoonstorymedia | null | Tycoonstory | tycoonstory | ENTREPRENEURSHIP,ENTREPRENEUR,ENTREPRENEURS,STARTUPS,STARTUP | tycoonstoryco | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Tycoon Story | null | 2a59e19bd435 | tycoonstory | 106 | 151 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | 284538178f0a | 2018-02-05 | 2018-02-05 23:18:45 | 2018-02-05 | 2018-02-05 23:22:34 | 4 | false | en | 2018-02-05 | 2018-02-05 23:58:55 | 0 | 10b0ab56b9a6 | 2.662264 | 1 | 0 | 0 | Skiing is one of the oldest and simplest sports in the world. All you need are two pieces of wood, a bunch of snow, and a pretty big hill… | 2 | Cognitive Design Application Idea: Ski-Teacher
Skiing is one of the oldest and simplest sports in the world. All you need are two pieces of wood, a bunch of snow, and a pretty big hill. In my personal experience skiing (a single time in Colorado), I thought it was one of the most difficult things I’ve tried. There are new people that try to learn how to ski every day, and many need to use instructors in a group or private setting to really learn the basics. Then all the fun of the sport can be enjoyed safely and people can start to get better. Most of these instructors are great, but they could bolster or even replaced with the Ski-Teacher. I would personally enjoy an application like this because of my really poor skiing skills, but also since I want to learn and get better at the sport. Ski-Teacher would use a mobile application, a variety of motion sensors, and a really cool visual feedback system via the ski goggles.
(via Google Images)
The mobile app would be simple and provide information, especially on weather and current ski conditions. But the most important buttons would be learn and receive feedback. This would allow the user to pick up their current training session or start learning how to ski from the beginning. The learning would all be cognitively based and initiate with the most basic activities while recommending the simplest areas to practice. Once the user becomes more skilled, their “level” will increase and offer up harder skills to learn. It will also focus on areas of weakness to make sure the user isn’t forgetting any skills. Finally, the receive feedback option wouldn’t try to help the user learn new skills, but it would allow them to do whatever they want on the ski slopes and provide continuous feedback on how they’re doing compared to proper skiing form.
Sample of image of main screen in Ski-Teacher mobile app. There are three main buttons along with information regarding the user’s level and the current city.
In order to get all this data to provide feedback and help the user, Ski-Teacher will come with sensors that can be attached to the skis and ski poles. These sensors will collect data on acceleration, movement, and orientation of the equipment which will be transformed into something the user can understand regarding their current speed. The data could also be compared to other skiers that have used the same slope and use this to provide the feedback.
Layout of where sensors would belong on skis and ski poles.
All this data exists with the Ski-Teacher, and it will be implemented visually through the ski goggles. The goggles will have a heads-up display that the user can see and also interact with using their eyes. The concept is very similar to Google Glass. The eye movements will also be tracked and connected to the data which will provide feedback since this is an area that could potentially be improved on the user’s end.
Example of heads up display within ski goggles while on a slope. There is plenty of information, but obstacles can still be clearly observed.
The goal of Ski-Teacher is to help new or pro skiers improve their skills since there is always improvement that can be made. The users would benefit because they could get receive lessons whenever and wherever with a tool that could totally adapt to their current environment.
| Cognitive Design Application Idea: Ski-Teacher | 1 | cognitive-design-application-idea-ski-teacher-10b0ab56b9a6 | 2018-02-06 | 2018-02-06 01:50:34 | https://medium.com/s/story/cognitive-design-application-idea-ski-teacher-10b0ab56b9a6 | false | 520 | Course by Jennifer Sukis, Design Principal for AI and Machine Learning at IBM. Written and edited by students of the Advanced Design for AI course at the University of Texas's Center for Integrated Design, Austin. Visit online at www.integrateddesign.utexas.edu | null | utsdct | null | Advanced Design for Artificial Intelligence | advanced-design-for-ai | ARTIFICIAL INTELLIGENCE,COGNITIVE,DESIGN,PRODUCT DESIGN,UNIVERSITY OF TEXAS | utsdct | Skiing | skiing | Skiing | 1,911 | Ahsan Hussain | null | 7522020efc26 | ahsanhussain | 4 | 4 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-05-28 | 2018-05-28 04:03:25 | 2018-05-28 | 2018-05-28 04:04:43 | 1 | false | en | 2018-06-08 | 2018-06-08 03:38:16 | 0 | 10b1ce6e5aad | 0.69434 | 3 | 0 | 0 | EXIT PEOPLE™ is a hybrid of social network with subscriber data base — an intellectual platform and community for everyone involved in the… | 5 | EXIT PLATFORM™ — observing the modules. Module 2 — EXIT PEOPLE™
EXIT PEOPLE™ is a hybrid of social network with subscriber data base — an intellectual platform and community for everyone involved in the projects implementation: founders, team members, investors, acquirers, experts, advisers, auditors, corporate lawyers, etc.
It stores CVs and other useful information as users profiles and by interaction with other modules is able to offer a valid and reliable system of ratings and references safeguarded by the blockchain technology.
No more need for hours of personal meetings and face-to-face evaluations: our platform will link you with the best and trustworthy partners. The module also manages interior communication and announcements via smart contracts all based on user-provided criteria. Platform’s community contributes directly to its development and performance.
| EXIT PLATFORM™ — observing the modules. Module 2 — EXIT PEOPLE™ | 145 | exit-platform-observing-the-modules-module-2-exit-people-10b1ce6e5aad | 2018-06-08 | 2018-06-08 03:38:16 | https://medium.com/s/story/exit-platform-observing-the-modules-module-2-exit-people-10b1ce6e5aad | false | 131 | null | null | null | null | null | null | null | null | null | Exitfactory | exitfactory | Exitfactory | 13 | Exit Factory | Mission: Creating new global economy of trendsetting high income investments | b3e42cf029d6 | exit_factory | 6 | 2 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 6fc55de34f53 | 2018-08-22 | 2018-08-22 21:13:45 | 2018-09-03 | 2018-09-03 15:32:33 | 4 | false | en | 2018-09-03 | 2018-09-03 15:32:33 | 15 | 10b2fa29eec1 | 4.688679 | 6 | 0 | 0 | My impressions of the British Computer Vision Summer School | 2 | Two research trends in Computer Vision (and one caveat)
My impressions of the British Computer Vision Summer School
Last month, the British Machine Vision Association organised the Computer Vision Summer School at the University of East Anglia, Norwich. This is an annual 4-day workshop where young computer vision practitioners can listen to UK leading academic experts on the various research aspects of the field. The talks ranged from introductory courses (such as colour, low-level vision) to the latest research trends, this year including active vision, probabilistic generative models and, as it always has to be, deep learning. Here are my top picks.
Trend #1: An attempt to know what we don’t know
There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns — the ones we don’t know we don’t know. — Controversial claim by former US Secretary D. Rumsfeld
Computer Vision models can nowadays be trained to categorise images in over 1000 different classes with an impressive error rate of less than 4%, which is deemed better than human performance on the task. However, what happens if we want to generate new images? How can we be sure that we are learning all the features relatively to each class? If we are given a dataset that won’t contain all possible scenarios in the world (think self-driverless cars), how can we propagate our prior knowledge through our model?
The answer to most of these questions entails introducing a principled probabilistic analysis, as described by Prof Neill Cambpell (University of Bath) during his talk at the BMVA summit (think learning probability distributions of data, given weights, rather than “hard” individual scores). By learning probability distributions of the possible scenarios, one can (i) impose prior knowledge on the system, specified by the user (ii) make the confidences outputted by the model more palatable and robust to data contingency (knowning what we don’t know, huh?) and (iii) generate new data. For instance, using suitable priors for the system, one can learn a manifold of likely images, that can transition smoothly and have a representatively low probability for unlikely ones (check out this interactive demo on how to generate new fonts by learning a manifold of “cool” fonts!).
Learning a manifold of cool fonts (http://cs.bath.ac.uk/~nc537/papers/siggraph14_learning_fonts.pdf). Image kindly provided by Prof Campbell.
Probabilistic generative models can also go deep (see what I did there). Prof Campbell mentioned the principles of deep stochastic belief networks that are able to propagate not only weights but also uncertainty across a neural network. Bayesian deep learning is indeed a very hot trend that combines the best of deep learning with salient features of generative processes such as smoothness prior, ability of training with very small datasets and, perhaps more importantly, interpretable associated confidences to better assess when we are way far from the training set. A whole workshop devoted to the topic will take place at NIPS 2018.
Trend #2: From 2D to 3D (and 4D, and counting…)
Models in two dimensions are relatively well studied and, as mentioned, for some of the tasks (such as image classification), we can achieve reasonable performance with off-the-shelf tools. However, there is much we need to do when it comes to understanding scenes in three dimensions, sequences of actions and, effectively, reacting to the environment. This was the subject of two of the lectures at the BMVA summit (Active Vision, by Dr Nicola Belloto, University of Lincoln and 3D Computer Vision by Will Smith, University of York).
Perhaps the most surprising task where Computer Vision still does not provide a satisfactory solution is grasping. This is a problem where Active Vision has to interact with object recognition, as well as depth estimation (and ultimately 3D shape), choice of stable points for a variety of shapes, textures, weights, and so on. Grasping remains a challenge and a very active area of research, especially when it comes to trying to build five-fingers versatile robots. As one might correctly guess, the prospective applications are numerous, such as automated agriculture and healthcare.
Check out this 2016 (!) video of a robot trying to pick various objects.
…and one caveat
In 2014, a group of researchers showed how to trick neural models by adding tiny perturbations, imperceptible to the human eyes, to the images. Shockingly, [Szegedy et al] exhibited a method to force state-of-the-art image neural networks to misclassify all images in the training data. This contradicted a folkloric claim that such models have a smooth boundary across classes and raised a big red flag towards our understanding of neural networks.
Two images whose difference has negligible norm might be misclassified.
Four years passed, and our understanding of the behaviour of such networks has only marginally improved, although we now know that it is possible to fool them universally, and we can generate such examples adversarially as a training strategy. However, incidents involving deep learning systems only point to an increasing need of introducing better priors for machine learning models. Perhaps a combination of those with probabilistic generative models (see Trend #1 above) will prove handy in applications when stronger guarantees are needed.
Bonus: During the very last talk of the summit, Prof Adrian Clark (University of Essex) discussed the need for rigorous assessment of Computer Vision and, in fact, of any Machine Learning system. He mentioned the pitfalls of comparing algorithms with one single metric, such as the error rate/accuracy, in one single dataset as in huge competitions such as ImageNet, without any second order analysis (RIP error bars). Building on that observation, he introduced a couple of methods that should be employed to improve rigour when comparing vision models.
Indeed, Filament AI and Prof Clarke are part of a joint InnovateUK project to incorporate this and other insights into Computer Vision software tools.
And to the winners…
Last but not least (fine, just last…), the main social activity of the program was a typical northeastern pub night out, with a rather (a)typical pub quiz, as you have never seen before. Who would tell that Computer Vision folks are so competitive when it comes to pub quiz?
An unusual pub quiz
Further reading (references)
[Szegedy et al] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. Intriguing properties of neural networks. International Conference on Learning Representations, 2014.
[Campbell and Kautz] Learning a Manifold of Fonts, In ACM Transactions on Graphics (SIGGRAPH) 33(4), 2014
| Two research trends in Computer Vision (and one caveat) | 112 | two-research-trends-in-computer-vision-and-one-caveat-10b2fa29eec1 | 2018-09-03 | 2018-09-03 16:13:37 | https://medium.com/s/story/two-research-trends-in-computer-vision-and-one-caveat-10b2fa29eec1 | false | 1,057 | We are a team of designers and developers focused on bringing AI & Machine Learning technologies to organisations. | null | null | null | Filament-AI | filament-ai | MACHINE LEARNING,MACHINE LEARNING AI,ARTIFICIAL INTELLIGENCE,CHATBOTS,CHATBOT DESIGN | FilamentAI | Machine Learning | machine-learning | Machine Learning | 51,320 | Antonio Campello | null | a24226700f0e | antoniocampello | 43 | 42 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-04-23 | 2018-04-23 16:57:48 | 2018-04-23 | 2018-04-23 16:58:23 | 0 | false | en | 2018-04-23 | 2018-04-23 16:58:23 | 3 | 10b5322ac1b3 | 2.426415 | 2 | 0 | 0 | The cornerstones of a successful financial organization are efficiency, security and offering a high ROI for themselves and their… | 5 | 4 Machine Learning (ML) Applications in Finance
The cornerstones of a successful financial organization are efficiency, security and offering a high ROI for themselves and their customers. Machine learning (ML) is starting to gain traction in the industry as it helps businesses run their systems smoothly without having to overspend to scale up operations. The technology is especially important for the finance world due to the need for accuracy, the high volume of work processes they deal with and the fact that everything is digitized nowadays.
With ML, companies have developed new ways of making investment predictions, managing customer portfolios, marketing and preventing fraud. These are all key elements of ensuring a brand’s reputation remains intact by satisfying their customers with timely advice in an organized and consistent manner. With the right software professionals digitizing and automating your business’ systems and operations, you can enhance your ROI without sacrificing your customer’s needs. Intelligent automation company WorkFusion offers a self-service Robotic Process Automation solution called RPA Express designed to integrate core systems and automate transactions.
Here are four ML applications in the finance world:
1) Making Investment Predictions
Trading services using ML technology have been helping companies maximize their investing opportunities with smart algorithms that know when to buy and when to sell a stock. Effective investing software allow investors to automatically place an order on a stock when it reaches a certain price, or sell it the per-share price drops below a certain figure. ML can also make investing recommendations based on automated analysis of market trends.
Hedge funds have reaped the benefits of these work processes as they’ve shifted away from traditional predictive analysis methods and adopted ML algorithms. The likes of JPMorgan and Morgan Stanley have already developed automated investment advisors that run on ML, improving their bottom line in a way that your company can too.
2) Managing Portfolios
The technology is also capable of maximizing a customer’s risk with algorithms that adjust an investor’s financial portfolio based on the goals and risk tolerance of the user. The idea is to create a customized portfolio that attains personal information such as age, income, current financial assets and desired retirement age, and offers users investment opportunities based on their situation.
This software then spreads investments across a variety of asset classes to reach the user’s goals. These ML algorithms change as a user’s goals shift, while also considering changes in the market in real time.
3) Marketing
A perhaps unheralded role of ML in finance is its ability to make predictions based on past behaviors in order to improve an organization’s marketing campaign. Software can examine web activity, mobile app usage and feedback from previous ad campaigns to predict how effective a new marketing strategy will be for a customer. Marketing executives have been experiencing great success in the fintech world once by implementing this technology. The number of machine-learning-based advertising startups has increased dramatically in the last year, suggesting that the technology is the next big trend in marketing.
4) Detecting Fraud
We live in a time of data breaches due to more financial companies becoming digitized, which means their systems are containing large amounts of valuable company data. Previous financial fraud detection systems were reliant on large, complex sets of rules that required a lot of manpower. However, ML has helped to streamline the fraud detection and prevention process by learning and calibrating new threats, or even potential threats.
With the technology, systems can monitor and detect anomalies and flag them for security teams to tackle. ML algorithms compare each transaction against account history in order to monitor whether or not a transaction is fraudulent. The technology can also, in real time, flag unusual activity such as large cash withdrawals or out-of-state purchases, delaying a transaction until a human can make a decision.
| 4 Machine Learning (ML) Applications in Finance | 10 | 4-machine-learning-ml-applications-in-finance-10b5322ac1b3 | 2018-04-24 | 2018-04-24 10:00:33 | https://medium.com/s/story/4-machine-learning-ml-applications-in-finance-10b5322ac1b3 | false | 643 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Karl Utermohlen | Tech writer focusing on AI, ML, apps and cybersecurity. MFA in Creative Writing from the U of Idaho. Writes for PSafe, Upwork, First Page Sage, WeContent, IP. | 31382c5e0d8d | karl.utermohlen | 314 | 35 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-11-25 | 2017-11-25 11:54:22 | 2017-11-25 | 2017-11-25 12:07:44 | 1 | false | en | 2017-11-25 | 2017-11-25 12:07:44 | 4 | 10b82f71c5b7 | 3.154717 | 3 | 1 | 0 | A few days back, I read an article by Rachel Arthur “Artificial Intelligence Dominates the Retail Conversation at Shoptalk Europe” and… | 5 | Artificial Intelligence Impact on Humans
A few days back, I read an article by Rachel Arthur “Artificial Intelligence Dominates the Retail Conversation at Shoptalk Europe” and shared it on social media. It said the following about future of AI in retail:
“By 2020, 85% of customer interaction in retail will be managed by AI, according to Gartner, multiple speakers at Shoptalk Europe said. And 30% of all companies will employ AI to augment at least one of their primary sales processes by the same time period, they further added.”
A friend responded with a very good question on my post:
“What benefit will that bring to the blue collar class or humans in general? Will it create more collective wealth or render vast majority of semi skilled humans jobless?”
That’s a pertinent question. Here is how I responded to him:
Let me first clarify what benefit in general a customer gets if a retailer is using innovation and technology to enhance customer experience: If you go to a retail store and its not using any innovation, usually the customer experience you get is crappy, its just a transaction and there is no understanding of the customer’s interests and their likes/dislikes thus leaving customer unhappy. If the brands and retailers are using innovation and technology to learn more about their customers, they can give them more personalised and enriched experience.
Example: a mother who has bought pampers from a retail store, will be able to get offers/discounts on baby lotion, baby milk and other baby products on her next store visit. A person regularly buying pet food will get a special offer/discount on his next purchase whereas others pay standard price (a reward for him for being a loyal customer).
Now, coming to your original question, the debate of technology causing job losses is not new. First industrial revolution or second or third were all driven by new technologies. “Sapiens: A Brief History of Humankind” is a good book that walks you through history and how we all got to where we are today. People who upgraded their skillset by unlearning and relearning survived whereas others who lagged behind didn’t do very well. We are on the cusp of Fourth industrial revolution according to Dr Klaus Schwab. Innovators and disruptors are leading the way. We need to UNLEARN and RELEARN. Smart phones are now in the reach of blue collared people as well. They can either waste their time Facebook’ing, WhatsApp’ing, watching talkshows or discussing politics all day long eventually rendering them jobless. I know someone who lost his job in a traditional company and is now unable to get employed. The problem is not lack of jobs in the market (in fact there are plenty of jobs in the market), the problem is that the old jobs are becoming obsolete and people performing those jobs are becoming redundant (think of traditional Telecom operators). The person I know who lost his job is spending all the time on WhatsApp gossiping with his friends instead of upgrading his skillset to unlearn and relearn some new skillset.
Its so easy to learn anything new these days. You don’t need to go anywhere. All you need is a smartphone (not even a laptop or PC) and an internet connect. You can learn new skills (data science/AI/Machine Learning, programming, UX design, UI design) on websites like CourseEra, Udemy, KhanAcademy, YouTube (a lot of useful content available for free), Canva (graphic designing), listen to various Podcasts. So, in my view, those humans who will adapt and embrace technology by learning new things, will benefit immensely from it. They will be able to use the power of AI and Machines to enrich lives. Not only they will prosper but they enable their family members and relatives as well to learn new skills and get new jobs (thus all of them becoming better off financially). So yes, as a result of embracing technology and using it to enrich lives, more collective wealth is a possibility. And that is how I choose to see it: AI and Tech enabling humans, enriching human lives, giving an opportunity to those semi-skilled humans to upgrade their skills and learn anything anywhere as long as you have smartphone and you are online.
Lastly, a friend David Nordfors who is running this initiative Innovation for Jobs, wrote in his article and I firmly believe in it while running our AI based startup: “Think people-centered economy. Care about people. We use machines to raise the value of people.”
Do you think AI will enable blue-collared humans to learn new skills or render them jobless? Please feel free to share your view in the comments. Thanks
| Artificial Intelligence Impact on Humans | 35 | artificial-intelligence-impact-on-humans-10b82f71c5b7 | 2018-06-12 | 2018-06-12 10:18:52 | https://medium.com/s/story/artificial-intelligence-impact-on-humans-10b82f71c5b7 | false | 783 | null | null | null | null | null | null | null | null | null | Technology | technology | Technology | 166,125 | Ali Hasnain | CEO DealSmash, Entrepreneur with a passion for IT/AI/Mobile/Startups/Retail/making this world a better place.Love languages,traveling. Live outside comfortzone. | 25a894bea2ea | ali.shah | 5 | 9 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-07-21 | 2017-07-21 00:14:49 | 2018-02-09 | 2018-02-09 16:52:16 | 1 | false | en | 2018-03-16 | 2018-03-16 02:25:48 | 1 | 10b92a53e0a8 | 1.471698 | 3 | 0 | 0 | There are many Algorithms that exists in market, which claim to summarize videos. In this blog, I will go through some of common methods… | 2 | Video Summarization: An survey of existing algorithms
There are many Algorithms that exists in market, which claim to summarize videos. In this blog, I will go through some of common methods that are used, and discuss outcomes that are obtained.
Code to this, can be obtained at Github Repository.
shruti-jadon/Video-Summarization
Video-Summarization - Experimenting with different Summarizing techniques on SumMe Datasetgithub.com
For this project I used both keyframe extraction Methods. For static keyframe extraction, we extract low level features using uniform sampling, image histograms, SIFT and image features from Convolutional Neural Network (CNN) trained on ImageNet. We also use different clustering methods including K-means and Gaussian clustering. We use video skims around the selected keyframes to make the summary fore fluid and comprehensible for humans. We take inspiration from the VSUMM method which is a prominent method in video summarization.
Methods Used:
Uniform Sampling
Image histogram
Scale Invariant Feature Transform
VSUMM: This technique has been one of the fundamental techniques in video summarization in the unsupervised setup. The algorithm uses the standard K-means algorithm to cluster features extracted from each frame. Color histograms are proposed to be used in one paper. Color histograms are 3-D tensors, where each pixel’s values in the RGB channels determines the bin it goes into. Since each channel value ranges in 0 − 255, usually, 16 bins are taken for each channel resulting in a 16X16X16 tensor. Due to computational reasons, a simplified version of this histogram was computed, where each channel was treated separately, resulting in feature vectors for each frame belonging to R 48 . The nest step suggested for clustering is slightly different. But, the simplified color histograms give comparable performance to the true color histograms. The features extracted from VGG16 at the 2nd fully connected layer were tried, and clustered using kmeans.
ResNet16 on ImageNet: Just same as VGG16, but only on ResNet16 and the last layer is considered as features. This we can obtain by chopping off the layer before the loss function.
Sample Results(Frames) Obtained for Videos.
| Video Summarization: An survey of existing algorithms | 23 | video-summarization-an-survey-of-existing-algorithms-10b92a53e0a8 | 2018-03-16 | 2018-03-16 02:25:50 | https://medium.com/s/story/video-summarization-an-survey-of-existing-algorithms-10b92a53e0a8 | false | 337 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Shruti Jadon | ML Research Engineer @ Quantiphi, Graduated from UMass Amherst. LinkedIn: https://www.linkedin.com/in/shrutijadon/. Website: https://www.sjadon.com/ | f5e970dd9786 | shrutijadon10104776 | 28 | 20 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 80fa87d516d3 | 2018-08-06 | 2018-08-06 10:34:46 | 2018-08-06 | 2018-08-06 10:38:34 | 1 | false | en | 2018-08-06 | 2018-08-06 10:49:20 | 2 | 10b95d628bdc | 2.177358 | 0 | 0 | 0 | ‘The modern definition of artificial intelligence (or AI) is “the study and design of intelligent agents” where an intelligent agent is a… | 4 | AI in Finance — It’s All About the Customer
‘The modern definition of artificial intelligence (or AI) is “the study and design of intelligent agents” where an intelligent agent is a system that perceives its environment and takes actions which maximizes its chances of success.’
-Science Daily
Though the definition of AI has changed from what John McCarthy gathered in 1956. Today, the term pertains to how machines can imitate human intelligence. In our tech-laden business environment, all or most work is done using smart devices. And with the advent of the Internet of Things (IoT) and trend of business digitization, the amount of data created by any business will need more than a couple of humans to comprehend the stories that develop.
However, AI isn’t at a stage where AI can make decisions similar to human consciousness. ‘Narrow AI’, is the stage we are at. AI in its nascent stage is the use of programs to collate and comprehend data in a pre-defined manner so that we humans don’t have to do rudimentary work over and over again.
Coming back to the topic of AI in finance, data creation is the most in this field. Here, AI focusses on completing specific tasks in a matter of microseconds. Finance companies are making investments in implementing AI in employee-facing and consumer-facing applications. However, more must to done to fill the gap between consumer expectations and solutions being provided.
Forrester, in 2017, published a report that the gap between firms that have embraced the digital and traditional firms will widen in the years to come. To say the least, digital disruption will make traditional firms obsolete in a few years.
The main reason for this is, end-users need financial institutions to adapt to disruption at the speed of technological advancements in the consumer sector. The need of the hour for businesses is then to understand the change in demand, to create an intuitive environment where businesses understand matters such as user behavior and trends, uncovering new insights and smarter decision-making.
Hence, Banking and Finance Systems (BFS) need to create strategies where AI is a core part of their mobile-first strategies. They need to go above and beyond an elementary digital strategy and invest in processes that ‘involve’ the customer in innovative ways with AI-driven processes.
Another point that is usually ignored is the cost-reducing characteristic of AI. AI can help businesses save costs. This is well known. However, we normally ignore the customer-facing element in the equation.
AI can also help customers save costs by helping businesses offer customized products/services in more efficient ways. The advantages of AI in finance is as follow:
· Enhanced data extraction
· Normalizing numbers (data analysis)
· Instant answers
· Data regulation compliance
· Risk assessment
· Fraud detection
· Personalized consulting
With advances in portfolio management, algorithmic trading and insurance, the future value of AI in finance and banking is increasing by the hour. The numbers game is at hand, and AI-enabled mobile apps are the future for businesses willing to face an increasingly Darwinian financial global market.
| AI in Finance — It’s All About the Customer | 0 | ai-in-finance-its-all-about-the-customer-10b95d628bdc | 2018-08-06 | 2018-08-06 10:49:20 | https://medium.com/s/story/ai-in-finance-its-all-about-the-customer-10b95d628bdc | false | 524 | All about Future Technology and Trends | null | null | null | FutureTechMedia | null | futuretechmedia | ARTIFICIAL INTELLIGENCE,INTERNET OF THINGS,AUGMENTED REALITY,VIRTUAL REALITY,DIGITAL TRANSFORMATION | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Kumar Nikate | Stargazer and TechGeek. | 193a6ef548d0 | kumarnikate | 0 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-12-24 | 2017-12-24 20:42:01 | 2017-12-24 | 2017-12-24 21:23:38 | 1 | false | en | 2017-12-27 | 2017-12-27 17:19:12 | 15 | 10b9acdf21fa | 4.596226 | 1 | 0 | 0 | It’s time to look back on 2017 and, with some perspective and humor, look ahead to what awaits in 2018. I have always liked year-end… | 5 | Tech awards for 2017 — AI, Bitcoin, VR, Blockchain, MarTech, Social Media and 2018 outlook
It’s time to look back on 2017 and, with some perspective and humor, look ahead to what awaits in 2018. I have always liked year-end awards. When I ran GameDaily, our Person of the Year and top 10 runners up were always our most popular. So here are my personal for 2017.
Being Evil — Social Media. 2017 was when we learned that Facebook, Twitter and Google failed to have adequate protections about abuse of their ad platforms by people trying to influence elections, promote terrorism, pedophelia and fake news. We need to disagree with Sheryl Sandberg since Facebook IS a media company, though they claim to be a platform without taking responsibility for suppressing or promoting content based on their biases. These companies arrogantly sent lawyers instead of their CEOs to testify in front of Congress in October and sheepishly had no good answers as to why they all “didn’t know” what was happening on their platforms. In fact, Prof Scott Galloway makes a compelling argument on this video why the Big 4 Tech companies should be broken up and HBS has an article on the same topic by James Haskett The bottom line — these monopolistic companies have shown they cannot be trusted with control of data that they acquired through massive growth.
Biggest Disappointment — VR. I love the virtual reality technology, promise and excitement. However the lack of business models for most developers is forcing many startups and established game and entertainment companies to scale back or abandon their VR efforts. Also who really wants to be seen wearing one of those dorky headsets? I see the best use cases in B2B simulation and training but consumer adoption at scale is a long way off. My chips are on AR for a consumer opportunity.
Most Overhyped — AI. I think artificial intelligence is cool, promising and has many use cases. I prefer the term “augmented intelligence” as complimenting human actions, emotions and instincts. Every event I attended in 2017 was about “AI changing the world”. AI was interchanged with machine learning, chatbots, voice services like Alexa, neural networks until there was so much jibberish in the AI salad bowl that panels sounded like navel-gazing about the Future without much clear direction. Panels at a recent event included “AI will drop your customer support by 90%” and “Expert Panel Discussion on Future of AI and New Technologies.” Here are 51 CEOs making AI predictions for 2018 and MIT sharing their 7 Deadly Sins of AI Predictions. AI products and services will be helpful but replacing us with machines has been called a threat to civilization by Elon Musk and others. Personal data aggregation, appending and modeling are problems as we’ve seen data service providers weaponize customer data to power big tech screw ups at Facebook, Uber, Google, Yahoo and so many others. Hopefully 2018 will provide some practical, measurable AI use cases that are helpful and don’t involve Faustian deals surrendering our personal data in order to use these services.
Most FOMO — Bitcoin. (Not to be conflated with blockchain, see below) A year ago, most people didn’t care or hadn’t heard of Bitcoin when it was worth about $980. Then it went to $5,000, $10,000, $20,000 then back down to $12,000 — well you get the picture. Rational people went from predicting the world was ending to projecting Bitcoin at $1 million and everything in between. Personally, I’ve been involved in Bitcoin since 2012 and I’m bullish on Bitcoin, Bitcoin Cash, Ethereum, EOS and many other cryptos longer term as decentralization of financial markets will be a major democratizing trend.
Most Potential — Blockchain. There is so much going on in this space — decentralized fill-in-the-blank projects, ICOs, new tokens, etc. I see blockchain as Web 4.0 as it removes the monopoly of truth from governments, huge corporations, networks and data aggregators. I think of good blockchain projects as having 4 critical elements: 1) Decentralization that dramatically increases efficiency and lowers costs, 2) A great team of talented, good, ethical people, 3) Improving privacy, security and enabling transactions, 4) Connecting more people in a way not previously possible or feasible. These points are what I use as my starting point when considering an advisory or consulting role with a blockchain company. Yes, there are bad actors, scams, stupid ICOs, regulatory hurdles and a utility vs. security token issue. But think back to the first time someone told you about a hot new trend (self driving cars, virtual goods, Uber, the Internet). They all sounded stupid and crazy at first but emerged as important advances. 2018 will be a year of tremendous innovation for blockchain infrastructure, companies, projects and entrepreneurs. I am looking forward to increase my already frenetic pace of learning to greater serve this exciting space.
Most Almost-Mature — MarTech. Now most companies understand the tie-in between marketing, sales and growth. It has taken a long time. Marketing clouds, data services, content marketing, social listening and ABM are now almost mature so companies can buy SaaS products and services that help them be more efficient, accountable and better serve their customers on all platforms. I was very close to this world while at Oracle. The MarTech LumaScape is a mess that still needs to be streamlined in the year ahead.
Wishes for a better year ahead Many of my friends and colleagues have gone through major career and work/life transitions in the past year. Some took new directions by choice, others have faced transformations, downsizing and purges that were blatant ageism in the tech world. Too many over 40 professionals kept hearing they were “over qualified”. In 2017, society confronted the rampant sexism and abuse scandals by bad men against many women. Positive changes are finally happening. Now, we need to confront the reality of ugly and illegal age-based discrimination afflicting so many women and men. They silently struggle to find new meaningful roles in tech companies that seldom welcome those over 40. According to an Indeed survey, 43% of respondents worry about losing their jobs because of their age and 18% worry about it “all the time.” This is illegal, wrong and needs to stop.
Finally I hope you all can open your eyes and hearts and think of those you can help with you talents, passion and wisdom. Trying new things, stepping into the unknown can be scary but the rewards will come. Try something new. Take on a passion project. Ask how you can help others and roll up your sleeves. Persistence will open new doors. I like this piece on failure, persistence and success using the Wright Brothers as a case study. I hope you enjoyed my awards and may you and yours have a wonderful and blessed 2018.
Feel free to comment or send me a note with your thoughts. Thanks! Mark
| Tech awards for 2017 — AI, Bitcoin, VR, Blockchain, MarTech, Social Media and 2018 outlook | 2 | my-awards-for-2017-ai-vr-bitcoin-blockchain-martech-and-2018-outlook-10b9acdf21fa | 2017-12-27 | 2017-12-27 17:19:12 | https://medium.com/s/story/my-awards-for-2017-ai-vr-bitcoin-blockchain-martech-and-2018-outlook-10b9acdf21fa | false | 1,165 | null | null | null | null | null | null | null | null | null | Bitcoin | bitcoin | Bitcoin | 141,486 | markfriedler | Entrepreneur, CEO. Sales,Bus Dev,Mktg leader, builder. Blockchain, crypto, ICO trusted advisor to 12 platforms, Fintech, Dapps. Past in games, SaaS, media. | 9a1e070fe668 | markfriedler | 264 | 137 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-09-05 | 2018-09-05 18:13:31 | 2018-09-10 | 2018-09-10 14:01:01 | 1 | false | en | 2018-09-10 | 2018-09-10 14:01:01 | 2 | 10b9db47dcc9 | 1.226415 | 8 | 0 | 0 | At Petuum, we firmly believe that humans can and should collaborate with AI technology to build things that make the world better. Our… | 5 |
Petuum and Cleveland Clinic join forces for AI XPRIZE competition
At Petuum, we firmly believe that humans can and should collaborate with AI technology to build things that make the world better. Our mission is to industrialize AI and make the technology widely accessible and usable so that, with our intuitive machine learning building blocks, anyone can create an AI solution that addresses challenges big and small.
We are excited to announce that we are collaborating with Cleveland Clinic, a leading academic medical center, on the submission of an AI-powered diagnostic tool to the IBM Watson AI XPRIZE challenge. XPRIZE is an open challenge sponsored by IBM Watson to develop AI technology that collaborates with humans to address societal challenges. The multi-year competition closed its latest submission round on September 8 with 62 teams competing worldwide.
Petuum is working with Cleveland Clinic to deploy, test, and validate an Artificial Intelligence Diagnosis Engine (AIDE) tool which will provide clinical diagnostic support to physicians and other healthcare workers. The AIDE tool is powered by Petuum’s AI platform, which applies advanced machine learning algorithms to medical record data. The tool includes a Diagnostic Coding Solution with ICD-10 or equivalent code prediction and diagnoses reconciliation.
The primary goal of this tool is to increase the accuracy and efficiency of the diagnostic process, thereby decreasing the amount of time healthcare providers spend on tedious tasks like paperwork and data entry, and enabling them to spend more quality time with patients.
We are proud to work with such an innovative hospital and research center, and we’re excited to continue collaborating on AI applications for healthcare.
| Petuum and Cleveland Clinic join forces for AI XPRIZE competition | 94 | petuum-and-cleveland-clinic-join-forces-for-ai-xprize-competition-10b9db47dcc9 | 2018-09-10 | 2018-09-10 14:01:01 | https://medium.com/s/story/petuum-and-cleveland-clinic-join-forces-for-ai-xprize-competition-10b9db47dcc9 | false | 272 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Petuum, Inc. | One Machine Learning Platform to Serve Many Industries: Petuum, Inc. is a startup building a revolutionary AI & ML solution development platform | c0fa6af5e77f | Petuum | 365 | 24 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 863f502aede2 | 2017-09-08 | 2017-09-08 17:57:15 | 2017-09-08 | 2017-09-08 18:13:37 | 11 | false | en | 2017-09-15 | 2017-09-15 14:48:56 | 2 | 10baae3a4b48 | 11.669811 | 5 | 0 | 0 | Synced meets the people transforming a former steel town into a metropolis powered by artificial intelligence and robotics. | 5 | Pittsburgh’s Pivot to Artificial Intelligence
Synced meets the people transforming a former steel town into a metropolis powered by artificial intelligence and robotics.
When the steel mills shut down, America’s Rust Belt cities struggled to deal with stagnant economies, stale industries, and deteriorating infrastructures. Its strength in education and healthcare was all that saved Pittsburgh, Pennsylvania from bankruptcy.
Some 30 years later, buzzwords like “maker”, “tech”, and “innovation” scroll across screens at Pittsburgh International Airport, heralding the city’s rebirth as one of the most important AI hubs in America.
At the centre of Pittsburgh’s revival is Carnegie Mellon University, ranked #6 worldwide in computer science by Times Higher Education. The university recently announced a new artificial intelligence R&D initiative, “CMU AI,” which knits together more than 100 faculty members and 1,000 students, kickstarting a new chapter in the city’s intelligent evolution.
Carnegie Mellon University: American Robotics in Pittsburgh
Carnegie Mellon University has America’s largest robotics research center, setting it apart from other STEM universities such as Caltech, MIT, and Stanford. Professor Martial Hebert, Director of the Robotics Institute, tells us, “when the institute was founded there was a decision that it should have its own budgets, faculty, and researchers. This created a unique identity.”
In 1979 CMU secured US$5 million (roughly 32 million today) from Westinghouse Electric President Tom Murrin to fund the Robotics Institute. Nine years later the institute granted its first PhD in robotics.
In 1980 Professor Marc Raibert created CMU’s Leg Lab, which later gestated the famous spinoff company Boston Dynamics. Professor Red Whittaker developed a robot vehicle to help clean up the Three Mile Island nuclear meltdown in 1979. Professor Takeo Kanade’s virtualized reality system “EyeVision” broadcast 3-D replays of the 2001 NFL Super Bowl. And long before big tech companies and auto-manufacturers jumped in, NavLab introduced the first self-driving vehicle.
Built by CMU’s Tartan rescue team for disaster response, CHIMP is 150 cm tall robot with a 25 cm reach radius. It won third place at DARPA’s 2015 Robotics Challenge (DRC).
Today the Robotics Institute has 116 faculty members, 33 labs, and 98 ongoing projects under its umbrella.
To further commercialize research and development, the National Robotics Engineering Center (NREC) opened in 1996 with the support of NASA. The center receives both government and corporate contracts for agriculture, mining, nuclear, space, and defence projects. “Government contracts are long-term that run for five to ten years,” explains Professor Hebert, “while corporate contracts run for one to three years. We prefer programs that address ideas we can built on.”
Students are deeply involved in the process of spawning next-generation robots. In the basement of Cohon University Center, members of the Robotics Club tinker with drones, quadcopters, humanoid robots, and other such machines.
Robotics Club President Sean Reidy, Grad Student Rep Brad Powell, and Training Officer Oliver Zhang (left to right).
Brad Powell, a master student in electrical engineering, says Randy Pausch’s lecture “Really Achieving Your Childhood Dreams” inspired him to apply to Carnegie Mellon University. “It’s the creative freedom and integrity that attracted me here. Many people join the Robotics Club for the opportunity to work on research projects. I am currently working on the Lunar Rover project for Red Whittaker. There are other students working in the NavLab or computer vision lab.”
Students appreciate not only the top-level tutoring, but also the close relationships with their professors. As one student tells us, “We know [Associate Teaching Professor] David Kosbie will take time out of his day to coach students, not just on computer science, but also on being an adult. A friend of mine happened to walk with him across the lawn, and by the end of the walk had received a new workout regiment and advice on diet problems!”
Carnegie Mellon University: The Frontier of Machine Learning Research
“CoBot” greets visitors to CMU’s Gates Hillman Center. The robot can deliver messages, transport objects, and escort visitors. In a very human way, CoBot will ask passersby for help if it runs into a problem; and generate an excuse if it’s late. As an instrument to study real-time navigation and multi-robot multi-task planning, Cobot is constantly “learning” new capabilities.
There are currently four unique CoBots costing around $10,000 each at CMU. Equipped with a screen interface, motorized wheels, a LIDAR sensor, and a kinect depth-camera with six-meter range view. CoBot 4 has only camera sensors and no LIDAR to help with navigation. — Image courtesy of CMU
Professor Manuela Veloso, Head of CMU’s Machine Learning Department, is the chief researcher behind CoBot. With an M.Sc. in electrical engineering from the Instituto Superior Técnico in Lisbon, Professor Veloso first joined CMU as a PhD student in the late 1980s, driven by her passion for industrial automation. When asked about the “techie” appearance of her robots, she dismisses the idea that robots should be built to resemble humans: “Robots should stay like robots. You don’t make a fridge look like a human. I care about whether they work autonomously, not how they look.”
This functional view of machine intelligence carries its genes from the school’s earliest founders. “Computer science research started at CMU in the early 1950s, when Herbert Simon and Allen Newell co-founded the Graduate School of Industrial Administration, conducting research on symbolic reasoning and computation,” says Professor Veloso. Newell went on to win the 1975 Turing award, and Simon was awarded the 1978 Nobel Prize in Economic Sciences. “I remember Allen Newell saying to us, ‘It’s easy to talk about what you would like a computer to do, but it’s hard to make them actually do it’.”
Professor Veloso has made many media appearances explaining her field for the general public. She also helped launch the RoboCup initiative in 1997. In 2015 her CMU team CMDragons won first place.
“You need to understand that artificial intelligence will not go away, it will be more and more present in our lives. A lot of the stuff we have today is called human-computer interaction (HCI). But I don’t call it HCI,” says Professor Veloso. “It’s human-AI interaction — we are interacting with artificial intelligence. And this is a new field of study.”
The Machine Learning Department is under the School of Computer Science, one of the seven CMU schools. It has 22 core faculty members, 40 faculty members, and approximately 60 PhD students.
In recent years there has been an upsurge of interest in deep learning from international applicants. Chenghui Zhou is one of six Chinese students out of a total of eight accepted into the machine learning PhD program last year.
Zhou’s thesis involves enabling a robot to guide the blind using deep reinforcement learning. She says the requirements proposed by her supervisor Professor Veloso are very specific, for example, “when the program sees a person greeting its blind owner, it will stop and remind the agent to wave back.” As a first-year student, she is still working on how to make the robot intercept a moving agent.
Zhou says she didn’t know what she signed up for when accepting her admission offer. “My dad was a computer science professor, and when I was a kid I always thought he was very idle. Being a PhD student really means working 24/7, but also feeling very unproductive at times. Actually my family is against girls doing a PhD. I think that is a myth, I think girls can do just as well.” Zhou may find inspiration in her supervisor Professor Veloso, who was keynote speaker at the 2015 Grace Hopper Celebration of Women in Computing conference, and has done much to encourage women in computing.
Former head of the department Professor Tom Mitchell has an office right across the hall from Zhou. His current research involves using statistical machine learning algorithms to analyze fMRI data, teaching mobile phones to learn from user instructions, and a software system called Never Ending Language Learning (NELL).
Professor Tom Mitchell has been teaching at CMU since 1986. He has published numerous books on machine learning and won several awards in the field.
NELL is an endlessly inferencing machine that categorizes semantic knowledge on the web. It has been running nonstop since 2010, making it an open-ended approach to the limitations of the former expert system that failed.
There are eight different algorithms deployed on NELL, each helping to affirm the others’ observations. Explains Professor Mitchell, “One method is looking at the statistics of surrounding clauses, like when you say ‘mayor of Pittsburgh’, Pittsburgh is probably a city. The other method is looking at character sub-strings like “burgh”, which is a common suffix for cities. If algorithms make independent errors they can help each other correct them.”
One of NELL’s recent learned facts marked at 100.0 confidence is that Christopher Nolan directed the movie “Batman Begins.” This piece of information is stored in NELL’s knowledge repository to help the system become “wiser.”
NELL also fact-checks with a system called Never Ending Image Learner (NEIL) from Professor Abhinav Gupta’s research team, whereby the two systems exchange inferences on visual and semantic knowledge respectively. “NEIL is crawling, collecting, and classifying images. It is communicating with NELL, and they are teaching teach other,” says Mitchell. In the process, NELL and NEIL may help machines break free from the present constraints of labelled data.
University of Pittsburgh: AI Augmented “Eds and Meds”
Over the past few years, Professor Andrew Schwartz has been working on a neural prosthetic project that helps immobilized patients restore arm and hand functions. Based at the University of Pittsburgh’s Neurobiology Department, Professor Schwartz merges domain expertise with a group of 20 researchers including electric engineers, bioengineers, statisticians, and machine learning scientists.
“We have electrodes on the brain that send signal waves, while decoded signals are used to move prosthetic arms,” explains Professor Schwartz. “The system get two types of information streams, one from the camera’s visual artificial sensor, and the other from the brain. They are merged to make the robot work.” The robot arm is benefiting from machines’ increasing capability to analyze massive parallel streams of data.
Professor Andrew Schwartz offers to shake a robotic arm controlled by signals from a patient’s brain. — Image courtesy of UPMC
Professor Schwartz believes that the present research on neural prosthetics, as pre-approved by the FDA, still requires two to three years of refinement: “We still want to remove cords from the skull for wireless transmission, and develop more dexterous arms with better materials.” Moreover, he admits we still don’t have a complete understanding of how the brain makes everything work. “Humans have 10 degrees of freedom on our arms and shoulders and 20 degrees of freedom in our hands. At this moment we still don’t know how brain signals get transmitted to the spinal cord and limbs.”
Supporting Professor Schwartz’s research is the University of Pittsburgh Medical Center (UPMC) — the largest healthcare and insurance provider in Pennsylvania, with 3.3 million members, 25 merged hospitals and 3,800 practicing physicians. The hospital’s innovation unit UPMC Enterprises shares an office with Google at Bakery Square, where there are currently 250 employees including data scientists and technologists.
UPMC was the birthplace of PACS, one of the earliest medical data archiving systems. The hospital receives petabytes worth of data, an amount that is doubling every 18 months. Keeping up with the data volume for real-time decision-making has become a priority.
“If our hospital beds are filled we have failed,” says Dr. Rasu Shrestha, Chief Innovation Officer at UPMC and Executive Vice President at UPMC Enterprises. Unlike the stereotypically drab hospital environment, UPMC Enterprise’s open-concept office is “hip” like its upstairs neighbour Google, with large murals, a pinball machine, and walls covered with stickies.
A few years back, UPMC set up Covergence, a tablet-based platform which helps physicians access patient records. The project however failed to scale outside of UPMC, and was shut down. Dr. Shrestha explains that “at hospitals we don’t like to say ‘failing’, but at the innovation unit our approach is ‘agile’ and we let bad ideas fail fast.”
The innovation unit also invests in healthcare startups, one example being Vivifyhealth. “We monitor patient vitals with their consent. If the patient knows that they are falling sick, or that they are about to fall sick, we can prevent that episode and intervene.” says Dr. Shrestha. “Patients leave the hospital with [tracking] technology instead of pills.”
The current developments in artificial intelligence are in alignment with the hospital’s strategy. “There are many applications based on structured textual data including medical, allergies and lab values; and on unstructured data including surgical discharge summaries, radiology reports, and lab summaries. There are increasing capabilities in pixel data as well for image pattern recognition,” explains Dr. Shrestha.
UPMC has invested close to US$2 billion in technological innovation. In 2017 the hospital announced an R&D partnership with Microsoft, which involves the use of cloud and AI technology to help digitize its massive reservoir of paper documentation.
IAM Robotics: Founding a Warehouse Robotics Startup
After earning his PhD in Robotics, Mechanical Engineering and Electrical Engineering from the University of Florida, Tom Galluzzo came to Pittsburgh in 2009 to work at NREC as a robotics engineer. Four years later, he founded IAM Robotics, a company selling computer vision augmented robot arms for inventory retrieval.
IAM Robotics is situated in an industrial outskirt of Pittsburgh. The team includes engineers from CMU, NREC, University of Pittsburgh, and Pennsylvania State. For Galluzzo, a benefit of starting in Pittsburgh is access to 100 qualified CMU robotics graduates every year.
Prior to founding his own company, Galluzzo worked on the DARPA ARM-S project at NREC, where he learned the limitations of anthropomorphic hands on robots. “Those systems are intricate, easy to break, very expensive, bulky and cannot reach into small areas.” He says that project showed him “what was possible and what wasn’t.”
The company’s product robot called “Swift” uses an onboard RGB camera and gripper to retrieve items from a shelf. The gripper is capable of 1.5 lbs of suction power, which is adaptable for boxes and packaged goods. Swift’s computer vision system can see a wide breadth of items in different orientations, while the sensor’s depth component has its own infrared projectors projecting patterns on the scene. The robot is also connected to a scanner called “Flash,” with which a human helper scans new products into the system before assigning pick-up tasks to the robot.
Swift’s performance is equivalent to one full time worker, with one battery charge sustaining the machine for 10 hours. A business installing the system can expect to break even in two to three years.
IAM Robotics’ present customers include Rochester Drug Cooperative, one of the largest healthcare distributors in the US. Galluzzo sees many opportunities for Swift in E-commerce logistics demanding pick and pack services: “People in the US spend 40 billion hours shopping for consumer goods. To replace that, it would take more than the entire US unemployment pool. Without physical automation, there’s no way to scale pick, pack and ship.”
Like most American industrial robot companies, IAM Robotics outsources compartment design, and does in-house assembly of FANUC robot arms and sensors.
Galluzzo says a big hurdle in Pittsburgh is funding. “Without doubt, that took a long time. We had seed funding from Innovation Works, but it’s still challenging.” Galluzzo told Synch that IAM Robotics is hoping to announce good news soon.
Swartz Center for CMU based Ecosystem
The Swartz Center for Entrepreneurship plays a central role in leveraging Pittsburgh’s AI startups and spinoffs. It was founded in 2015 with a US$31 million gift from alumnus and venture capitalist James Swartz to support CMU entrepreneurial activity.
According to a report by Ernst & Young and Innovation Works, “from 2012 to 2016, 318 unique companies attracted funding totalling roughly US$1.7 billion in Pittsburgh”. The most-invested categories were software, life sciences (biotech, medical devices, healthcare IT and health care services), followed by hardware (robotics and electronics).
Six years ago, former serial entrepreneur and venture capitalist Dave Mawhinney was hired by the centre to revive the local startup scene. Mawhinney’s office is located on the second floor of Tepper Business School. He recently flew back from Silicon Valley, and is meeting with investors from Taiwan.
Dave Mawhinney showcases Anki, a robotics startup founded by Carnegie Mellon University alumnus Boris Sofman.
Mawhinney says many international investors are knocking on the door of Pittsburgh’s innovation scene, with particular interest from China. “There has been a lot of tire kicking, and we’re starting to see potential. We are working with the firm Sinovation on several potential deals. Chinese money is funding companies in the US now.”
Mawhinney is however concerned about Pittsburgh’s lack of early risk venture capital from domestic investors, and local companies’ reluctance to deal with startups. “It’s different in Silicon Valley, where today’s startup will be a mature company in ten years, and so people are willing to take the risk.”
AI, IP, and an Innovative Economy
In 2016, the city’s three biggest universities produced 145 patents, up 43% from 2012. Intellectual property licensing is one of the business niches for CMU’s NREC, which currently holds 659 distinct patents.
In a recent high-profile case, CMU sued semiconductor giant Marvell for IP infringement. The case settled for US$750 million last year, the second-highest amount in patent compensation history. Proceeds went to patent holder Professor José Moura and his former student Aleksandar Kavcic. A year later the duo, along with Professor Moura’s wife Professor Veloso, donated US$16.5 million to CMU for data science and engineering research.
The Center for Technology Transfer and Enterprise Creation (CCTEC) was created to walk startups through the legal process, and help them navigate intellectual property issues.
In contrast to the bleak 1980s, Pittsburgh’s campuses today are vibrant, its robots are robust and the city has lost its patina of rust. An intelligent engine is driving the transformation by attracting new people, new ideas, and new energy.
This is the third article within featured series AI and Globalization, click to read Building AI Superclusters in Canada, Canvassing Switzerland’s AI Landscape
Journalist: Meghan Han | Editor: Michael Sarazen | Producer: Chain Zhang
| Pittsburgh’s Pivot to Artificial Intelligence | 103 | pittsburghs-pivot-to-artificial-intelligence-10baae3a4b48 | 2018-06-18 | 2018-06-18 01:32:11 | https://medium.com/s/story/pittsburghs-pivot-to-artificial-intelligence-10baae3a4b48 | false | 2,748 | We produce professional, authoritative, and thought-provoking content relating to artificial intelligence, machine intelligence, emerging technologies and industrial insights. | null | SyncedGlobal | null | SyncedReview | syncedreview | ARTIFICIAL INTELLIGENCE,MACHINE INTELLIGENCE,MACHINE LEARNING,ROBOTICS,SELF DRIVING CARS | Synced_Global | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Synced | AI Technology & Industry Review - www.syncedreview.com || www.jiqizhixin.com || Subscribe: http://goo.gl/Q4cP3B | 960feca52112 | Synced | 8,138 | 15 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-06-24 | 2018-06-24 20:12:50 | 2018-06-24 | 2018-06-24 20:14:34 | 1 | false | en | 2018-06-24 | 2018-06-24 20:14:34 | 1 | 10baf41ada8c | 0.822642 | 0 | 0 | 0 | Experienced development manager Christian O’Meara has handled corporate development, sales, and recruitment for technology management… | 4 | Getting Employees Ready for AI and Machine Learning
Experienced development manager Christian O’Meara has handled corporate development, sales, and recruitment for technology management companies for almost 20 years. In 2005, Christian O’Meara co-founded Logic20/20, a consulting firm that advises businesses undergoing digital transformation and automation.
Advances in artificial intelligence technologies have made machine learning indispensable in the business world, where automating data processing and analyzing tasks is becoming increasingly more commonplace.
Machine learning can collect consumer and other relevant data to identify sales trends, replicate customer behavior patterns, and predict long-term outcomes. Employees must have a deep understanding of AI and machine learning in order apply the data it produces effectively.
Though nearly three-quarters of surveyed companies anticipate incorporating AI into their business model, less than 3 percent are investing in training their current employees in this area. To keep a competitive edge, companies today must offer staff in all departments ongoing training and professional development in machine learning and AI concepts.
| Getting Employees Ready for AI and Machine Learning | 0 | getting-employees-ready-for-ai-and-machine-learning-10baf41ada8c | 2018-06-24 | 2018-06-24 20:14:34 | https://medium.com/s/story/getting-employees-ready-for-ai-and-machine-learning-10baf41ada8c | false | 165 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Christian O'Meara | Since 2005, Christian O’Meara has served as the CEO of Logic20/20 Inc., which he cofounded. | 49eddad9563a | ChristianOMeara | 0 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-02-13 | 2018-02-13 13:06:46 | 2018-02-13 | 2018-02-13 13:18:20 | 3 | false | en | 2018-03-12 | 2018-03-12 12:53:25 | 45 | 10bc47cda22c | 7.519811 | 1 | 0 | 0 | Since the inception of Artificial Intelligence (AI) as a computer science in 1956, there have been a number of serious AI winters — periods… | 5 | Image — Ramdas Ware
Why The Next AI Winter Will Not Happen
Since the inception of Artificial Intelligence (AI) as a computer science in 1956, there have been a number of serious AI winters — periods in which AI funding, hence research and development stalled, based on disappointing results and lack of progress in the field.
Some people warn that a next AI winter is imminent. I don’t believe so.
Why a next AI winter
Almost all of the widely publicized impressive breakthroughs in Artificial Intelligence of the last few years — roughly since 2012 with application of GPUs and large data sets — are based on one specific type of AI algorithm: Deep Learning, an application of Machine Learning.
Not the least of AI scientists, Geoff Hinton — sometimes dubbed the “father of Deep Learning” — has openly cast doubt on the broad applicability of Deep Learning. Hinton has doubts about one specific technique mostly used in Convolutional Neural Networks, called ‘back propagation’. In Hinton’s words: “I don’t think [back propagation] is how the brain works.”
Besides, even with the fastest hardware configurations and largest data sets in the world Machine Learning has not overcome some fundamental mathematical flaws, such as its sensitivity for adversarial examples and eg. the risk of overfitting or being stuck in a local minimum.
There still are obstacles to overcome for Machine Learning to become mainstream.
There are many well documented (eg. by Gary Marcus of NYU or Ion Stoica et al. of UC Berkeley) obstacles to overcome for Machine Learning, before it will mature into a technology that will be applicable in business as easily as electricity.
Roughly summarized, specific AI algorithms are a ‘one trick pony’, hardly transferable into other application fields (‘transfer learning’). That is why it is called Artificial Narrow Intelligence, after all. [The reason for this is simple: the structure — the ‘weights’ and ‘biases’ in the mathematics to be precise — is tailored to the specific dataset used for training the neural network. Different dataset, different structure.]
And there still are a lot of technical — and organizational — obstacles to overcome when you actually build a working, useful AI solution, eg. getting your hand on huge amounts of suitable data (cleansed and feature engineered). Not to speak of the ethical discussions esp. in consumer applications on eg. explainable AI, use of biased datasets and privacy.
The AI hype currently is probably at the peak of inflated expectations
Last but not least, AI — like every overhyped new technology wave — will follow the Gartner hype cycle. AI now is probably at the ‘peak of inflated expectations’. So the next few years, it will inevitably go through the ‘trough of disillusionment’.
And yet …
Why not a next AI winter
Simply said, the floodgates are open … for good.
First of all, many of the doomsday stories for AI refer to Artificial General Intelligence, systems as smart as humans. These predictions are right, in my opinion. No Terminator, no Ex Machina. And forget about Sophia. There will be no Artificial General Intelligence any time soon, despite existing singularity theories.
The floodgates are open … for good!
And some say, Hinton’s rejection of back propagation was merely a publicity stunt to promote another technique, the so called capsule networks. Maybe, maybe not. Besides, Hinton speaks of simulating the brain. That is not necessarily the purpose of Machine Learning.
But we must not throw the baby out with the bathwater. Artificial Narrow Intelligence is here, it works, and it is here to stay. And it will improve … dramatically.
Why? Well. There is one all-determining difference between the past AI winters and the situation today.
Investments
Too much money has already been invested by big corporates, big venture capitalists and big governments to simply accept depreciation of those investments. That is just not going to happen. We’re over the threshold. There is and will be tremendous pressure for results. It’s just how it works.
Too much money has been poored into AI to accept depreciation of such investments.
And it is widely expected, that investments in AI will dramatically increase in the coming years, from the 2017 estimated US$ 5 Billion (startups) up to some say US$ 37 Billion or more in 2025. And don’t forget, the main reason for the AI winters was the lack of funding.
So who is funding AI today? Well, practically everybody! That’s another reason why it won’t stop any time soon.
The main reason for the AI winters was a lack of funding.
Of course there are the US tech giants, Google, Amazon, Facebook, Apple who claim to be an AI First company. Not to forget IBM, Microsoft and the likes. Billions of investments in AI. And companies like Intel and Nvidia, for example. But don’t forget Chinese big tech, eg. Alibaba, TenCent, Baidu, Huawei. Alibaba announced to invest US$ 15 Billion in AI and related technologies such as quantum computing. [Alibaba recently developed an AI that is better at reading a specific dataset than humans.]
But also, all vertical industries are investing heavily in AI, eg. the automotive industry, running for the gauntlet of the autonomous vehicle. Financial services invest heavily in applications like algorithmic trading and credit card fraud prevention. Even agrobusiness is experimenting largely with precision agriculture.
But not just companies. The Chinese government announced in the summer of 2017 their very serious ambition of world domination in AI in 2035. Recently plans were announced for a US$ 2 Billion AI research park in the Mentougou area, west of Beijing. China is already implementing a nation-wide system of AI-powered face recognition.
And so on, and so on. The list is endless.
No more one trick pony
All of today’s AI that really works is Narrow AI. Each individual AI solution (artificial neural network) can only be applied to solve one specific problem (dataset). For each new problem you have to build a new solution. Well, things are changing. A lot of research is done in the field of Transfer Learning. And the first significant results are emerging.
Researchers of Google’s Deepmind have developed AlphaZero — successor of AlphaGo Zero, which was the successor of AlphaGo, which defeated the world champion of Go. AlphaZero not only learned itself to play Go, but also the games of Chess and Shogi at superhuman level, in a matter of hours, without any prior knowledge of those games. And without significant changes to the algorithm.
In the scientific community there is broad consensus that this is a major step forward. At the same time, this is still far away from any form of General AI. Eg. the games of Go, Chess and Shogi are very bounded, regulated environments with a straightforward goal (how to win). So rather, it is a form of Narrow AI+, slightly less narrow than Narrow AI. But still, it is a fundamental step forward.
Other factors
Besides capital and transfer learning, there are five other important factors pushing AI development. These are: fast hardware, big data, smart algorithms, continued research and talented people.
Hardware — Even — as some expect — if Moore’s Law will no longer be tenable in the near future, developments in parallel computing (GPUs) and quantum computing look promising enough to assert that hardware — or computing power, for that matter — will not become the limiting factor. Also, new algorithms like One Shot Learning and many other are being experimented with, promising lower requirements of processing power and data quantities.
Data — We will drown in data — we already do. Techniques for data cleansing and feature engineering are improving rapidly. The trick here will be to collect enough — labeled — data for your specific AI algorithm from your specific application area.
Algorithms & Research — When doubting the broad applicability of back propagation, Geoff Hinton said: “To push materially ahead, entirely new methods [=algorithms] will probably have to be invented. (…) The future depends on some graduate student who is deeply suspicious of everything I have said.”
That suspicious graduate student is very likely working hard at this moment — eg. in the field of transfer learning — to setup her AI startup company. I know from my own experience with the scientific AI community in my home country, the Netherlands, that a lot of innovative AI algorithmic work is being done. New successful algorithms will be invented, for sure. And that graduate student may well be Chinese, by the way.
Besides, most of the successful AI algorithms today are based on Supervised Learning, but there are many other families of algorithms that can and will be explored. Think of combinations of algorithms, such as used by Google Deepmind in AlphaZero. And combinations with other technologies, for that matter. Just think of Decentralized AI, eg. combining Blockchain and AI as the foundation of DAOs, Decentralized Autonomous Organizations (eg. Terra0.org) or in decentralized learning with device-centric AI on users’ smartphones and other devices, eg. through Google’s Federated Learning.
Talent — Maybe the biggest roadblock the coming years will be the limited supply of talented people, both technical (AI engineers, data scientists) and non-technical (eg. product managers, change managers, legal & compliance experts, innovative business managers) to implement AI in companies and organizations.
Major limiting factor
Current estimates are that globally there are about 10,000 in-depth technical AI experts, 300,000 AI ‘practitioners and researchers’ and already millions of roles available for people with AI and AI related skills.
It may sound odd, but I do not worry so much about the supply of AI scientists. You do not need millions of scientists globally, you need, let’s say, 10,000 or so very, very good scientists globally to create real breakthroughs in research and development.
If there will be any slowdown of AI, it will be from slow adoption and lack of AI pioneers.
I have more worries about the supply of the practitioners, the (technical) engineers and non-technical AI related experts in companies and organizations, because implementing and working with AI solutions has a totally different dynamic than working with ‘traditional’ IT and digital solutions. And you will need millions of them, globally tens of millions.
If there will be any factor slowing down adoption of AI in organizations (and society for that matter), it will be resistance to change (fear of job loss, need for retraining), slow adoption and lack of technical and non-technical AI pioneers, who will show the way to their colleagues in their respective companies and organizations.
But that will not cause an AI winter, at worst it will cause a prolongued AI spring.
Case Greenfield
. . .
Thank you for reading my post. Here at Medium and at LinkedIn I regularly write about management, work and life with Artificial Intelligence. If you would like to read my future posts then simply join my network here or click ‘Follow’. Also feel free to connect on Twitter.
Follow, Clap or Share if you appreciate this topic.
Published earlier on LinkedIn: https://www.linkedin.com/pulse/why-next-ai-winter-happen-kees-groeneveld/
| Why The Next AI Winter Will Not Happen | 5 | why-the-next-ai-winter-will-not-happen-10bc47cda22c | 2018-03-12 | 2018-03-12 12:53:26 | https://medium.com/s/story/why-the-next-ai-winter-will-not-happen-10bc47cda22c | false | 1,847 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Case Greenfield | Founder & CEO at Aiandus, Home of Analytics Translators | 83961017d4e2 | casegreenfield | 16 | 14 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | cc02b7244ed9 | 2018-06-14 | 2018-06-14 07:01:05 | 2018-06-14 | 2018-06-14 07:08:13 | 0 | false | en | 2018-06-14 | 2018-06-14 07:08:13 | 10 | 10c05e7ba5c8 | 2.109434 | 0 | 0 | 0 | PRODUCTS & SERVICES | 5 | Tech & Telecom news — Jun 14, 2018
PRODUCTS & SERVICES
Video
After just one day, the regulatory ok for AT&T’s acquisition of Time Warner has started the “consolidation frenzy” that many anticipated, with Comcast revealing yesterday an all-cash $65bn bid for 21st C Fox, competing with a $52bn stock offer from Disney. Comcast’s CEO even claimed that this (M&A) is “what we do well” (Story)
The implications of all this for final customers are still unclear, but some predict that the (PayTV) bundle “won’t be disrupted” by internet companies, as access operators will now offer content from their media subsidiaries for free, or within aggressive bundles. This will likely coexist with Netflix and other streaming apps (Story)
Games
Meanwhile, video games could follow the path of TV, and game streaming apps may become a reality. This was a key issue at the E3 gaming event this week, with EA showing a service and with Microsoft announcing they’re building their own streaming app. Technical hurdles remain, but could be on the way to being solved (Story)
In this context, the “elephant in the room” could obviously be Netflix, which has up to now denied its entry into gaming, but who seems conscious of the opportunity, and just announced a new game-related “interactive” TV series for kids about Minecraft, with a “choose your own adventure” approach (Story)
HARDWARE ENABLERS
Networks
The public relations battle around 5G continues the US, with Nokia, Ericsson and Samsung working with all operators. Now T-Mobile just announced a “major milestone in delivering true mobile 5G”, as they completed the first bidirectional 5G data session on a standard-compliant 5G NR system, on the 28GHz band (Story)
ZTE keeps suffering in its recovery from the blows received from the US authorities. After a heavy impact in the stock market this week, now they’ve revealed that they’re looking to secure a $11bn credit line form Chinese banks, to offset the effects of the ban (initially estimated as $2bn revenues, as a minimum) (Story)
SOFTWARE ENABLERS
Artificial Intelligence
Samsung is creating a new corporate venture fund, Q Fund, to invest in AI startups at an early stage, as the company’s vision is that it is now “AI’s turn to eat software”. They even listed some problems that the startups should investigate, including video scene understanding and human-computer interactions (Story)
Google wants to tap potential talent and applications for AI in emerging markets, including Africa, and they’ve just announced that they plan to open a new research center in Ghana, which will work closely with local universities and policy makers to solve problems in healthcare, agriculture and education (Story)
Privacy
Europe’s new ePrivacy rules, complementary to GDPR and due late 2018, are causing concerns as potential inhibitors of digital innovation in the region. The problem is that they could forbid the processing of data communications among users, even in cases when it is an “ancillary” feature linked to another service (Story)
Blockchain
Elph, a new startup, wants to be the “Netscape for crypto”, by building a portal and a unique ID system for users to find and use blockchain-based, “distributed” apps. The value of this is unclear today, as there are still too few of these apps to make it really useful, but the company is betting that this will change over time (Story)
| Tech & Telecom news — Jun 14, 2018 | 0 | tech-telecom-news-jun-14-2018-10c05e7ba5c8 | 2018-06-14 | 2018-06-14 07:08:15 | https://medium.com/s/story/tech-telecom-news-jun-14-2018-10c05e7ba5c8 | false | 559 | The most interesting news in technology and telecoms, every day | null | null | null | Tech / Telecom News | tech-telecom-news | TECHNOLOGY,TELECOM,VIDEO,CLOUD,ARTIFICIAL INTELLIGENCE | winwood66 | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | C Gavilanes | food, football and tech / [email protected] | a1bb7d576c0f | winwood66 | 605 | 92 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-01-27 | 2018-01-27 06:39:49 | 2018-01-27 | 2018-01-27 06:43:09 | 1 | false | my | 2018-01-27 | 2018-01-27 06:51:24 | 3 | 10c12ff2f774 | 1.2 | 0 | 0 | 0 | VR , AR , 360 , platform , Game , Treadmill , 3D , 3Dmapping , Tation Brush စသည္ျဖင့္ ေပါ့ VR game , VR video , VR shop ရွာေဖြသိမွတ္သေလာက္… | 4 | VR AR Usefullink
VR , AR , 360 , platform , Game , Treadmill , 3D , 3Dmapping , Tation Brush စသည္ျဖင့္ ေပါ့ VR game , VR video , VR shop ရွာေဖြသိမွတ္သေလာက္ တင္ျပေပးထားပါတယ္ အဓိက platform ေတြ Support ေတြျဖစ္ပါတယ္
Vive
Rafit
360
oculus
Daydream
Cardboard
Vr Trademill
Stream
Omini
Cardboard
parrot
facebook .Google. Htc. Samsaung . Reality Market ေတြအျပဳိင္က်ဲသလုိ Drone ေတြကလည္းမေခပါ Dji တုိ႔ Parrot တုိ႔ေပါ့ Gamer အတြက္က Playstation V တုိ႔ Stream တုိ႔ကလဲ လုိက္ပါတယ္ facebook က Developer F8 ပဲြတုိ႔ Facebook ေနာက္ ၁၀ နွစ္စီမံကိန္းတုိ႔လဲရွိပါေသးတယ္ BIGATE ကေတာ့ ဓာတ္ခဲြခန္းကေနထုတ္ Pure Meat အစာထုတ္လုပ္တဲ့ Project မွာပါ၀င္မယ္တဲ့ ဇူကာဘတ္ကေတာ့ VR နဲ႔ေပါင္းတဲ့ AI အတု ထုတ္ပါမယ္တဲ့ Blockchain ေၾကာင့္ finance နယ္ပယ္ေျပာင္းေတာ့မယ္ 3Dprojection ကလဲ TV ေတြ ၀ားစားေတာ့မယ္ စတီဖင္ေဟာ့ကင္းရဲ႕ Quantum နည္းပညာကလဲ ျမင္တဲ့အတုိင္းပါပဲ alibaba နဲ႔ amazon ကလည္း VR နဲ႔ AR ေပါင္းတဲ့ shop Project စေနပါၿပီ AI ဆုိတဲ့ အသိညဏ္တုကလဲ python အေျခခံတဲ့ Bot နဲ႔ လာေနၿပီ google ကလဲ fi နဲ႔ Internet ေတြအကုန္ယူေတာ့မယ္ online Education ကလဲ Udemy တုိ႔က လုပ္ေနပီ Github ကလဲ ကုတ္နဲ႔ World ကုိထိမ္းမယ္
https://www.vive.com/
……………………………
https://wevr.com/
……...........................
http://www.valvesoftware.com/
……………………………
https://developer.viveport.com
……………………………
https://www.oculus.com/
……………………………
https://developer.oculus.com/
……………………………
https://samsungvr.com/
……………………………
https://www.tiltbrush.com
……………………………
https://vr.google.com/daydream
……………………………
https://www.omnivirt.com
……………………………
https://viromedia.com/
……………………………
http://welcome.vrtify.com
……………………………
http://www.vrstudios.com/
……………………………
http://www.virtuix.com/
……………………………
https://www.eonreality.com/
……………………………
http://www.airpano.com/
……………………………
https://facebook360.fb.com/360-photos
……………………………
https://360.io
……………..................
https://veer.tv
……………………………
http://www.instavr.co/
……………………………
https://pixvana.com/
.................................
http://www.obsessvr.com/
……………………………
https://inceptionvr.com/
……………………………
https://www.playstation.com/en-us/explore/playstation-vr/
……………………………
https://www.virzoom.com/
……………………………
https://www.smart2vr.com/
……………………………
https://with.in/
………........................
http://vrappmaker.com/
……………………………
http://vr-design-app.com/
……………………………
https://www.virtalis.com
……………………………
http://www.worldviz.com
……………………………
https://www.parrot.com/global/business-solutions/parrot-bebop-pro-3d-modeling#bebop-pro-3d-modeling-all-in-one-3d-modeling-solution
……………………………
https://www.parrot.com/global/business-solutions/ebee-sq-pix4dag-edition#parrot-ebee-sq-
……………………………
http://www.wizdish.com/
……………………………
http://arvrlist.com/
အေရးပါတဲ့ WEBLINK ေတြျဖစ္လုိ႔ share ၿပီး Save ထားလုိက္ဗ်ာျပန္ရွာရင္လြယ္တယ္ ဒီ weblink ေတြေလ့လာပီးရင္ေတာ့ 😷 က်န္တာကုိယ့္အပုိင္းပဲ https://youtu.be/kPMHcanq0xM လက္ထဲကကုိင္ထားတဲ့ဖုန္းကုိ ျပစ္ရေတာ့မယ္
Thiha ( Nay Say)
Via Group. ( VR မဟုတ္ပါ 😉 )
https://m.facebook.com/story.php?story_fbid=1871873559495963&id=100000200113085
| VR AR Usefullink | 0 | vr-ar-usefullink-10c12ff2f774 | 2018-05-09 | 2018-05-09 13:30:57 | https://medium.com/s/story/vr-ar-usefullink-10c12ff2f774 | false | 265 | null | null | null | null | null | null | null | null | null | Virtual Reality | virtual-reality | Virtual Reality | 30,193 | Thi Ha Paing | I am Via Myanmar Group founder . I inrest Myanmar Country Devlopment and IT communition . My Future Goal In IT Business Geek | 7c10a600981d | viagroup | 80 | 59 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-09-21 | 2017-09-21 04:49:58 | 2017-09-21 | 2017-09-21 04:48:24 | 7 | false | en | 2017-09-21 | 2017-09-21 11:46:03 | 11 | 10c22f3e1231 | 4.223585 | 1 | 0 | 0 | Finding Optimism from Nature. And Elon. | 1 | Who’s Afraid of the Big Bad AI?
Finding Optimism from Nature. And Elon.
We’ve Got This
I believe AI will make things better. I believe data science is living up to its potential by driving AI forward. I even base my strategic business decisions on a future with AI.
Is this optimism statistically justifiable? Or am I just a cylon agent? Let’s take a look…
Is the start of AI the end of humanity? (AI referring to truly-intelligent-AI, not the machine learning we already have.)
Elon Musk, Stephen Hawking, Elon Musk again and Elon Musk again again have all warned us of the danger — AI will rip us to shreds (both figuratively and literally).
These names cannot be ignored. The dangers cannot be ignored.
AI is scary.
Terrifying Potential Dystopian Futures
Maybe everyone is worried for nothing.
Like flying cars, Betamax and Vegemite, AI might not be widely accepted. There must be a major benefit (perceived or real) to actually get AI off the ground.
Let’s take a step back and see if we even want to bother with developing self-evolving software. Let’s see if it’s inevitable.
Taking evolution evolution as precedent for AI evolution, we see diverse ancestors evolve to a predictable endpoint when presented with similar environmental opportunities — that’s why we have convergent evolution.
For example: birds, bats and pterosaurs all evolved flappy bits because flying is awesome. They did this completely independent of each other.
All Evolved From Different Lineages. Only One Evolved Honey Garlic Flavor (so far).
We have a society striving toward automation. From self-driving cars to conversational algorithms to just doing our jobs — we are shifting more and more work to machines. Converging on AI.
This is already occurring at the highest levels. Businesses, even massive established ones, are being disrupted with automation. Walmart grew an unassailable market share in retail — until Amazon did everything better using tech and started to threaten this status quo.
AI creates advantages. Better AI creates better advantages.
Technology (both hardware and software) is removing the limitations on machine intelligence. Commercial algorithms no longer require an enterprise budget and IT department. We can run open source AI on a laptop and get reasonable results.
AI technologies are more and more advanced while becoming more and more accessible. Everybody seems in on the AI future. In fact, if you are reading this, you are probably an AI enthusiast — and enabler.
AI is Money in the Bank. Or Blockchain if You Swing that Way.
The desire for automation to offload our crappiest tasks and the ability to easily implement tech are creating the conditions for AI to be a major priority for everyone. (And flying with AI is awesome too.)
What’s better than telling a computer what to do? Not telling a computer what to do and it figures it out anyway. The conditions are ripe.
AI is going to happen.
Comparing to evolution has to be taken with a grain of salt, considering evolution moves glacially (sometimes literally), while tech is rapidly exploding like nothing ever seen before ever.
Looking at it, old school biological evolution does have lulls of stability — but also intense periods of very rapid action.
Scientific Pronunciation for Red Area is a ‘WHA-BAM!’ of Evolutionary Activity
A breakthrough may disrupt the entire ecosystem, followed by a period of stability where everything adjusts to a new order. Actually, that sounds like the pattern of developing technology as well.
New tech discoveries are followed by the chance to make sense of what just happened — like a great AI product being open sourced or finally figuring out blockchain architectures(my current nemesis).
Unfortunately, tech makes the periods of stability short. Very short.
So — we cannot expect to learn a new AI paradigm from scratch before the next disruption hits.
So — we need to be prepared beforehand for major AI disruption.
So — we need a body of experts and enthusiasts keeping up with our contemporary AI starting right now.
Ends Up, You’re my Only Hope
And there is my hope. Because people like you are interested and stay informed — yes you, dear reader.
It’s okay to be optimistic during these dangerous times with AI running full tilt straight for us. We just need to keep our eyes open by staying informed.
Elon Musk — who thinks building AI is summoning the demon — sponsors OpenAI. He sees the danger, knows it’s inevitable, and wants to be ready. Ready by supporting a responsible, open development direction.
You don’t need to be a billionaire with a rocket to Mars to make a difference. As mentioned earlier, never has technology been so accessible — including the tools building toward a future full of AI. Our expertise will be what steers our success and protects us from harm.
Yes, there are dangers. Yes, there are consequences we must live with. This just means AI is just another test for us humans. The fact we made it this far gives me hope.
As advances in AI revolutionize our everyday lives, those armed with the knowledge to navigate or create AI will keep humanity relevant in an automated world.
Your interest today will save us tomorrow. Share your knowledge. Share your interest.
Now get out there and build something. You’re saving the world.
I Trust We’ll get AI Mostly Kinda Right. Like Everything Else Humans Do.
| Who’s Afraid of the Big Bad AI? | 4 | whos-afraid-of-the-big-bad-ai-10c22f3e1231 | 2018-06-01 | 2018-06-01 23:52:29 | https://medium.com/s/story/whos-afraid-of-the-big-bad-ai-10c22f3e1231 | false | 841 | null | null | null | null | null | null | null | null | null | Artificial Intelligence | artificial-intelligence | Artificial Intelligence | 66,154 | Charles Bird | Data zealot. Lover of metrics. AI Believer. COO at Paper. | 8b10e36cb620 | charles_10245 | 289 | 11 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 5096a92ca008 | 2017-11-13 | 2017-11-13 17:01:48 | 2017-11-13 | 2017-11-13 17:03:36 | 0 | false | en | 2017-11-13 | 2017-11-13 17:03:36 | 3 | 10c478cb4fbc | 4.524528 | 31 | 4 | 0 | Last week, Eric Schmidt, Chairman of Alphabet, predicted that China will “rapidly overtake the U.S. in artificial intelligence… in as… | 5 | AI, Quantum & Entrepreneurship in China
Last week, Eric Schmidt, Chairman of Alphabet, predicted that China will “rapidly overtake the U.S. in artificial intelligence… in as little as five years.”
Last month, China announced plans to open a $10 billion quantum computing research center in 2020.
Bottom line, China is aggressively investing in exponential technologies, pursuing a bold goal of becoming the global AI superpower by 2030.
Based on what I’ve observed from China’s entrepreneurial scene, I believe they have a real shot of hitting that goal.
As I described in a previous tech blog, I recently traveled to China with a group of my Abundance 360 members, where I was hosted by my friend Kai Fu Lee, the founder of Sinnovation Ventures.
On one of our first nights, Kai Fu invited us to a special dinner at DaDong Roast, which specializes in Pei King duck, where we shared an 18-course meal.
The meal was amazing, and Kai Fu’s dinner conversation provided us priceless insights on Chinese entrepreneurs.
Three topics opened my eyes. Here’s the wisdom I’d like to share with you.
1. The Entrepreneurial Culture in China
Chinese entrepreneurship has exploded onto the scene and changed significantly over the past 10 years.
IMHO, one significant way that Chinese entrepreneurs vary from their American counterparts is in work ethic. The mantra I found in the startups I visited in Beijing and Shanghai was “9–9–6” — meaning the employees only needed to work from 9 am to 9 pm, 6 days a week.
Another concept Kai-Fu shared over dinner was the almost ‘dictatorial’ leadership of the Founder/CEO. In China, it’s not uncommon for the Founder/CEO to own the majority of the company, or at least 30–40 percent. It’s also the case that what the CEO says is gospel. Period, no debate. There is no minority or dissenting opinion. When the CEO says “March,” the company asks, “which way?”
When Kai Fu started Sinnovation (his $1B+ venture fund), there were few active angel investors. Today, China has a rich ecosystem of angel, venture capital and government-funded innovation parks.
As venture capital in China has evolved, so too has the mindset of the entrepreneur.
Kai Fu recalled an early investment he made in which, after an unfortunate streak, the entrepreneur came to him, almost in tears, apologizing for losing his money and promising he would earn it back for him in another way. Kai Fu comforted the entrepreneur and said there was no such need.
Only a few years later, the situation was vastly different. An entrepreneur who was going through a similar unfortunate streak came to Kai Fu and told him he only had $2 million left of his initial $12 million investment. He informed him he saw no value in returning the money, and instead was going to take the last $2 million and use it as a final push to see if the company could succeed. He then promised Kai Fu if he failed, he would remember what Kai Fu did for him, and as such, possibly give Sinnovation an opportunity to invest in him with his next company.
2. Chinese Companies Are No Longer Just “Copycats”
During dinner, Kai Fu lamented that 10 years ago, it would be fair to call Chinese companies copycats of American companies. Five years ago, the claim would be controversial. Today, however, Kai Fu is clear that claim is entirely false.
While smart Chinese startups will still look at what American companies are doing, and build on trends, today it’s now becoming a wise business practice for American tech giants to analyze Chinese companies. If you look at many new features of Facebook’s Messenger, it seems to very closely mirror TenCent’s WeChat.
Interestingly, tight government controls in China have actually spurred innovation. Take TV, for example, a highly regulated industry. Because of this regulation, most entertainment in China is consumed on the Internet or by phone. Game shows, reality shows, and more will be entirely centered online.
Kai Fu told us about one of his investments in a company that helps create Chinese singing sensations. They take girls in from a young age, school them and, regardless of talent, help build their presence and brand as singers. Once ready, these singers are pushed across all the available platforms, and superstars are born. The company recognizes their role in this superstar status, though, which is why it takes a 50 percent cut of all earnings.
This company is just one example of how Chinese entrepreneurs take advantage of China’s unique position, market and culture.
3. China’s Artificial Intelligence Play
Kai Fu wrapped up his talk with a brief introduction into the expansive AI industry in China. I previously discussed Face++, a Sinnovation investment, which is creating radically efficient facial recognition technology. Face++ is light years ahead of anyone else globally at recognition in live videos. However, Face++ is just one of the incredible advances in AI coming out of China.
Baidu, one of China’s most valuable tech companies, started out as just a search company. However, they now run one of the country’s leading self-driving car programs.
Baidu’s goal is to create a software suite atop existing hardware that will control all self-driving aspects of a vehicle, but also be able to provide additional services such as HD mapping and more.
Another interesting application came from another of Sinnovation’s investments, Smart Finance Group (SFG). Given most payments are mobile (through WeChat or Alipay), only ~20 percent of the population in China have a credit history. This makes it very difficult for individuals in China to acquire a loan.
SFG’s mobile application takes in user data (as much as the user allows), and based on the information provided, uses an AI agent to create a financial profile with the power to offer an instant loan. This loan can be deposited directly into their WeChat or Alipay account, and is typically approved in minutes. Unlike American loan companies, they avoid default and long-term debt by only providing a one-month loan with 10% interest. Borrow $200, and you pay back $220 by the following month.
Artificial intelligence is exploding in China, and Kai Fu believes it will touch every single industry.
The only constant is change, and the rate of change is constantly increasing.
In the next 10 years, we’ll see tremendous changes on the geopolitical front and the global entrepreneurial scene caused by technological empowerment.
China is an entrepreneurial hotbed that cannot be ignored. I’m monitoring it closely. Are you?
Join Me
1. A360 Executive Mastermind: This is the sort of conversation I explore at my Executive Mastermind group called Abundance 360. The program is highly selective, for 360 abundance and exponentially minded CEOs (running $10M to $10B companies). If you’d like to be considered, apply here.
Share this with your friends, especially if they are interested in any of the areas outlined above.
2. Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital.
Abundance-Digital is my ‘onramp’ for exponential entrepreneurs — those who want to get involved and play at a higher level. Click here to learn more.
| AI, Quantum & Entrepreneurship in China | 112 | ai-quantum-entrepreneurship-in-china-10c478cb4fbc | 2018-02-12 | 2018-02-12 09:48:35 | https://medium.com/s/story/ai-quantum-entrepreneurship-in-china-10c478cb4fbc | false | 1,199 | Peter Diamandis’ weekly insights on entrepreneurship and the exponential technologies creating a world of abundance. | null | PeterHDiamandis | null | ABUNDANCE INSIGHTS | abundance-insights | TECHNOLOGY,ENTREPRENEURSHIP,LEADERSHIP | PeterDiamandis | China | china | China | 27,999 | Peter Diamandis | Passionate about innovation and creating a world of Abundance. Companies: XPRIZE, Singularity University, Planetary Resources, Human Longevity Inc. | deedc76c1c2 | PeterDiamandis | 52,647 | 422 | 20,181,104 | null | null | null | null | null | null |
|
0 | def compute_awaken_times(dataframe):
df = dataframe.reset_index()
# Compute time difference between rows
r = df.when.diff()
# Transform to booleans,
# where True means more than 15' of separation between points.
r = (r > pd.Timedelta('15 minutes'))
# Coerce to int and sum (i.e. count the times
# we saw data points separated by more than 15')
r = r.astype(int).cumsum()
# Pick last value (final sum) and add one
# to include the initial period.
r = r.iloc[-1] + 1
return r
| 1 | 32881626c9c9 | 2018-08-23 | 2018-08-23 22:26:15 | 2018-09-13 | 2018-09-13 08:07:09 | 4 | false | en | 2018-09-16 | 2018-09-16 10:10:21 | 10 | 10c4b6cef72 | 10.035849 | 6 | 1 | 0 | Having fun coding my first Android app, with a tight deadline. | 5 |
Peeking into the Android development world and Machine Learning, thanks to parenthood
Summary
In this post I'll share my first Android development experience, describing some pragmatic shortcuts I made along the way. In terms of machine learning, to be honest, my project run out of budget (read: personal time) before I could start exploring correlations and algorithms, so you won’t find interesting insights here. Nevertheless, in the end, I managed to set up a data processing pipeline with Jupyter Notebooks and I confirmed what more experienced people say: the investment needed in data cleaning and preparation should not be underestimated!.
Warning: the goal here is to share the experience and my thought process. You won’t find a code-complete Github repo. Feel free to jump to the last section if you are interested only in the Jupyter pipeline.
Background history
A couple of months before our first child was born I was considering buying a baby monitor with a very specific set of requirements:
Must Haves:
Sound level monitoring. The level meter should be quite visible, i.e. something that I can put on my desk and notice even when I’m wearing headphones and not looking directly at the parent unit.
Sound monitoring adjustable depending on the noise level in the room (for example, in case there is background white noise).
Let me show you the finished product to explain the kind of experience that I was looking for:
Sound level monitoring as seen from the client unit
Wish to have:
Camera streaming (no need for infrared).
Possibility to play recorded sounds (white noise, favourites lullabies, etc.).
Main features can be triggered remotely.
Something to gather statistics about wake up and crying events. Ideally, with frequency spectrum analysis to detect crying and record only those time frames.
Most of the dedicated baby monitors I reviewed were fundamentally the same and none of them checked all the marks. Therefore, I continued my search in the mobile application space, thinking that the phone hardware offers a good platform for this type of application. While it’s true that a phone can’t offer night vision, that wasn’t a concern for me (and if you really, really need it there are hacks to get it, for example, with some phones you can remove the camera filters and add IR LEDs).
Unfortunately, I was disappointed. I’m not blaming the app developers. The problem was me: I had a very specific set of requirements, hard to please.
The Android dive
Therefore, I decided to dive into the wonderful work of mobile app development. I chose Android because I’m familiar with Java and I have some old (but still quite respectable) Android phones. Because my time investment was quite limited, I simplified several things:
I decided to code only the most important requirement: sound monitoring. If this was a success, I would continue with other features.
I really wanted to try Kotlin, but stayed with Java to minimise the learning bottleneck. One step at a time.
I chose to skip anything related to app publishing because I would build something just for me, not having to worry about things like production vs. debug builds, telemetry, Google App Store rules, etc.
I targeted a single device: Moto G, Android 6.0. I got this phone several years ago, and it’s battery life and performance still amaze me.
Disclaimer: I’ll share some code in this post just to give you an idea of what I’ve done. Don’t consider it production ready! I took conscious shortcuts because I needed to build this just for me and in record time. There are things I’d done differently if I had to code this professionally.
Story 0: Architecture and foundations
I chose to build the application as a web server, thus using HTTP to display information and to receive commands. In this way, the Android phone would become the baby unit, and any other device with browsing capabilities connected to my home Wi-Fi could be a potential parent unit. The screenshot below shows the finished product:
Final UI as seen from clients
The first step before coding was to check a curated list of libraries in Github to learn about recommended tools, frameworks, and best practices:
In Android, JAR weight and compatibility are quite important, compared to what I was used to in the back-end/service development works.
I chose Timber for logging.
Vanilla code for UI is hard, so there are excellent frameworks to help you with UI bindings. However, my app had no plans for UI on the device (just the minimum live notification required for background services, and maybe a button or two to start/stop the service). Therefore, I decided to go with vanilla UI code.
Telemetry and monitoring: my original plan excluded it, but writing a Timber adapter to dispatch errors to Loggly was really simple, so I did it in order to have any errors tracked online (with email alerts, too).
RxJava: highly recommended, especially to handle tasks between the UI and the non-UI threads. I was familiar with Project Reactor (which I highly recommend), so I was quite tempted to use it. However, I wasn’t sure if my app complexity would justify it, so I started with vanilla code and decided to revisit this decision from time to time.
Story 1: Web server selection
This should be simple because there are many, many web frameworks there, right?... Wrong…. For sound monitoring, I needed to stream data from the server to the client, but many frameworks supported only the simple synchronous request/response model. Initially, Websockets looked like a good bet, so this narrowed the research scope. However, when working in this, I realised that an older technology was simpler and a better fit for my need: Server Side Events.
Story 1b: SSE and a custom HTTP server code to the rescue.
SSE is natively supported in the browsers I use. Unfortunately, I could not find an Android web framework either supporting SSE or exposing low-level abstractions that I could easily plug in to achieve what I needed. Believe it or not, the simpler thing was to implement my own HTTP server on top of sockets. For this project, where I knew this was a very small and short-lived personal app, it made a lot of sense to have a simple walking skeleton I could elaborate on as fast as possible.
Story 2: Sound level monitoring and remote commands
The streaming of events is done via a publisher/subscriber model: one thread handles reading PCM data from AudioRecorder and publishing the maximum value seen in the read buffer, while another thread listens for these values and pushes the data to the TCP socket.
On the UI side, the static HTML has only one HTML5 meter tag using all the available screen space. The background is black and it will be progressively filled by green as the sound increases. It will switch to red above a certain threshold. There is also a slider to adjust the sensitivity.
The HTML page is quite small:
Inline Javascript is used to subscribe to the SSE event stream and to update a local variable every time an event is received.
Another function is scheduled to run every 200ms to pick the latest value and do a smooth transition on the meter UI element with jQuery. The transitions are needed to avoid sudden jumps and to simulate the feeling of real-time streaming.
NoSleep.js is used to prevent the screen from entering sleep mode.
Sound meter source code
Server code streaming sound level to clients using SSE
Story 3: Camera streaming
The noise monitoring worked well, so I ventured into camera streaming. Naively, I imagined that recent advances in HTML5 video elements combined with the powerful Camera API in Android would allow me to stream video in a couple of lines of code. Not true. I won’t go into the detail of all the things that I explored and I’ll jump directly to what I ended up using. Before you keep reading, be warned: my goal was to minimise coding time, trading off other variables as needed: I do not have much time for side projects and I had a very hard deadline! In my case, because this was meant to run in a home Wi-Fi with plenty of unused bandwidth, I had an advantage I could use. Again, for a second time, a very old and simple technology came to the rescue….
Story 3b: Camera streaming with multipart/x-mixed-replace.
It turned out to be really simple to just capture HD camera previews as JPEG images, and push them one by one as fast as possible to the HTML browser using “multipart/x-mixed-replace” to replace each still. Naturally, this means the bandwidth usage is high and the frame rate is low, but still quite usable. Also, if you consider that the code doing the camera preview is ~100 lines, and the code pushing those bytes through the TCP channel (also handling the HTML protocol details manually) is just ~60 lines… the cost/benefit ratio gets really interesting. I like it when I can achieve goals with super cheap and simple alternatives, which I can switch later with more advanced versions later without re-architecting the whole project.
Capturing camera frames
Sending frames to the web browser
Story 4: Playing recorded sounds, remotely triggered.
The next feature required only adding code to list directory contents, find MP3 files, and produce an HTML Bootstrap page with basic playback controls (volume up/down, start, stop).
Story 5: Sound recording and cry patterns data analysis.
Feeling confident, I decided to take the most ambitious goal, that would also give me the pleasure to grow in machine learning and data analysis. Because the mic was already in use if any parent is using the sound monitoring feature, I could not start the recording. I solved this by upgrading the noise monitoring level code: instead of processing raw PCM data via AudioRecord, it records the audio stream using MediaRecorder. I’ve selected AWR-WB format because it offers a good compromise between compression (important, as I was planning to keep several months of data) and keeping only the sound spectrum I was interested in (e.g. the typical baby cry spectrum, not interested in higher or lower bands). The noise level was reported to any SSE listener using recorder.getMaxAmplitude(). If background recording was enabled, the stream was saved to a rolling 3GPP file. If not, it's just dumped to /dev/null.
Story 5b: Unpleasant surprises… fail?
Things worked fine initially, but I started experiencing troubles: the background noise playback was suddenly stopped when the recording was enabled. I tried using different playback applications to play the sound while using mine just to record things, but this also failed. The only one that was a little better was Google Play music, which managed to detect that the playback stopped and resumed it after a couple of seconds, but only sometimes. Some research revealed other people with exactly the same phone (Moto G 2014) complaining about background music stopping under memory stress, presumably due to aggressive process scheduling or prioritisation. I double checked that my app was doing the “right” things to avoid being killed (e.g. persistent notification, coded as a service, etc.), but the problem continued.
Story 5c: A simpler version
I rolled back the change and downloaded an application to record just the sound level in a CSV file. This eliminated the possibility of doing cry detection via spectrum analysis but, considering the sound spike when our baby started to cry was quite high, it was easy to identify the relevant data points. Unfortunately, the that app later was updated adding fancy features, and despite also being a persistent service, is was aggressively killed (but not my app, which was smaller). Because it was open source, I considered building the previous version myself, but eventually, I just coded the CSV recording feature in my app, which only required adding a new subscriber to the flow of published PCM max values.
Story 6: Data analysis, finally!
Then, I built a data processing pipeline using Jupyter notebooks. Data was pulled via FTP (server provided by another app), processed, and finally displayed day by day, to look for patterns:
I’m too shy to share the naive code I desperately wrote in the last days of my deadline… but I’ll share some decisions, which I’m not completely sure were right due to my lack of experience in sound processing, but still helped me to progress and learn:
The distance between the Android phone doing the recording and the crib was not always the same. Also, some nights we played pink noise. This meant the data needed to be normalised. Solution: I’ve used pandas.DataFrame.quantile with q=0.995. After some experimentation, this threshold proved to filter noise with a good accuracy, keeping only interesting data points.
To compute times awakened, I needed a way to discriminate when two data points belonged to the same crying period. I could not partition the time in fixed windows because two data points that were very close, but in different windows, would be incorrectly processed as two crying periods. I’ve solved it with this code:
The End
My original plan was to use this to explore possible correlations with feeding times, food that was eaten that day, naps quantity and duration, etc., as I had all that data tracked in another Android app supporting CSV export. All this, in order to answer a very important question:
What can we try to reduce the night awakenings, and get more sleep?…
Did I find an answer?….
…No. A baby, who is constantly learning, adapting, changing, is not a stable test subject. Wonder weeks, teething, a cold, getting used to solids, etc. are examples of events (some of them, completely unpredictable) that will disturb your baby rest, and therefore mess with your data. To be honest, I knew this beforehand, but I still wanted to have a reason (read: excuse) to learn something about Android, machine learning, data analysis and Python. Unfortunately, once the data processing was completed and I got nice graphs working, my availability to work on side projects was greatly reduced and I stopped my investment in machine learning, just when the real fun was about to start. Nevertheless, thanks to this initiative I bought plenty of books (more than ten!). If you want a recommendation, Hands-On Machine Learning with Scikit-Learn and TensorFlow was by far the best one.
I'm sure the future will bring more opportunities to grow in Machine Learning.
| Peeking into the Android development world and Machine Learning, thanks to parenthood | 126 | peeking-into-the-android-development-world-and-machine-learning-thanks-to-parenthood-10c4b6cef72 | 2018-09-16 | 2018-09-16 10:10:21 | https://medium.com/s/story/peeking-into-the-android-development-world-and-machine-learning-thanks-to-parenthood-10c4b6cef72 | false | 2,474 | Data Driven Investor (DDI) brings you various news and op-ed pieces in the areas of technologies, finance, and society. We are dedicated to relentlessly covering tech topics, their anomalies and controversies, and reviewing all things fascinating and worth knowing. | null | datadriveninvestor | null | Data Driven Investor | datadriveninvestor | CRYPTOCURRENCY,ARTIFICIAL INTELLIGENCE,BLOCKCHAIN,FINANCE AND BANKING,TECHNOLOGY | dd_invest | Android | android | Android | 56,800 | Sebastian Esponda | null | 5dad0beb9992 | .seb | 3 | 2 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | 5d40db1e56f3 | 2017-12-29 | 2017-12-29 06:16:00 | 2017-12-29 | 2017-12-29 08:48:51 | 7 | false | en | 2017-12-29 | 2017-12-29 09:18:20 | 2 | 10c67fc06197 | 2.921698 | 8 | 0 | 0 | Want to work with a fast growing enterprise SaaS company and help grow the next tech unicorn? Keep reading to join Morph.ai growth team! | 5 |
We’re looking for Growth Heroes 🚀
Want to work with a fast growing enterprise SaaS company and help grow the next tech unicorn? Keep reading to join Morph.ai growth team!
About You
You love talking. Talking to people will be the essence of what you do here. On calls, through mail, in-person meeting, events, articles, advertisements whatever you do, you’ll be in conversation. An indeed that’s what Morph.ai is all about, conversations through chatbots!
You have been talking all your life. You have experience with public speaking at school or college. You have been in a role that involves extensive customer interaction in your previous jobs, if any. You know the art of impressing others, by your personality and by your wit. Experience in digital marketing is a big plus to have. You’ll be closely working the rest of the business and product team, so being a team player is a must.
You are a hustler. You are faster than Flash when it comes to getting things done. You don’t mind moving around for events and meetings across cities and countries. You are tech savvy and smart to answer impromptu questions thrown at you in a meeting. Being a deep-tech company you need to thoroughly understand what you are selling. You can get into the shoes of your users and ease out their problems and help them when they need.
And most importantly You love a challenge. We are a small team doing big things and working with large clients. So challenges are a routine excitement at Morph.ai.
About Morph.ai
Founded in 2016, Morph.ai is a chatbot based marketing platform, which helps businesses have personalised conversations with their potential customers to convert them.
Using Morph.ai, businesses can have one-on-one conversations with their audience to increase awareness, improve lead quality and drive more sales. We have set out to revolutionize the marketing and sales process by leveraging messaging as the most active channel of engagement. We strongly believe that chat is the perfect channel of interaction between businesses and customers and it would soon emerge as the strongest medium for marketing and selling products and services.
Today, Morph.ai is one of the fastest growing AI companies of the country, working with brands like Manchester City F.C., Yes Bank, Estée Lauder and many more.
Morph.ai has received a number of recognitions in a short time
Things You’ll Do
Morph.ai is a startup, so you might need to wear many hats when required. That said, here are some things you’ll probably get your hands on:
Help sell and market Morph.ai’s awesome chatbot marketing platform.
Manage interactions with Enterprise accounts to be their single point of contact.
Analyse the market opportunities and build business strategies for partnerships and expansion.
Help customers via support to ensure they have the best experience possible.
Help product team test and ensure that everything is working as desired and the customers love what they see.
Apply
Ready to apply? Just fill this form and we will get in touch with you ASAP.
| We’re looking for Growth Heroes 🚀 | 224 | were-looking-for-a-growth-heroes-10c67fc06197 | 2018-04-09 | 2018-04-09 15:22:27 | https://medium.com/s/story/were-looking-for-a-growth-heroes-10c67fc06197 | false | 496 | Morphs your services into conversations and lets your users access those from anywhere. http://morph.ai | null | morphai | null | Morph.ai | null | morph-ai | CONVCOMM,CONVERSATIONAL COMMERCE,MESSAGING | morph_ai | Chatbots | chatbots | Chatbots | 15,820 | Pratik Jain | null | 302ef35a669f | tunetopj | 160 | 134 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-06-27 | 2018-06-27 13:54:29 | 2018-06-27 | 2018-06-27 14:51:36 | 5 | false | en | 2018-07-13 | 2018-07-13 16:36:46 | 14 | 10c708c2317c | 3.354088 | 2 | 0 | 1 | Yes, and here we have it… the beginning of what Ashtanga yogis call the gatekeeper poses… (the hard ones that inevitably lead to lots of… | 4 | 5 Minutes of Machine Learning: Introduction to TensorFlow [Day 5]
Yes, and here we have it… the beginning of what Ashtanga yogis call the gatekeeper poses… (the hard ones that inevitably lead to lots of falling, but also lots of rewarding challenges when overcome)…
If you caught me here at the door of the gatekeeper things and are already scared shitless, please go back to my previous post in the 5 Minutes of Machine Learning series, or go to the first one and start there! It makes things slightly less scary.
So… TensorFlow!
Ooooff yeah I just did that. So tacky.
Let’s jump right in.
What is Tensorflow?
TensorFlow is a graph-based computational framework that can encode… anything you want it to.
TensorFlow (as most developers will interact with it) consists of low-level APIs that build models using mathematical operations. There are also high level APIs (such as functions like tf.estimator) for predefined architectures that can be used within TensorFlow (see my journey with this object classification and recognition model as an example of dealing with the high level APIs pre-crafted to complete certain known tasks).
There are two programming components to TensorFlow:
Graph protocol buffer.
The runtime to execute the graph.
What else is important to know about getting started programming with TensorFlow? Well, there is a hard way and an easy way (well okay, its not that binary, but you get it).
In TensorFlow, there are custom estimators, and there are pre-made estimators. What is the difference?
Custom vs. Pre-Made Estimators in TensorFlow
Building and using a custom estimator in TensorFlow implies you write the model function yourself. In a pre-made model, someone already wrote the model function for you.
The sketch below (done by yours truly) seemed to clarify the difference between the two.
One of the thoughts that hit me pretty quickly about the difference between the two is the fact that even with custom estimators, you are still using pre-made estimators as building blocks in order to do two things:
Calculate a unique metric that has not already been built somewhere by someone.
Mess with hidden layers of “tensors”, connecting or adding them in unique ways.
I just used a strange word that is not so strange (because TensorFlow has it in the name… ) but let’s get a clearer idea of what a tensor really is now that we are talking specifically about TensorFlow.
What is a Tensor?
A tensor is a primary data structure.
It can be N-dimensional, with N being a very, very large number. Tensors can be scalars, vectors, and/ or matrices (click on them to read more if you are unfamiliar with the algebra-ish terms). All three of these can hold integers, floating points, or string values.
So let’s look at these terms in context of TensorFlow with examples:
Also keep in mind that matrices can go above a 2-D array.
Last but not least, what can be done to tensors? In my next post, I discuss how to create and manipulate tensors, but for now lets focus on basic CRUD operations (create, read, update, delete).
A graph’s nodes are equivalent to operations.
A graph’s edge = tensors.
Tensors flow through the graph, and are manipulated at each node by an operation.
This visualization is of an actual TensorFlow graph as it would appear on TensorBoard, TensorFlow’s analytics/ insights dashboard if you will that helps developers view the performance of their model training and inference processes:
See full article here
I like to imagine the process of training as a ripple of tensors washing over a grid of nodes (like waves would wash over a beach). After the first wave, those nodes have been changed by whatever mathematical operations the tensors brought with them. Like multiplying to matrices together, when a tensor’s operation is performed on the node, the node has changed. These waves can occur in multiples (convolutions) and not necessarily a single isolated “wave-event”.
My over-simplified visualization of a TensorFlow graph
Now that I have illustrated TensorFlow as a beach of types, stay tuned for the next mind blowing post on how to create and manipulate tensors!
| 5 Minutes of Machine Learning: Introduction to TensorFlow [Day 5] | 3 | 5-minutes-of-machine-learning-introduction-to-tensorflow-10c708c2317c | 2018-07-13 | 2018-07-13 16:36:47 | https://medium.com/s/story/5-minutes-of-machine-learning-introduction-to-tensorflow-10c708c2317c | false | 668 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Amina Al Sherif | null | de46c1e173d3 | amina.alsherif | 34 | 3 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 462efa6433f4 | 2018-09-03 | 2018-09-03 20:01:00 | 2018-09-03 | 2018-09-03 20:14:10 | 2 | false | es | 2018-09-18 | 2018-09-18 15:01:53 | 4 | 10c96f3275dd | 3.53805 | 5 | 0 | 0 | Por Agustín Montero, Data Scientist en ALMUNDO | 5 | En busca de la recomendación perfecta: Almundo Recommendation Challenge 2018
Por Agustín Montero, Data Scientist en ALMUNDO
“Un viaje típico en Almundo muchas veces comienza con una buena recomendación” Con esta frase abríamos el Almundo Recommendation Challenge 2018, un desafío que organizamos desde Almundo para la comunidad técnica de Argentina.
¿Por qué una buena recomendación?
Hoy en día, los sistemas de recomendación están muy presentes en la vida de las personas. Desde Netflix, sugiriendo que veamos alguna serie o película en particular; Spotify, mostrándonos canciones o bandas que seguro nos interesan escuchar; o Amazon, proponiendo que compremos determinado libro dando por sentado que será perfecto para nosotros.
A través de la recomendación online (en tiempo real) u offline (facilitando la construcción de campañas dirigidas a segmentos particulares, por ejemplo), los sistemas de recomendación se han transformado en un commodity más que todos desean tener.
En Almundo trabajamos para construir sistemas de recomendación que nos ayuden a descubrir qué destinos le interesa a cada uno de nuestros viajeros.
La recomendación en Almundo
Desde el equipo de Personalización, conformado por Luis Brassara, German Guzelj y yo, estudiamos día a día este y otro tipo de problemas relacionados con Data Science y Machine Learning, y desarrollamos soluciones con el fin de personalizar la experiencia de cada viajero.
Actualmente estamos trabajando en diferentes proyectos que involucran la personalización a través de recomendaciones online y offline, campañas inteligentes de segmentación de usuarios, modelos de ranking bayesianos, learning to rank y otros más que están aún en etapa de evaluación.
Todas estas soluciones basadas en algoritmos de aprendizaje automático dependen de la calidad de los datos. Es por esto que en el equipo también trabajamos mucho para asegurarnos de tener una excelente arquitectura que nos permita tener datos de mucha calidad, manteniendo siempre una alta disponibilidad de los mismos.
Sobre el desafío
Este año estuvimos presentes nuevamente en la Escuela de Ciencias Informáticas (ECI)[1] del Departamento de Computación de la UBA. En el contexto de este evento, que se organizó desde fines de julio a principios de agosto, decidimos organizar una charla sobre sistemas de recomendación[2] y un desafío para la comunidad técnica. Nos pareció interesante que estuviera relacionado con Data Science y Machine Learning, en particular sistemas de recomendación.
Quisimos armar una versión simple del problema, con datos que resultaran de interés, donde hubiera lugar para probar diferentes algoritmos, además de que resultase un ejercicio desafiante.
La versión final del problema planteaba lo siguiente: dada una lista de viajeros (con edad, género y país de origen), y los destinos que buscaron (por ejemplo, Miami, Nueva York y Cancún), se debía construir una lista de destinos para recomendar a cada uno de ellos.
Los participantes contaban con una lista de casi 400.000 viajeros para entrenar sus algoritmos, y debían enviar recomendaciones para una lista de más de 150.000. Además, podían acceder a un catálogo con los destinos válidos para recomendar, con información particular de cada destino. Incluso podían incorporar datos extra que sean de ayuda para desarrollar soluciones, siempre y cuando sean datos públicos.
El objetivo era claro y preciso: había distintas métricas que otorgaban puntos según la calidad de las recomendaciones; quien sumara más puntos al final de la competencia, se llevaría de premio un viaje a Madrid para dos personas.
Es importante decir que, antes de publicar los datos, quitamos algunas búsquedas de cada viajero y las guardamos en privado sin que los competidores pudieran acceder a ellas. De esta manera, cuando los participantes enviaban las listas de recomendaciones para viajeros, podíamos comparar contra las búsquedas que quitamos, de manera de comprobar si había coincidencia, entre otras cosas.
Era posible enviar hasta tres listas con recomendaciones por día durante una semana. Se podía utilizar cualquier lenguaje de programación, y cualquiera de los métodos conocidos de recomendación basados en Collaborative Filtering o Content-based Filtering[3]. Incluso versiones híbridas de estos, así como también ideas innovadoras o ad-hoc para resolver el problema, eran bienvenidas.
Luis Brassara dando la charla del Challenge en la ECI 2018, Escuela de Ciencias Informáticas (UBA)
La solución ganadora
En total hubo casi 450 inscriptos y los competidores hasta último momento pelearon por el podio. El ganador fue Rodolfo Edelmann. Pueden acceder al código de la solución que él mismo compartió acá. ¡Felicitaciones una vez más, estimado Rodolfo!
Creemos que estos eventos son muy importantes, en particular dentro de la comunidad técnica, para fomentar el trabajo de disciplinas relacionadas con la ciencia de datos en el país. Además, generan mucha visibilidad sobre el tipo de trabajo que hacemos en Almundo para personalizar las búsquedas de los usuarios.
Estamos muy contentos por la repercusión que tuvo el desafío, que incluso tuvimos que reabrir en modo playground para que algunos competidores pudieran seguir probando soluciones. (Ya sin el premio de los pasajes a Madrid, claro).
Agradecemos a todos por el tiempo y las ganas que pusieron durante el desafío. Creemos sin dudas que esta es una de las mejores maneras de aprender.
Ya estamos pensando ideas para el desafío del año próximo.
— — — -
[1] https://www.dc.uba.ar/events/eci/2018
[2] https://es.slideshare.net/LuisBrassara/evaluacion-de-sistemas-de-recomendacion
[3] https://en.wikipedia.org/wiki/Recommender_system#Approaches
| En busca de la recomendación perfecta: Almundo Recommendation Challenge 2018 | 67 | en-busca-de-la-recomendación-perfecta-almundo-recommendation-challenge-2018-10c96f3275dd | 2018-09-18 | 2018-09-18 15:01:53 | https://medium.com/s/story/en-busca-de-la-recomendación-perfecta-almundo-recommendation-challenge-2018-10c96f3275dd | false | 836 | Notas de interés, tecnología e innovación | null | almundo.argentina | null | Nerds Almundo | null | nerds-almundo | TECNOLOGIA | almundo_ar | Almundo | almundo | Almundo | 11 | Almundo blog | Primera Comunidad Global de Expertos en Viajes. ¡Comparte tu “Small Data”: los pequeños detalles que hacen grande tu viaje con #AlmundoCommunity! | 42ba954395c0 | almundo.blog | 42 | 17 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2017-12-06 | 2017-12-06 13:08:19 | 2017-12-06 | 2017-12-06 13:39:35 | 0 | false | en | 2017-12-06 | 2017-12-06 13:39:35 | 8 | 10cbb97037bb | 1.841509 | 0 | 0 | 0 | Data science/Machine learning is the hot trending topics of today and everyone wants to get on the ride. But Data science can be daunting… | 5 | Data science self-study learning path
Data science/Machine learning is the hot trending topics of today and everyone wants to get on the ride. But Data science can be daunting to beginners as the field is relatively new and really vast. Especially for the ones looking to change the domain of their job by following the self study path. Anyone who is on the self study path soon finds himself/herself in an ocean of resources and it is very easy to get lost. This is my attempt at summarizing self learning path which I followed to transition from software engineer to a data scientist.
Data science is a blanket term used for multiple technologies and paradigms. Process of learning data science can be broken down into 3 chunks: Theory, Tools and techniques. All these three are interconnected, related and are essential. In my personal experience many people start the journey with absolute theory(mostly Andrew NG’s course) and fall out of it in between as theory/math becomes complex or many just doesn’t appreciate the need of it.
Why can’t I just use scikit library and call a method to classify? Why should I learn so much theory?
This is the question for most of software engineers who are used to leverage libraries or ‘one line’ code to achieve everything. Well, I was one of them.
Generally learning starts with theory but considering huge and complicated theory space of data science domain, I propose bottom up approach as theory can be daunting at first and we cannot appreciate the necessity of its understanding before understanding data science space in general. This has personally worked for me hoping it could help someone else as well. Let me know what you think in the comments!
The learning path
Understand the data science domain and its capabilities(Tools) -> Get hands dirty on a real data science problem(Techniques) -> Deep dive (Theory+ Revisting techniques + Revisiting tools)
Phase 1: Understand data science domain and its capabilities
· 2–5 hours — free: Tutorial competitions on Kaggle.com like Predicting survival of Titanic passengers
· 50–60 hours — free: Udacity’s Intro to machine learning course by Sebastian Thrun and Katie which is free and part of bigger nano degree program
Phase 2: Get hands dirty on a real data science problem
Fail early, fail fast
· 20–30 hours — free: Attempt a real-world competition with real data on Kaggle.com which is an excellent community of data science enthusiasts and experts.
· At this stage, we will know the capabilities of data science and would have got a flair for it. Also, we would be in the state to appreciate the need of theory understanding
Phase 3: Deep dive— Free and paid versions available for each of these
· Statistics: https://www.udacity.com/course/intro-to-inferential-statistics--ud201 and https://www.udacity.com/course/intro-to-descriptive-statistics--ud827
· Linear algebra: https://www.youtube.com/playlist?list=PLE7DDD91010BC51F8
· Machine learning: https://www.coursera.org/learn/machine-learning
· Visualization: https://www.udacity.com/course/data-visualization-and-d3js--ud507
| Data science self-study learning path | 0 | data-science-self-study-learning-path-10cbb97037bb | 2018-04-05 | 2018-04-05 14:31:35 | https://medium.com/s/story/data-science-self-study-learning-path-10cbb97037bb | false | 488 | null | null | null | null | null | null | null | null | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Raghavendra R M | null | e0261e756299 | raghu.949 | 3 | 2 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | 7a943a67770f | 2018-04-01 | 2018-04-01 07:39:02 | 2018-04-01 | 2018-04-01 15:50:47 | 1 | false | en | 2018-04-12 | 2018-04-12 17:35:16 | 45 | 10cf3cf790b1 | 2.513208 | 13 | 0 | 0 | Nitish Kumar is 5th Year Undergraduate student from the department of Geology and Geophysics, IIT Kharagpur.He recently got placed at Oyo… | 5 | Nitish Kumar
Nitish Kumar is 5th Year Undergraduate student from the department of Geology and Geophysics, IIT Kharagpur.He recently got placed at Oyo Rooms and also his team was one among 3 from India to be qualified for the International Data Analysis Olympiad to be held at Moscow.Here he shares his journey to success and path he followed.
1)How you got started with Data Analytics?
Ans)In 2nd semester of 3rd year I started learning Data Science by doing some online courses such as Data Science Specialization , Analytics Edge, Machine Learning, Analytics Vidhya.
After that I applied my learning by practicing on some Data sets such as
a)Titanic.
b)Big Mart sales.
c)Loan Prediction.
d)Black friday.
2)You did Internship in Data Science field in 3rd year?
Ans)Yes, I applied for internships through Linked In and other carrier websites and finally did an unpaid internships in 3rd year summer in Data Science field.
3)Your journey in 4th Year??
Ans)In 4th year,I started taking part in several competitions of Data analysis.
Initially I was not able to perform well but after 4–5 competitions my performance got improved.
4)What are the websites where the Data analysis competition is hosted?
Ans)i)Datahack.
ii)Kaggle.
iii)DrivenData.
iv)CrowdAnalytix.
v)Tunedit.
vi)Hackerearth.
vii)Hackerrank.
After taking part in such competitions one can also take part in International competitions such as
i)DataMining.
ii)Datascience.
iii)Idao.
5)What things you did in order to Improve the performance in competitions?
Ans) i)I started reading blogs related to Data Science.
ii)I read codes of different winner of the competition, that helped me a lot.
iii) I also started making notes.We try so many things in a Datasets.It’s important to note down our approach and learning.
Note:-Here are the links of resources as suggested and used by Nitish.
Winning approaches:
http://blog.kaggle.com/category/winners-interviews/
https://www.analyticsvidhya.com/blog/tag/winners-approach/
https://www.hackerearth.com/practice/machine-learning/challenges-winning-approach/machine-learning-challenge-one/tutorial/
http://rohanrao91.blogspot.in/search/label/Data%20Science
https://bitbucket.org/threecourse/kaggle-wiki/wiki/Past%20Competitions%20and%20Solutions%20(July%202016-%20)
Winner codes:
http://www.chioka.in/kaggle-competition-solutions/
https://github.com/analyticsvidhya
https://github.com/kunalj101
https://github.com/vopani
https://github.com/SudalaiRajkumar
https://github.com/binga
https://github.com/sonnylaskar/Competitions
https://github.com/bishwarup307
https://github.com/aayushmnit/Competitions
https://github.com/mlandry22
https://github.com/supreethmanyam
https://github.com/sadz2201
Additional learning resources:
https://www.hackerearth.com/practice/machine-learning
http://blog.kaggle.com/
https://www.analyticsvidhya.com/blog/
http://www.claoudml.co/
https://machinelearningmastery.com/blog/
https://www.r-bloggers.com/
https://www.kdnuggets.com/
https://docs.google.com/document/d/1edqHaah1hAQi1NKun-3IjDBqeuuINgJNTzBRUELiK5k/edit
https://www.dataquest.io/blog/machine-learning-tutorial/
General Questions
6)What’s your C.G?
Ans)It’s 7.3.Actually my S.G never went beyond 8.
7)Other Involvement in college?
Ans)I was the Tech secretary of my hall in 2nd year.
8)How should one prepare for the Data science jobs?
Ans)Take part in competitions because that is one of the most important way you can showcase your skills to the company.If you have performed well in competitions then it gives the surety to companies that you can work well in real datasets.
Second thing is companies mostly ask theoretical questions in the Interviews such as What are the assumptions of logistics regression or how random forest works.So it’s very important that you know the basic maths and logics behind every algorithm.
9)What should one do if he is getting bored?
Ans)It’s common in this field.You may get bored especially when your result in competition is not improving.So you should keep trying new things and once your Idea improves your performance you will automatically get the motivation.
Very Specific Technical questions.
10)What is your general approach towards datasets?
Ans) I first understand the domain of Dataset then I do Data cleaning.After that I do data visualisation followed by feature engineering and finally I apply algorithm.
11)Any tips you want to give for winning the competition
Ans)Do feature engineering very well and apply boosting algorithms such as XGBoost, Adaboost etc.Most of the time boosting algorithms give you the best results.
12)Does stacking Help in competitions?
Ans)It does not improve the performance drastically but If you are in top 20 and want to go in top 5 then you can try stacking.
Here is the link to his resume
| Nitish Kumar | 40 | nitish-kumar-10cf3cf790b1 | 2018-05-09 | 2018-05-09 12:21:19 | https://medium.com/s/story/nitish-kumar-10cf3cf790b1 | false | 613 | This publication is the source of Information & Inspiration for college students which primarily includes the interviews and suggestions of your seniors about their journey. | null | Beakone | null | Beakone | beakone | null | bea_kone | Machine Learning | machine-learning | Machine Learning | 51,320 | Pravin Mishra | null | caa9116a0339 | pravinmishra1796 | 22 | 22 | 20,181,104 | null | null | null | null | null | null |
|
0 | null | 0 | null | 2018-03-13 | 2018-03-13 09:53:21 | 2018-03-13 | 2018-03-13 09:54:45 | 1 | false | en | 2018-03-13 | 2018-03-13 09:54:45 | 3 | 10d35f2d2271 | 0.743396 | 0 | 0 | 0 | Our referral program is starting. Every member that register on our web page (Register) will have an option for generating a referral link… | 4 | Crypto Angel Referral Program
Our referral program is starting. Every member that register on our web page (Register) will have an option for generating a referral link and will get 7% bonus of each ANGEL transaction that uses that link, likewise each contributor will also get additional 1% bonus for using the referral link.
For example: If you get one of your friends to sign up for whitelisting using your referral link and he buys 1000 ANGELs (notice 1ETH= 1000ANGEL) you will get 7 % bonus from his purchase in ANGELs (70 ANGELs) and your friend will get additional 1% bonus (10 ANGELs) on his purchase.
Rewards will be issued to the ETH address that you will provide when you sign up, and the reward will be issued after the crowdsale closes and will be distributed within 2 weeks.
More info on:
https://www.crypto-angel.com/
| Crypto Angel Referral Program | 0 | crypto-angel-referral-program-10d35f2d2271 | 2018-03-13 | 2018-03-13 09:54:46 | https://medium.com/s/story/crypto-angel-referral-program-10d35f2d2271 | false | 144 | null | null | null | null | null | null | null | null | null | Blockchain | blockchain | Blockchain | 265,164 | Crypto Angel | AI Platform, Designed To Enhance Human Thinking, Planing And Decision-Making Process | 182a2f11b4ad | cryptoangelcoin | 15 | 1 | 20,181,104 | null | null | null | null | null | null |
0 | import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
dataset= pd.read_csv(‘Data.csv’)
X= dataset.iloc[:,:-1].values
y= dataset.iloc[:,3].values
| 4 | 9b4c54af66f9 | 2018-09-17 | 2018-09-17 13:48:17 | 2018-09-19 | 2018-09-19 10:21:04 | 1 | false | en | 2018-09-19 | 2018-09-19 10:31:42 | 2 | 10d8553ad616 | 2.830189 | 0 | 0 | 0 | Data preprocessing is a crucial step before making a machine learning model. The model won’t work properly without it. It can be a bit… | 4 | Independent and Dependent Variables (#1)
Data preprocessing is a crucial step before making a machine learning model. The model won’t work properly without it. It can be a bit boring for some, but is a necessary step in order to be able to work on a machine learning model.
You can get the dataset from www.superdatascience.com/machine-learning.
This is what the dataset looks like
The dataset we have here has 4 columns-
Country
Age
Salary
Purchased
The variables here can be classified as independent and dependent variables. The independent variables are used to determine the dependent variable. In our dataset, the first three columns are independent variables which will be used to determine the dependent variable, which is the fourth column.
Before getting started, make sure you have Anaconda installed. If you don’t have it, follow the tutorial here.
Installing Python and Anaconda on Windows
This tutorial will show you how to install Python (via Anaconda) on your machine.hackernoon.com
Importing Libraries
Python library is a collection of functions and methods that allows you to perform lots of actions without writing your own code. These libraries can be imported and this enables us to work on our code a lot faster.
The wheel can be taken as an example to understand this. It has been already invented, so the person who invented the car didn’t waste his time reinventing the wheel. Here, the car is an invention which has imported the wheel. So, the wheel is a module which can be used in other inventions as it is.
The libraries that we’ll be using here are numpy, matplotlib.pyplot(will be used in the later chapters) and pandas. The pandas library is used to import and manage the datasets.
Importing the dataset
First, we need to set the appropriate working directory using the file explorer. You’ll find this on the top right of the Spyder window. The working directory is the directory in which your dataset is stored.
Here, we’ll be using the pandas library for importing the dataset.
Execute the code by selecting the line of code and pressing Ctrl and Enter together.
In the variable explorer, the dataset will be visible. It can be accessed by double-clicking it.
Now, we need to differentiate the matrix of features containing the independent variables from the dependent variable ‘purchased’.
Creating the matrix of features
The matrix of features will contain the variables ‘Country’, ‘Age’ and ‘Salary’.
The code to declare the matrix of features will be as follows:
In the code above, the first ‘:’ stands for the rows which we want to include, and the next one stands for the columns we want to include. By default, if only the ‘:’ (colon) is used, it means that all the rows/columns are to be included. In case of our dataset, we need to include all the rows (:) and all the columns but the last one (:-1). We have finished creating the matrix of features X. Execute the line. It can now be observed that the variable explorer shows the variable X. It can be accessed by double-clicking the ‘X’ in the variable explorer.
Creating the dependent variable vector
We’ll be following the exact same procedure to create the dependent variable vector ‘y’. The only change here is the columns which we want in y. As in the matrix of features, we’ll be including all the rows. But from the columns, we need only the 4th (3rd, keeping in mind the indexes in the python). Therefore, the code the same will look as follows:
After execution, the variable ‘y’ will be shown in the variable explorer and it may be accessed by double-clicking the ‘y’ in the same.
This completes the tutorial for differentiating the dataset into the features (or the independent variables) and the dependent variable.
Do let me know how you liked this tutorial!!
The next tutorial will cover how to handle missing data. The link will be added here when it is published.
Please subscribe to updates for this series to get notified when the next article is out :)
Happy learning!
| Independent and Dependent Variables (#1) | 0 | independent-and-dependent-variables-1-10d8553ad616 | 2018-09-19 | 2018-09-19 10:31:42 | https://medium.com/s/story/independent-and-dependent-variables-1-10d8553ad616 | false | 697 | The journey of a machine learning enthusiast | null | null | null | Machine Learner | null | machine-learner | MACHINE LEARNING,BEGINNERS GUIDE,TECHNOLOGY | null | Machine Learning | machine-learning | Machine Learning | 51,320 | Chinar Amrutkar | Computer Science student. Machine Learning enthusiast. Learning human psychology by simple observation, one person at a time. | 7b3eae6219ed | chinar_amrutkar | 27 | 29 | 20,181,104 | null | null | null | null | null | null |
0 | null | 0 | null | 2018-04-11 | 2018-04-11 15:23:40 | 2018-04-11 | 2018-04-11 15:45:23 | 1 | false | en | 2018-04-11 | 2018-04-11 15:48:13 | 0 | 10da0a70f519 | 0.701887 | 2 | 0 | 0 | Hello guys, | 1 | Correlation (vs) Regression :
Hello guys,
i feel basics are building blocks for impeccable knowledge.
and this is one of the common questions asked in interviews
introduction:
Correlation and regression both are analysis of multivariate data. multivariate analysis is nothing but the process of analysis variables.
correlation helps us to understand the association (or) absence of relation between variables.
adv: Help us to identify the potential variable in a data set and let you know whether the variables are negatively associated or positively ?
Regression :
regression analysis predicts the value of dependent variable based on known values of independent variable, and it will help us to fit a line in between the data points(observations).
Regression indicates the impact of one unit change in the known variable (x) on the estimated variable (y).
| Correlation (vs) Regression : | 52 | correlation-vs-regression-10da0a70f519 | 2018-04-20 | 2018-04-20 17:05:52 | https://medium.com/s/story/correlation-vs-regression-10da0a70f519 | false | 133 | null | null | null | null | null | null | null | null | null | Data Science | data-science | Data Science | 33,617 | Tarun Yellogi | null | 56f27535f3e5 | tharunyellogi | 9 | 21 | 20,181,104 | null | null | null | null | null | null |
0 | Que 1# Which attribute should choose as for being the Root/Parent Node? > After deciding the root node?
Que 2# When should I Start/Stop Split further Node?
Entropy(S) = -p(positive)log2 p(positive) — p(negative) log2 p(negative)
Information Gain (n) =
Entropy(x) — ([weighted average] * entropy(children for feature))
outlook = { sunny, overcast, rain }
temperature = {hot, mild, cool }
humidity = { high, normal }
wind = { weak, strong }
Entropy(S) = — (9/14) Log2 (9/14) — (5/14) Log2 (5/14) = 0.940
Gain(S,Wind) = Entropy(S) — { (8/14)* Entropy(Sweak) — (6/14)* Entropy(Sstrong) }
= 0.940 — { (8/14)*0.811 — (6/14)*1.00 } = 0.048
Entropy(Sweak) = — (6/8)*log2(6/8) — (2/8)*log2(2/8) = 0.811
Entropy(Sstrong) = — (3/6)*log2(3/6) — (3/6)*log2(3/6) = 1.00
S sunny = {D1, D2, D8, D9, D11} = 5 examples from the table with outlook = sunny
Gain(Ssunny, Humidity) = 0.970
Gain(Ssunny, Temperature) = 0.570
Gain(Ssunny, Wind) = 0.019
IF outlook = sunny AND humidity = high THEN play baseball = no
IF outlook = rain AND humidity = high THEN play baseball = no
IF outlook = rain AND wind = strong THEN play baseball = yes
IF outlook = overcast THEN play baseball = yes
IF outlook = rain AND wind = weak THEN play baseball = yes
| 22 | d3b03a0c26c4 | 2018-01-25 | 2018-01-25 06:42:34 | 2018-01-28 | 2018-01-28 16:29:15 | 9 | false | en | 2018-02-01 | 2018-02-01 07:02:28 | 3 | 10dbb3472ec4 | 10.090566 | 13 | 0 | 0 | After learning the process of Decision Tree for M.L, I came to know that if we try hard to understand something and that might be beyond… | 5 |
Decision Tree = A Light Intro to Theory + Math + Code
After learning the process of Decision Tree for M.L, I came to know that if we try hard to understand something and that might be beyond our area of subject, we would defiantly be successful if we keep persistence and perseverance. There is nothing in the world that you can’t learn. You just need a strong determination to get into the core knowledge of any domain.
The ability to learn is a hallmark of intelligence
Well, after such a cliche intro here I’m presenting you the Decision Tree in the simpler way possible. In this part, we shall discuss the Theory, Math, and Code. This article is clearly divided into three Section. In the first section you will know the Fundamental & Visual Concept of Decision Tree, In the second part, we will understand Entropy, Gini Impurity, Information Gain and see a pseudo code to understand this better. Finally, we will see a very famous library called sci-kit Learn and get to know the python Code its self.
What is Decision Tree? How does it work?
#1. Decision tree is a type of supervised learning algorithm (having a pre-defined target variable) that is mostly used in classification problems. It works for both categorical and continuous input and output variables. In this technique, we split the population or sample into two or more homogeneous sets (or sub-populations) based on most significant splitter / differentiator in input variables.
#2. It’s a Tree-structure classifier, a decision tree can be used to visually and explicitly represent decisions and decision making. As the name goes, it uses a tree-like model of decisions. Though a generally used tool in data mining, statistics for deriving a strategy to reach a particular goal, its also widely used in machine learning, which will be the main focus of this article.
#3 Decision Tree can be used for both classification ( Discrete Value Yes/No, 0 or 1) and regression( Continues Value )problem.
Classical Decision Tree is around from the decades but still its most powerful modern variation like Random Forest, Gradient Boosting is the most useful machine learning Technique. (Since the goal of this article is not to teach you about the random forest, gradient boosting we will keep our focus on Decision Tree.)
Example of Decision Tree
At this point in time, you might come to know about Visual Idea of a Decision Tree.
Now let’s get into deep drive and understand the fundamental concepts. To know the concepts you need to at least know few Terminology which is mention below.
Root Node: It decides the entire population or sample data should further get divided into two or more homogeneous sets.
Splitting: It is a process of dividing a node into two or more sub-nodes.
Decision Node: This node decides whether/when a sub-node splits into further sub-nodes or not.
Leaf/ Terminal Node: Nodes do not split is called Leaf or Terminal node
So Now you should know that Decision Tree can be very intuitive. In case of the real life of problem-solving using Decision Tree, You only need to ask two question.
That’s it.!! This was the Theoretical Intro of Decisions Tree. We have successfully understood the aspects of the decision tree. Now we shall discuss how we could make a complex computational decision tree model based upon mathematical understanding.
In following sections we define few mathematical terms related to decision tree and then perform those calculation with sample example.
Most algorithms that have been developed for learning decision trees are variations of the core algorithm that employs a top down, greedy search through the possible space of decision trees.
The major Decision Tree implementations:
ID3, or Iternative Dichotomizer 3, was the first of three Decision Tree implementations developed by Ross Quinlan
CART, or Classification And Regression Trees is often used as a generic acronym for the term Decision Tree, though it apparently has a more specific meaning. In sum, the CART implementation is very similar to C4.5; the one notable difference is that CART constructs the tree based on a numerical splitting criterion recursively applied to the data, whereas C4.5 includes the intermediate step of constructing rule sets.
Later in another article we will be discussing on CART (Classification And Regression Trees )
C4.5, Quinlan’s next iteration. The new features (versus ID3) are: (i) accepts both continuous and discrete features; (ii) handles incomplete data points; (iii) solves over-fitting problem by (very clever) bottom-up technique usually known as “pruning”; and (iv) different weights can be applied the features that comprise the training data. Of these, the first three are very important — and i would suggest that any DT implementation you choose have all three. The fourth (differential weighting) is much less important
C5.0, the most recent Quinlan iteration. This implementation is covered by patent and probably as a result, is rarely implemented (outside of commercial software packages). I have always been skeptical about the improvements claimed by its inventor (Ross Quinlan) — for instance, he claims it is “several orders of magnitude” faster than C4.5. Other claims are similarly broad (“significantly more memory efficient”) and so forth.
CHAID (chi-square automatic interaction detector) actually predates the origianl ID3 implementation by about six years (published in a Ph.D thesis by Gordon Kass in 1980). The R Platform has a Package called CHAID which includes excellent documentation
In this article, we will be focussing on the Iterative Dichotomiser 3, commonly know as the ID3 algorithm because we need to have a ground up knowlegde about Algorithm. Variants/Extensions of the ID3 algorithm, such as C4.5, CART are very much in practical use today.
The ID3 algorithm builds the tree top-down, starting from the root by meticulously choosing which attribute that will be tested at each given node. Each attribute is evaluated through statistical means as to see which attribute splits the dataset the best. The best attribute is made the root, with it’s attribute values branching out. The process continues with the rest of the attributes. Once an attribute is selected, it is not possible to backtrack.
Knowing The Attribute
Pruning
The performance of a tree can be further increased by pruning. It involves removing the branches that make use of features having low importance. This way, we reduce the complexity of tree, and thus increasing its predictive power by reducing overfitting.
Pruning can start at either root or the leaves. The simplest method of pruning starts at leaves and removes each node with most popular class in that leaf, this change is kept if it doesn’t deteriorate accuracy. Its also called reduced error pruning. More sophisticated pruning methods can be used such as cost complexity pruning where a learning parameter (alpha) is used to weigh whether nodes can be removed based on the size of the sub-tree. This is also known as weakest link pruning.
Entropy
Entropy is a measure of unpredictability of the state, or equivalently, of its average information content.
Entropy is a statistical metric that measures the impurity. Given a collection of S, which contains two classes: Positive and Negative, of some arbitrary target concept, entropy with respect to this boolean classification is:
Where ppositive is the proportion (probability) of positive examples in S and pnegative is the proportion of negative examples in S. Entropy is 1 if the collection S contains equal number of examples from both classes, Entropy is 0 if all the examples in S contain the same example.
The entropy values vs the probabilities for a collection S follows a parabolic curve:
One interpretation of entropy is that, entropy specifies the minimum number of bits required to encode the classification of any member of a collection S.
In general terms, when the classes of the target function may not always be boolean, entropy is defined as
What is Information Gain (IG)?
Now that we know what entropy is, let’s look at an attribute that is more attached to the building of the decision tree — Information Gain. Information gain is a metric that measures the expected reduction in the impurity of the collection S, caused by splitting the data according to any given attribute. Whilst building the decision tree, the information gain metric is used by the ID3 algorithm to select the best attribute — the attribute the provides the “best split” — at each level.
It measures the expected reduction in entropy. It decides which attribute goes into a decision node. To minimise the decision tree depth, the attribute with the most entropy reduction is the best choice!
More precisely, the information gain Gain(S, A) of an attribute A relative to a collection of examples S is defined as:
S = Each value v of all possible values of attribute A
Sv = Subset of S for which attribute A has value v
|Sv| = Number of elements in Sv
|S| = Number of elements in S
Let’s see how these measures work!
Suppose we want ID3 to decide whether the weather is good for playing baseball. Over the course of two weeks, data is collected to help ID3 build a decision tree. The target classification is “should we play baseball?” which can be yes or no.
See the following table.
The weather attributes are outlook, temperature, humidity, and wind speed. They can have the following values:
We need to find which attribute will be the root node in our decision tree.
For each attribute, the gain is calculated and the highest gain is used in the decision.
Gain(S, Outlook) = 0.246
Gain(S, Temperature) = 0.029
Gain(S, Humidity) = 0.151
Clearly, the outlook attribute has the highest gain. Therefore, it is used as the decision attribute in the root node.
Since outlook has three possible values, the root node has three branches (sunny, overcast, rain). The next question is, What attribute should be tested at the sunny branch node? Since we’ve used outlook at the root, we only decide on the remaining three attributes: humidity, temperature, or wind.
Humidity has the highest gain; therefore, it is used as the decision node. This process goes on until all data is classified perfectly or we run out of attributes.
The decision tree can also be expressed in rule format:
That’s all mathematical understanding you need to understand Decision Tree. If you think this concept is overwhelming then don’t be. Take it as a learning process like any other skills to learn you need to give some Time & constant Motivation.
Now we have been reached the final part, yes the Coding Part. Take time congratulate yourself you have come along so far. Here I’ve used scikit-learn(Machine learning library)please don’t be disappointed after knowing that if I was going to use some BLACK BOX API then what was the purpose to learn theory in depth and length. Well, we will answer this at the end of the code. First, let’s see the code.
Results
This might be astonishing to you that by using some Black Box API we can automatically generate decision Tree. Well, Actully It Is quite fascinating, but hold your fascination and consider this as a basic thing to immediately get started with a decision tree.
At this point in time, you should have a foundational understanding of Decision Tree Classifier. Here in the coding section, we have used most popular python library a.k.s Scikit-learn. But you might be thinking we’ve learned a lot of theory what was the use for that?
Well, To know the Hyperparameters Optimization. Optimization is core to Machine Learning. We will discuss this in the later article. Apart from that, we are expanding various genres of machine learning so to speak your learning strategy should be dynamic and must cope with current time.
On a side note, you should ask yourself this question before getting into Machine Learning.
Why I’m doing this? What Problem do you want to Solve? Because this will help you to keep steady and constant on your path of learning anything. I would say. “ Do what you want to do/love to do, Otherwise you would end up in living have to do”. It might sounds cliche but be clear on your path of doing anything you want.
Just Follow your Heart. Never Stop Learning. Never Stop Growing.
Extravaganza (Bonus)
Decision Tree Use Cases
Building knowledge management platforms for customer service that improve first call resolution, average handling time, and customer satisfaction rates
In finance, forecasting future outcomes and assigning probabilities to those outcomes
Binomial option pricing predictions and real option analysis
Customer’s willingness to purchase a given product in a given setting, i.e. offline and online both
Product planning; for example, Gerber Products, Inc. used decision trees to decide whether to continue planning PVC for manufacturing toys or not
General business decision-making
Loan approval
| Decision Tree = A Light Intro to Theory + Math + Code | 44 | decision-tree-a-light-intro-to-theory-math-code-10dbb3472ec4 | 2018-05-13 | 2018-05-13 13:30:23 | https://medium.com/s/story/decision-tree-a-light-intro-to-theory-math-code-10dbb3472ec4 | false | 2,356 | A collection of articles published for the people who are looking for getting started the tutorials on Data Science, Machine Learning, Internet of Things, Cloud & few on a count. Intelligence Learning Community. | null | null | null | MetaInsights | meta-design-ideas | DESIGN,MACHINE LEARNING,DATA SCIENCE,QUANTUM COMPUTING | souman_meta | Machine Learning | machine-learning | Machine Learning | 51,320 | Souman Roy | Machine Learning & Data Science practitioner | Interaction and Experience Designer | Founder MetaDesignIdeas | ed3a4bb21d24 | soumanroy.gsa | 312 | 19 | 20,181,104 | null | null | null | null | null | null |
Subsets and Splits